<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Bounds (B.O.O.P.)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image processing. The TURTLE already has good image-processing software on its internal computer. This software is the product of years of development and has been tested thoroughly, which is why we use it as is and do not alter it. Ideally, we would also use this software to process the images from the drone. However, understanding years' worth of code in order to make it usable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we therefore decided to use MATLAB's Image Processing Toolbox to process the drone images. The images coming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects, and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
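As an illustration of this approach, a minimal MATLAB sketch of detecting an orange ball in a single AI-ball frame via the YCbCr color space is given below. The frame file name and the Cb/Cr thresholds are assumptions for illustration, not the tuned values used on the real system.<br />
<pre>
% Sketch: detect an orange ball in one AI-ball frame via the YCbCr color space.
% The frame file and the Cb/Cr thresholds are illustrative assumptions.
rgb = imread('droneFrame.png');        % assumed example frame from the AI-ball
ycc = rgb2ycbcr(rgb);                  % Y = luma, Cb/Cr = chroma channels
Cb  = ycc(:,:,2);
Cr  = ycc(:,:,3);

% Orange/yellow pixels have a high Cr and a low Cb value.
ballMask = Cr > 150 & Cb < 110;

% Clean up the mask and keep the largest blob as the ball candidate.
ballMask = bwareaopen(ballMask, 50);   % remove tiny speckles
stats    = regionprops(ballMask, 'Centroid', 'Area');
if ~isempty(stats)
    [~, idx]  = max([stats.Area]);
    ballPixel = stats(idx).Centroid;   % [u v] ball position in image coordinates
end
</pre>
The same chroma-thresholding idea applies to the green field and the white lines, with different Cb/Cr ranges.<br />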
<br />
=== Detect lines ===<br />
<br />
=== Detect balls ===<br />
<br />
=== Detect objects ===<br />
<br />
== Refereeing ==<br />
<br />
=== B.O.O.P. ===<br />
<br />
=== Collision detection ===<br />
<br />
<br />
== Locating skills ==<br />
<br />
=== Locate agents ===<br />
<br />
=== Locate objects ===<br />
<br />
<br />
== Path planning ==<br />
<br />
The path-planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent. For instance, this block sends 'detect ball' to agent A (the drone) and 'locate player' to agent B as a task. The path-planning block then requests from the World Model the latest information on the target object's position and velocity, as well as the position and velocity of the agents. Using this information, the path-planning block generates a reference point for the agent controller. As shown in Fig.1, it is assumed that the World Model is able to provide the position and velocity of objects such as the ball, whether or not they have been updated by an agent camera. In the latter case, the particle filter gives an estimate of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about the object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
Two aspects have been addressed in the path-planning block. The first is the case of multiple drones, where collisions between them must be avoided. The second is generating an optimal path as reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent to locate an object in the field. Subsequently, the World Model provides the path planner with the latest update of the position and velocity of that object. The path-planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a reasonable choice when the agent and the object are relatively close to each other. However, it is possible to take the velocity vector of the object into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between drone and ball is large, the drone should track a position ahead of the object so that it meets the object at the intersection of the velocity vectors. Using the current ball position as reference input would result in a curved trajectory (red line). However, if the estimated ball position some time ahead is sent as reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but more computational effort is needed. The problem that arises is which optimal time ahead t0 should be set as the desired reference. To solve this, we require a model of the drone motion, including the controller, to calculate the time it takes to reach a certain point given the initial condition of the drone. Then, in the searching algorithm, the time to target (TT) of the drone is calculated for each time step ahead of the ball (see Fig.3). The target position is simply calculated from the time ahead. The reference position is then the position that satisfies the equation t0 = TT. Hence, the reference position is [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. Furthermore, the same strategy can be applied to the ground agents, which move only in one direction. For the ground robot, the reference value should be determined only in the moving direction of the turtle; hence, only the X component (turtle moving direction) of the position and velocity of the object of interest must be taken into account. A minimal sketch of the searching algorithm is given after Fig.3.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
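The sketch below summarizes the searching algorithm in MATLAB, assuming a constant-velocity ball model. The time-to-target function is assumed to come from the identified drone model with its controller, so it is only a placeholder here, and the grid of candidate times is an arbitrary choice.<br />
<pre>
% Sketch of the searching algorithm: find the look-ahead time t0 such that the
% drone's time to target TT equals t0, assuming a constant-velocity ball model.
% timeToTarget(dronePos, droneVel, target) is a placeholder for the identified
% drone model with its controller.
function [ref, t0] = referencePoint(ballPos, ballVel, dronePos, droneVel, timeToTarget)
    tGrid = 0:0.05:3;                      % candidate look-ahead times [s] (assumed grid)
    best  = inf;
    for t = tGrid
        target = ballPos + ballVel * t;    % predicted ball position t seconds ahead
        TT     = timeToTarget(dronePos, droneVel, target);
        if abs(TT - t) < best              % keep the t that best satisfies t0 = TT
            best = abs(TT - t);
            t0   = t;
            ref  = target;                 % reference [x(t+t0), y(t+t0)]
        end
    end
end
</pre>
A coarser time grid reduces the computational effort mentioned above, at the cost of a less accurate intersection point.<br />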
<br />
=== Collision avoidance ===<br />
When multiple drones are flying above the field, path planning should create paths for the agents in such a way that collisions between them are avoided. This is done in a collision-avoidance block that has a higher priority than the optimal path planning computed from the objectives of the drones (see Fig.4). The collision-avoidance block is triggered when the drone states meet certain criteria that indicate an imminent collision. The supervisory control then switches to the collision-avoidance mode to keep the drones from getting closer. This is achieved by sending a relatively strong command to the drones in a direction that maintains a safe distance. The commanded velocity must be perpendicular to the velocity vector of each drone. It is sent to the LLC as a velocity command in the direction that avoids the collision, and it is stopped once the drones are at safe positions. In this project, since we are dealing with only one drone, collision avoidance was not implemented. However, it could be an area of interest for others continuing this project; a sketch of the idea is given after Fig.4.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
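Since collision avoidance was not implemented, the following MATLAB fragment is only a sketch of the idea described above: a velocity command perpendicular to each drone's own velocity, pointing away from the other drone. The safety distance and the repel speed are assumed values.<br />
<pre>
% Sketch: repel two drones with velocity commands perpendicular to their own
% velocity vectors once they come closer than an (assumed) safety distance.
function [cmd1, cmd2] = repelCommand(p1, v1, p2, v2)
    dSafe = 1.0;                          % assumed safety distance [m]
    vRep  = 0.8;                          % assumed repel speed command
    cmd1 = [0 0];  cmd2 = [0 0];
    if norm(p1 - p2) < dSafe
        n1 = [-v1(2) v1(1)] / max(norm(v1), eps);   % unit vector perpendicular to v1
        n2 = [-v2(2) v2(1)] / max(norm(v2), eps);   % unit vector perpendicular to v2
        if dot(n1, p1 - p2) < 0, n1 = -n1; end      % point away from the other drone
        if dot(n2, p2 - p1) < 0, n2 = -n2; end
        cmd1 = vRep * n1;                 % velocity command for the LLC of drone 1
        cmd2 = vRep * n2;                 % velocity command for the LLC of drone 2
    end
end
</pre>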
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
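A minimal sketch of this storage idea is given below. The property and method names are illustrative rather than the actual ones listed in Tables 1 and 2, and the players are represented here by a plain struct array although they form a class of their own in the real code.<br />
<pre>
% Sketch of the storage idea: globally readable data, changed only via setters.
% Property and method names are illustrative; players use a plain struct array
% here, although they are a separate class in the actual code.
classdef WorldModel < handle
    properties (SetAccess = private)
        ball        % struct with position and velocity of the ball
        drone
        turtle
        players     % one entry per player, 2*n in total
    end
    methods
        function W = WorldModel(n)
            blank     = struct('pos', [0 0], 'vel', [0 0]);
            W.ball    = blank;  W.drone = blank;  W.turtle = blank;
            W.players = repmat(blank, 2*n, 1);
        end
        function setBall(W, pos, vel)          % only skills should call the setters
            W.ball.pos = pos;  W.ball.vel = vel;
        end
        function setPlayer(W, i, pos, vel)
            W.players(i).pos = pos;  W.players(i).vel = vel;
        end
    end
end
</pre>
Because the properties have private set access, any process can read the stored data, but only the setter methods can change it, which is exactly the protection against accidental overwrites described above.<br />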
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of two hypotheses, which both represent a potential ball position. The first one uses a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one uses a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track of whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are all used by the same particle filter, as it does not matter from which source a measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In the current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
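A sketch of the two-hypothesis update described above is given below. The reset rule (two consecutive measurements more than 0.5 m from the estimate) follows the text; the fixed blending weight merely stands in for the actual particle-filter update with the parameters α_v, α_x and α_z and is purely illustrative.<br />
<pre>
% Sketch of the two-hypothesis update. The 0.5 m / two-outlier reset rule follows
% the description above; the fixed blending weight is only a stand-in for the
% actual particle-filter update with alpha_v, alpha_x and alpha_z.
function est = updateBallEstimate(est, z, dt)
% est: struct with fields pos, vel, lastZ, nOutliers (strong hypothesis)
% z  : new measurement [x y],  dt: time since the previous measurement
    predicted = est.pos + est.vel * dt;            % task 1: predict ahead
    if norm(z - predicted) > 0.5                   % measurement far off the estimate
        est.nOutliers = est.nOutliers + 1;
        if est.nOutliers >= 2                      % two in a row: accept new direction
            est.vel = (z - est.lastZ) / dt;        % velocity from the last two measurements
            est.pos = z;                           % weak hypothesis re-initialises the strong one
            est.nOutliers = 0;
        end
    else
        est.nOutliers = 0;
        w = 0.3;                                   % illustrative weight
        est.pos = predicted + w * (z - predicted);
        est.vel = est.vel   + w * ((z - est.lastZ) / dt - est.vel);
    end
    est.lastZ = z;
end
</pre>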
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
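A sketch of the ‘Match’ step is shown below, assuming the measured and last known positions are given as N-by-2 arrays; the interface of the actual nested function may differ.<br />
<pre>
% Sketch of the 'Match' step: assign each measured position to the nearest known
% player, falling back to the second-nearest neighbour when a player is already taken.
function idx = matchPlayers(meas, known)
% meas : m-by-2 measured positions,  known : p-by-2 last known player positions
    m   = size(meas, 1);
    idx = zeros(m, 1);
    for i = 1:m
        d = sqrt(sum((known - meas(i,:)).^2, 2));  % distance to every known player
        [~, order] = sort(d);
        k = order(1);                              % nearest neighbour
        if any(idx(1:i-1) == k)                    % already matched to another measurement
            k = order(2);                          % take the second-nearest neighbour
        end
        idx(i) = k;
    end
end
</pre>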
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. A command contains the control signals for the pitch angle, roll angle, yaw angle and vertical direction. The corresponding forward and side velocities in the body frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by the camera above the field. Based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
As the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter needs to be designed to predict the drone motion and reduce the measurement noise, so that the subsequent closed-loop control of the drone can be robust. Since the flying height of the drone is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is the forward-back tilt, a floating-point value in the range [-1 1]. Command (b) is the left-right tilt, a floating-point value in the range [-1 1]. Command (d) is the drone angular speed in the range [-1 1]. The forward and side velocities are displayed in the body frame (orange). The position (x, y, Psi) is displayed in the global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs. The corresponding outputs are measured by both the top camera and the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the drone position information is incomplete. The example in Fig.2 gives a visual impression of the original data measured by the top camera; it clearly shows what the motion of the drone looks like in one degree of freedom. To make the data continuous, interpolation is applied. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation gives a reasonable guess for the empty data points. <br />
==== Coordinate system introduction ====<br />
As the drone is a flying object with four degrees of freedom in the field, two coordinate systems are used: one in the body frame and one in the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body-frame coordinate system via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame. The positions measured by the top camera are expressed in the global coordinate system. <br><br><br />
The data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, the rotation matrix is applied outside the Kalman filter. The identified model is the response to the input commands (a, b, c and d) in the body frame. The filtered data is then transformed back to the global frame as feedback. The basic idea is to filter the data in the body frame, which avoids a parameter-varying Kalman filter. Figure 5 shows this concept as a block diagram; a small sketch of the frame transformation is given after the figure. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
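As an illustration of this concept, the fragment below rotates an example body-frame velocity to the global frame using the yaw angle measured by the top camera; the numerical values are assumptions.<br />
<pre>
% Sketch of the frame transformation around the Kalman filter. Example values only.
psiTopCam = 30;                         % yaw measured by the top camera [deg] (assumed)
vBody     = [0.5; 0.1];                 % forward and side velocity in the body frame [m/s]

psi = deg2rad(psiTopCam);
R   = [cos(psi) -sin(psi);              % body -> global rotation matrix
       sin(psi)  cos(psi)];

vGlobal = R * vBody;                    % filtered body-frame data expressed in the global frame
vBack   = R.' * vGlobal;                % global -> body (R is orthonormal, so inv(R) = R')
</pre>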
==== Model identification from input to position ====<br />
Theoretically, the inputs and the corresponding velocity outputs are decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)'''<br><br><br />
The response to input b is measured by the top camera. The preprocessed data is shown below; this processed data is used for the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figures above show the input and corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In the real world, nothing is perfectly linear, due to external disturbances and component uncertainties; hence, some assumptions are needed to help MATLAB make a reasonable estimate of the model. Based on the output response, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with the real response is evaluated in MATLAB. The result indicates to what extent the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
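A sketch of this identification step with the System Identification Toolbox is shown below; the log file, variable names and sample time are assumptions, and the actual identification may have used a different workflow.<br />
<pre>
% Sketch: estimate a second-order state-space model for input b with the
% System Identification Toolbox. File, variable names and sample time are assumed.
load('ident_b.mat', 'bInput', 'xMeasured');   % hypothetical log of input b and position
Ts   = 0.05;                                  % assumed sample time [s]
data = iddata(xMeasured(:), bInput(:), Ts);   % output: position, input: command b
sys  = ssest(data, 2);                        % second-order model, states ~ [velocity; position]
compare(data, sys);                           % fit percentage, cf. the validation figure above
bode(sys);                                    % frequency response of the identified model
</pre>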
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, according to the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, who measured and modeled the AR drone, there is a delay of 4 samples due to the wireless communication. Compared with results measured several times, the estimate is reasonable. <br><br><br />
<br />
In the real world, nothing is perfectly linear; the nonlinear behavior of the system may cause the remaining mismatch of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for the Kalman filter design, is estimated with a certain accuracy. However, the repeatability of the drone is a critical issue that has been investigated. The data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is described as a state-space model with state vector [ẏ, y], i.e. velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for refereeing. The properties of the drone as given on the manufacturer’s website are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (both for Android and iOS), and it sends high-quality HD streaming video to the phone. The drone has a front camera whose capabilities are given in Table 1. It has its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, the drone's own structure, control electronics and software are used for positioning the drone. Apart from that, low-level control of a drone is complicated and out of the scope of this project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is mounted on the front of the drone; however, for refereeing it should look downwards. Therefore it will be disassembled and connected to a swivel so that it can tilt down 90 degrees. This will require some changes to the structure, which will be documented here when finished.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB. However, taking snapshots from the drone camera directly in MATLAB is not possible with its built-in software. Therefore an indirect route is required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at higher resolutions, processing is done at this resolution to reduce the required processing time.<br />
<br />
====FOV Measurement ====<br />
One of the most important properties of a vision system is the field of view (FOV) angle; its definition is shown in the figure. The captured images have an aspect ratio of 16:9. Using this fact, measurements showed a FOV of roughly 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and the obtained results are summarized in Table 2. The corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
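As a side note, under a pinhole-camera assumption the distance per pixel can be related to the flight height h: the image covers a ground width of roughly w = 2·h·tan(FOV/2) in the direction of the measured FOV angle, so the distance per pixel at 640 pixels width is w/640. This relation is added here only as an interpretation aid; the values in Table 2 come from the actual measurements.<br />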
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all UDP-object properties not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
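A sketch of this initialization in MATLAB (Instrument Control Toolbox) is given below. The property values follow the list above; the single wake-up packet and the FTRIM string follow the SDK documentation referenced above, and the remaining navdata steps from the figure are omitted.<br />
<pre>
% Sketch of the initialisation with MATLAB udp objects (Instrument Control Toolbox).
ctrl = udp('192.168.1.1', 5556, 'LocalPort', 5556);           % AT command channel
nav  = udp('192.168.1.1', 5554, 'LocalPort', 5554, ...
           'Timeout', 0.001, ...                               % 1 ms
           'InputBufferSize', 500, ...
           'ByteOrder', 'littleEndian');
fopen(ctrl);
fopen(nav);

% Wake up the navdata stream (first step of the figure above; the remaining
% steps are omitted here) and set the horizontal reference with FTRIM.
fwrite(nav, uint8([1 0 0 0]));
fwrite(ctrl, sprintf('AT*FTRIM=%d\r', 1));                     % sequence number assumed to be 1
</pre>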
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) direction respectively. The third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. A sketch of the command side of such a wrapper is given after the list below. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
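The fragment below sketches only the command side of such a wrapper: packing the four input values into an AT*PCMD string using the IEEE-754 integer encoding from the SDK. The argument order and the sequence-number handling are assumptions based on the SDK documentation, not the project code, and parsing of the 500-byte navdata output is omitted.<br />
<pre>
% Sketch of the command side of such a wrapper: pack u = [tiltX tiltY vz yawRate],
% each in [-1 1], into an AT*PCMD string. Argument order and sequence handling are
% assumptions based on the SDK; navdata parsing is not shown.
function sendCommand(ctrl, seq, u)
% ctrl: udp object for the control port,  seq: AT command sequence number
    u   = max(min(u, 1), -1);                      % saturate to the valid range
    f2i = @(v) typecast(single(v), 'int32');       % IEEE-754 float bits as signed int
    cmd = sprintf('AT*PCMD=%d,1,%d,%d,%d,%d\r', seq, ...
                  f2i(u(2)), f2i(u(1)), f2i(u(3)), f2i(u(4)));
    fwrite(ctrl, cmd);                             % send over the UDP control channel
end
</pre>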
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone itself is far from perfect and not critical either. As long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball ==<br />
<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle was constructed and programmed to be a football-playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it is used as a referee. The software that has been developed at TechUnited did not need any extension, as part of the extensive code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki page.<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. Left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they collide. Since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi; it processes these strings and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application is developed to be able to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image-processing algorithms, useful information can be extracted from the game and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field, can be computed. These locations are expressed with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDB) called the WorldMap. The details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle which is used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. These algorithms and software were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as the operating system. The player robots from TechUnited communicate with each other via the UDP communication protocol; this is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data that is received from the Turtle, only the part that best suited the needs of the project was handpicked. This data, as stated earlier, is the location of the turtle, the ball and the players.<br> <br />
A small piece of code from the TechUnited code base was taken out. This piece consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB’s environment, and send it to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Bound (BOOP)<br />
* Detect Collision<br />
<br />
The skill that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image-processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we will not alter it and use it as is. Preferably we could use this software to also process the images from the drone. However, trying to understand years worth of code in order to make it useable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use Matlab's image processing toolbox to process the drone images. The images comming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first transpose the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Detect lines ===<br />
<br />
=== Detect balls ===<br />
<br />
=== Detect objects ===<br />
<br />
== Refereeing ==<br />
<br />
=== B.O.O.P. ===<br />
<br />
=== Collision detection ===<br />
<br />
<br />
== Locating skills ==<br />
<br />
=== Locate agents ===<br />
<br />
=== Locate objects ===<br />
<br />
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating optimal path for agents to send it as a desired position for their controllers. In the system Architecture, the coordinator block decides about the skill that needs to be performed by agents. For instance, this blocks sends detect ball for agent A (drone) and locate player for agent B as a task. Then, path-planning block requests from the World-Model latest information about the target object position and velocity as well as the position and velocity of agents. Using information, the Path-Planning block will then generate reference point for an agent controller. As it is shown in Fig.1, it is assumed that the world model is able to provide position and velocity of objects like ball whether it has been updated by agent camera or not. In the latter case, the particle filter gives an estimation of ball position and velocity based on dynamics of the ball. Therefore, it is assumed that estimated information about an object which is assigned by coordinator is available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
There are two factors that have been addressed in the Path-Planning block. The first one is related to the case of multiple drone in order to avoid collision between them. Second, generating an optimal path as reference input for drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As we discussed earlier, the coordinator assigned a task to an agent for locating an object in a field. Subsequently, the world model would provide the path planner with the latest update of position and velocity of that object. Path-Planning block simply could us the position of the ball and send it to agent controller as a reference input. This could be a good decision when the agent and the object are relatively closed to each other. However, it is possible to take into account the velocity vector of the object in an efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGeneratr.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As it is shown in Fig.2, in a case of far distance between drone and ball, the drone should track the position ahead of the object to meet it in the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curve trajectory (red line). However, if the estimated position of the ball in time ahead sent as a reference, the trajectory would be in a less curvy shape with less distance (blue line). This approach would result in better performance of tracking system, but more computational effort is needed. The problem that will be arises is the optimal time ahead t0 that should be set as a desired reference. To solve, we require a model of the drone motion with the controller to calculate the time it takes to reach a certain point given the initial condition of the drone. Then, in the searching Algorithm, for each time step ahead of the ball, the time to target (TT) for the drone will be calculated (see Fig.3). The target position is simply calculated based on the time ahead. The reference position is then the position that satisfies the equation t0=TT. Hence, the reference position would be [x(t+t0), y(t+t0)], instead of [x(t), y(t)]. It should be noted that, this approach wouldn’t be much effective in a case that the drone and object are close to each other. Furthermore, for the ground agents, that moves only in one direction, the same strategy could be applied. For the ground robot, the reference value should be determined only in moving direction of the turtle. Hence, only X component (turtle moving direction) of position and velocity of the interested object must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
<br />
=== Collision avoidance ===<br />
When Drones are flying above a field, the path planning should create a path for agents in a way that avoid collision between them. This can be done in collision avoidance block that has higher priority compared to optimal path planning that is calculated based on objective of drones (see Fig.4). Collision Avoidance-block is triggered when the drones state meet certain criteria that indicate imminent collision between them. Supervisory control then switch to the collision avoidance mode to repel the drones from getting closer. This is fulfilled by sending a relatively strong command to drones in a direction that maintain safe distance. Command as a velocity must be perpendicular to velocity vector of each drone. This is being sent to the LLC as a velocity command in the direction that results in collision avoidance and will be stopped after the drones are in safe positions. In this project, since we are dealing with only one drone, implementation of collision avoidance will not be conducted. However, it could be a possible area of interest to other to continue with this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurement are both used by the same particle filter, as it does not matter from what source the measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In its current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commend sent by host computer. The command contains the control signals in pitch angle, roll angle, yaw angle and vertical direction. The corresponding forward velocity and side velocity in bode frame can be measured by sensors inside the drone. At the same time, there are three LEDs on the drone which can be detected by camera on the top of the field. Based on the LEDs on the captured image, the position and orientation of drone on the field can calculated via image processing. <br><br><br />
As the camera on the top of the field cannot detect the drone LEDs every time, kalman filter needs to be designed to predict the drone motion and minimize the measurement noise. Therefore, the further close loop control system for drone can be robust. As the flying height of drone does not have much requirement for system, the height part of drone is not considered in kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is forward-back tilt -floating-point value in range [-1 1]. Command (b) is left- right tilt- floating -point value in range [-1 1]. d is drone angular speed in range [-1 1 ]. Forward and side velocity is displayed in body frame (orange). Position (x, y, Psi) is displayed in global frame (blue). ]]</center><br />
=== System identification and dynamic modeling ===<br />
The model needed to be identified is the drone block in figure 1. The drone block in figure 1 is regarded as a black box. To model the dynamic of this black box, predefined signals are given as inputs. The corresponding outputs are measured by both top cam and velocity inside the drone. The relation between inputs and outputs are analyzed and estimated in following chapters. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by camera is empty, the drone positon information reflected is incomplete. The example (fig.2) provide a visualized concept of original data measured from top camera. Based on fig 2, the motion data indicted clearly what motion of drone is like in one degree of freedom. To make it continuous, interpolation can be in implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation operation estimates reasonable guess for empty data points. <br />
====2.2 Coordinates system introduction ====<br />
As the drone is flying object with four degree of freedom in the field, there exist two coordinate systems. One is the coordinate system in body frame, the other one is the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in body frame coordinate system via control signals (a, b, c, d). The velocities measured are displayed also in the body frame coordinate system. The positions measured by the top camera are calculated in global coordinate system. <br><br><br />
The data can be transformed between body frame and global frame via the rotation matrix. To simplify the identification process, the rotation matrix will be build outside the kalman filter. The model identified is the response of the input commends (a, b, c and d) in body frame. Then the filtered data will be transferred back to global frame as feedback. The basic concept is filtering data in body frame to avoid make parameter varying kalman filter. Figure 5 describes the basic concept in block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
====2.3 Model identification from input to position ====<br />
The input and corresponding output in velocity in decoupled in body frame theoretically. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response of input b is measured by the top camera. The preprocessed data is shown in following. And this processed data will be used in model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The above displays the input and corresponding output. System Identification Toolbox in MATLAB is used to estimate mathematical model with data shown above. As in real world, nothing is linear due external disturbance and components uncertainty. Hence, some assumptions need to be made to help Matlab make a reasonable estimation of model. Base on the response from output, the system behaves similar to a 2nd system. The states name is defined asX= [x ̇ x], which means velocity and positions. <br />
And identified model is demonstrated in state- space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model compared with real response is evaluated in Matlab. The result represents the extent about how the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay from the inputs. However, according to the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman, they measured and built the model of AR drone with 4 samples delays due the wireless communication. Compared with the results measured several times, the estimation is reasonable. <br><br><br />
<br />
In real world, nothing is linear. The nonlinear behavior of system may cause the mismatch part of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used for further kalman filter design, is estimated with a certain accuracy. But the repeatability of the drone is a critical issue which has been investigated. The data selected for identification is measured in situation that battery is full, the orientation is fixed and no drone started from steady state. <br><br><br />
''System identification for a (drone front-back tilt)''<br><br />
The identified model in y direction is described as a state space model with the state name [(y ) ̇y] which means velocity and position.<br><br> <br />
The model then is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
== Drone ==<br />
In the autonomous referee project, commercially available AR Parrot Drone Elite Edition 2.0 is used for the refereeing issues. The built-in properties of the drone that given in the manufacturer’s website are listed below in Table 1. Note that only the useful properties are covered, the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and it streams high-quality HD video to the phone. The drone has a front camera whose capabilities are given in Table 1, and it contains its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project the drone's own structure, control electronics and software are used for positioning the drone. Moreover, designing a low-level drone controller from scratch is complicated and out of scope for this project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is mounted at the front of the drone, but for refereeing it should look downwards. Therefore it will be disassembled and mounted on a swivel so that it can be tilted down 90 degrees. This requires some structural modifications; once the modification is finished, it will be documented here.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access data on the drone, including the camera images. The image processing will be done in MATLAB. However, taking snapshots from the drone camera directly in MATLAB is not possible with its built-in software, so an indirect route is required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz (one frame per 2.5 s) at the standard 360p resolution (640x360). Although the camera can capture images at a higher resolution, this resolution is used to reduce the required processing time.<br />
<br />
====FOV Measurement ====<br />
One of the most important properties of a vision system is its field of view (FOV) angle. The definition of the FOV angle is shown in the figure. The captured images have a 16:9 aspect ratio. Using this fact, the measurements showed a diagonal FOV of roughly 70°, although the camera is specified with a 92° diagonal FOV. The measurements and the resulting values are summarized in Table 2; the corresponding distance per pixel is calculated for the standard resolution (640x360).<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
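To illustrate how such a distance-per-pixel value follows from the FOV under a simple pinhole assumption (a sketch; the flying height is an example number, the exact figures are in Table 2):<br />
<pre>
% Ground footprint and distance per pixel from the diagonal FOV.
h        = 2.0;                 % assumed flying height [m]
fov_diag = 70*pi/180;           % measured diagonal FOV [rad]
px       = [640 360];           % image resolution (16:9)

d_diag   = 2*h*tan(fov_diag/2);            % diagonal size of the footprint [m]
scale    = d_diag / hypot(px(1), px(2));   % metres per pixel
width    = scale*px(1);                    % footprint width  [m]
height   = scale*px(2);                    % footprint height [m]
</pre>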
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all UDP object properties not mentioned here, MATLAB's default values are used. A sketch of this initialization is given below.<br />
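A minimal sketch of this initialization with MATLAB's udp objects (Instrument Control Toolbox); the object names are assumptions and only the properties listed above are set explicitly:<br />
<pre>
% Control channel: commands are sent to the drone on port 5556.
ctrl = udp('192.168.1.1', 5556, 'LocalPort', 5556);

% Navdata channel: sensor data is received on port 5554.
nav = udp('192.168.1.1', 5554, 'LocalPort', 5554, ...
          'Timeout', 0.001, ...            % 1 ms
          'InputBufferSize', 500, ...      % bytes
          'ByteOrder', 'littleEndian');

fopen(ctrl);
fopen(nav);
</pre>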
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference for the horizontal plane has to be set for the drone's internal control system by sending the FTRIM command. <ref name=sdk /><br />
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be regarded as a block which expects a UDP packet containing a string as input and which returns an array of 500 bytes as output. To make communication with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. More precisely, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the forward (x) and left (y) direction respectively, the third value is the vertical (z) speed and the fourth is the angular speed (psi) around the z-axis; a sketch of this input conversion is given after the list below. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
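A minimal sketch of the command (input) side of such a wrapper, based on the AT command format from the Parrot SDK referenced above. The function name, the sequence-number handling and the omission of the navdata parsing are simplifications and assumptions, not the project's actual implementation:<br />
<pre>
% Sketch: send one motion command cmd = [x_tilt, y_tilt, z_speed, yaw_speed],
% each value in [-1, 1]. The AR.Drone expects the floats encoded as the signed
% 32-bit integers that share their bit pattern (see the SDK reference above).
function sendCmd(ctrl, seq, cmd)
    v = zeros(1, 4, 'int32');
    for i = 1:4
        v(i) = typecast(single(cmd(i)), 'int32');   % reinterpret the float bits
    end
    % NOTE: the exact ordering of the tilt arguments in AT*PCMD (roll before
    % pitch) should be checked against the SDK; it is not reproduced here.
    at = sprintf('AT*PCMD=%d,1,%d,%d,%d,%d\r', seq, v(1), v(2), v(3), v(4));
    fwrite(ctrl, at);            % ctrl is the UDP control object opened earlier
end
</pre>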
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. image processing) may be slower. This is not a problem, since the positioning of the drone itself is neither perfect nor critical; as long as the target of interest (ball, players) is within the drone's field of view, it is acceptable.<br />
<br />
== Ai-Ball ==<br />
<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle was constructed and programmed to be a football-playing robot. The details of the mechanical design and of the software developed for these robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it is used as a referee. The software developed at TechUnited did not need any further extension, as part of the existing code could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki page.<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. It contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) that drive three omni-wheels independently. Left of this robot, a copy fitted with a cover is shown. The cover must prevent the robots from being damaged when they collide; since one of the goals of the project is to detect collisions, the robots must be able to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script that runs on the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, processes them and sends the corresponding commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image-processing algorithms, useful information can be extracted from the game and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field, can be computed. These locations are expressed in the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. The details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, each participating robot maintains this database locally. Therefore, the Turtle used in the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. Those algorithms were created in MATLAB and Simulink, while the TechUnited software is written in C and runs on Ubuntu. The TechUnited player robots communicate with each other via UDP, which is handled by the (wireless) comm block shown in the figure below.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The base-station computer in this figure is a DevPC (development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was picked: as stated earlier, the locations of the Turtle, the ball and the players.<br><br />
A small piece of code was taken from the TechUnited code base. It consists of functions which extract the necessary information from the outputs of the image processing running on the Turtle; this information is read through the S-function [[sf_test_rMS_wMM.c]] in MATLAB’s environment and sent to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink; the code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=39882Implementation MSD162017-05-09T10:11:54Z<p>Asinha: /* Reference generator */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
== Path planning ==<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent, for instance locating an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The Path-Planning block could simply use the current position of the ball and send it to the agent controller as a reference input. This is a reasonable choice when the agent and the object are relatively close to each other. However, it is also possible to exploit the velocity vector of the object in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGener.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between the drone and the ball is large, the drone should track a position ahead of the object so that it meets the object at the intersection of the velocity vectors. Using the current ball position as the reference input would result in a curved trajectory (red line). However, if the estimated ball position some time ahead is sent as the reference, the trajectory becomes less curved and shorter (blue line). This approach gives better tracking performance, but requires more computational effort. The remaining problem is finding the optimal time ahead t0 to use for the reference. To solve this, a model of the drone motion including its controller is required, to calculate the time it takes to reach a certain point given the drone's initial conditions. In the search algorithm, the time to target (TT) of the drone is calculated for each candidate time ahead of the ball (see Fig.3); the target position itself is simply extrapolated from the time ahead. The reference position is then the position that satisfies t0 = TT, so the reference becomes [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. Note that this approach is not very effective when the drone and the object are close to each other. For the ground agents, which move in only one direction, the same strategy can be applied: the reference value is then determined only in the moving direction of the Turtle, so only the X-component (the Turtle's moving direction) of the position and velocity of the object of interest needs to be taken into account. A sketch of this search is given below Fig.3.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
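A sketch of this time-ahead search (constant-velocity extrapolation of the ball is assumed, and droneTimeToTarget is a hypothetical function that evaluates the drone model together with its controller):<br />
<pre>
% Find the time ahead t0 for which the drone's time-to-target equals t0.
% ballPos, ballVel   : current ball state from the world model [m], [m/s]
% dronePos, droneVel : current drone state [m], [m/s]
best = inf;  t0 = 0;
for tCand = 0:0.1:3.0                       % candidate times ahead [s]
    target = ballPos + ballVel*tCand;       % extrapolated ball position
    TT = droneTimeToTarget(dronePos, droneVel, target);  % hypothetical model call
    if abs(TT - tCand) < best
        best = abs(TT - tCand);
        t0 = tCand;
    end
end
ref = ballPos + ballVel*t0;                 % reference [x(t+t0), y(t+t0)]
</pre>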
<br />
=== Collision avoidance ===<br />
When multiple drones fly above the field, the path planning should generate paths that avoid collisions between them. This is done in the collision-avoidance block, which has a higher priority than the optimal path planning based on the drones' objectives (see Fig.4). The collision-avoidance block is triggered when the drone states meet certain criteria that indicate an imminent collision. The supervisory control then switches to collision-avoidance mode to keep the drones from getting closer. This is achieved by sending a relatively strong velocity command to each drone, perpendicular to its velocity vector, in the direction that maintains a safe distance. This command is sent to the LLC and is stopped once the drones are back in safe positions. In this project only one drone is used, so collision avoidance has not been implemented; it could be an area of interest for a follow-up project. A sketch of such a repulsion command is given below Fig.4.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
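A minimal sketch of such a perpendicular repulsion command for two drones (the safety distance, the gain and the variable names are assumptions):<br />
<pre>
% Generate repulsion velocity commands when two drones come too close.
dSafe = 1.0;   vRepel = 0.8;               % assumed safety distance [m] and gain
rel = pos2 - pos1;                         % vector from drone 1 to drone 2
if norm(rel) < dSafe
    % Unit vector perpendicular to drone 1's velocity, pointing away from drone 2
    perp1 = [-vel1(2); vel1(1)] / max(norm(vel1), eps);
    if dot(perp1, -rel) < 0, perp1 = -perp1; end
    cmd1 = vRepel * perp1;                 % velocity command for the LLC of drone 1

    perp2 = [-vel2(2); vel2(1)] / max(norm(vel2), eps);
    if dot(perp2, rel) < 0, perp2 = -perp2; end
    cmd2 = vRepel * perp2;                 % velocity command for the LLC of drone 2
end
</pre>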
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
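The actual set functions and access commands are listed in Tables 1 and 2 above. Purely as an illustration of the storage idea, a class along these lines could be used; the property and method names here are assumptions, not the project's actual interface:<br />
<pre>
classdef WorldModel < handle
    % Central storage of the last known state of all tracked objects.
    properties (SetAccess = private)
        ball    % struct with fields pos [x y] and vel [vx vy]
        drone
        turtle
        players % 1 x 2n array of player structs (n players per team)
    end
    methods
        function obj = WorldModel(n)
            empty = struct('pos', [NaN NaN], 'vel', [0 0]);
            obj.ball = empty;  obj.drone = empty;  obj.turtle = empty;
            obj.players = repmat(empty, 1, 2*n);
        end
        function setBall(obj, pos, vel)        % only skills call these setters
            obj.ball.pos = pos;  obj.ball.vel = vel;
        end
        function setPlayer(obj, idx, pos, vel)
            obj.players(idx).pos = pos;  obj.players(idx).vel = vel;
        end
    end
end
</pre>
With such a handle class, a single W = WorldModel(n) can be shared by all skills while only the set functions mutate its contents.<br />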
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly; its hypothesis is simply updated by each new measurement. In case two consecutive measurements are more than 0.5 meters away from the estimate at that time, the last one acts as the new initial value for the strong filter, as sketched below. <br><br><br />
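A sketch of this outlier/reset logic (the 0.5 m threshold comes from the text above; the variable names are assumptions):<br />
<pre>
% Reset the strong filter when two consecutive measurements disagree with it.
threshold = 0.5;                        % [m]
if norm(z_new - x_est) > threshold
    outliers = outliers + 1;
else
    outliers = 0;
end
if outliers >= 2
    % The measurements indicate a real change in direction: re-initialise
    x_est = z_new;                      % new initial position of the strong filter
    v_est = (z_new - z_old) / dt;       % velocity from the last two measurements
    outliers = 0;
end
</pre>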
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the Turtle. These measurements are all used by the same particle filter, as it does not matter which source a measurement comes from. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In the current implementation this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. To track them even when they are outside the current field of view, and to deal with multiple sensors, again a particle filter is used. This particle filter is similar to the one for the ball position, with the distinction that it must handle the case where the sensor(s) detect multiple players. Thus, the system needs to know which measurement corresponds to which player; this is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
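A sketch of this matching step (a greedy nearest-neighbour assignment, as described above; the variable names are assumptions):<br />
<pre>
% Match measured positions (rows of Z) to last known player positions (rows of P).
% Greedy nearest neighbour: fall back to the next-nearest free player on a conflict.
nZ = size(Z, 1);
assigned = false(size(P, 1), 1);
match = zeros(nZ, 1);
for i = 1:nZ
    d = sqrt(sum((P - Z(i,:)).^2, 2));   % distance to every known player
    d(assigned) = inf;                   % players already taken are excluded
    [~, j] = min(d);
    match(i) = j;                        % measurement i belongs to player j
    assigned(j) = true;
end
</pre>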
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. A command contains the control signals for the pitch angle, roll angle, yaw rate and vertical speed. The corresponding forward and sideways velocities in the body frame are measured by sensors inside the drone. In addition, three LEDs on the drone can be detected by the camera above the field; based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
As the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter is designed to predict the drone motion and to suppress measurement noise, so that the subsequent closed-loop control of the drone is robust. Since the flying height is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command a is the front-back tilt, a floating-point value in the range [-1, 1]. Command b is the left-right tilt, a floating-point value in the range [-1, 1]. Command d is the angular speed of the drone, in the range [-1, 1]. Forward and side velocities are expressed in the body frame (orange); the position (x, y, psi) is expressed in the global frame (blue).]]</center><br />
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in Figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are applied as inputs; the corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the samples measured by the camera are empty, the drone position information is incomplete. The example in Fig. 2 visualizes the original data measured by the top camera; it clearly shows what the drone motion looks like in one degree of freedom. To make the signal continuous, interpolation is applied. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data show that the interpolation provides a reasonable estimate for the empty data points. <br />
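A minimal sketch of this preprocessing step (linear interpolation over the missing top-camera samples, one coordinate at a time; variable names are assumptions):<br />
<pre>
% t   : time stamps of the top-camera frames
% pos : measured drone coordinate, NaN where the LEDs were not detected
ok = ~isnan(pos);
pos_filled = interp1(t(ok), pos(ok), t, 'linear');   % fill the gaps
</pre>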
====2.2 Coordinate systems introduction ====<br />
As the drone is a flying object with four degrees of freedom above the field, two coordinate systems are used: the body frame and the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body frame via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame, while the positions measured by the top camera are expressed in the global frame. <br><br><br />
<references/></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=Autonomous_Referee_System&diff=39712Autonomous Referee System2017-05-04T20:44:00Z<p>Asinha: </p>
<hr />
<div><div align="left"><br />
<font size="4">'An objective referee for robot football'</font><br />
</div><br />
<br />
<div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_large}}</center></div><br />
__NOTOC__<br />
<br />
<center>[[File:underConstruction.jpg|thumb|center|750px|We are still working on the contents of this website]]</center><br />
<br />
A football referee can hardly ever make "the correct decision", at least not in the eyes of the thousands or sometimes millions of fans watching the game. When a decision benefits one team, there will always be complaints from the other side. It is often forgotten that the referee is also merely human. To make the game fairer, the use of technology to support the referee is increasing. Nowadays, several stadiums are already equipped with [https://en.wikipedia.org/wiki/Goal-line_technology goal line technology] and referees can be assisted by a [http://quality.fifa.com/en/var/ Video Assistant Referee (VAR)]. If the use of technology keeps increasing, a human referee might one day become entirely obsolete. The proceedings of a match could be measured and evaluated by a system of sensors. With enough (correct) data, such a system would be able to recognize certain events and make decisions based on these events.<br />
<br />
<br />
The aim of this project is to do just that: to make a system which can evaluate a soccer match, detect events and make decisions accordingly. Making a fully functioning system which could actually replace the human referee would probably take several years, which we do not have. This project therefore focuses on creating a high-level system architecture and giving a proof of concept by refereeing a robot-soccer match, where the refereeing is currently also still done by a human. This project builds upon the [[Robotic_Drone_Referee|Robotic Drone Referee]] project executed by the first generation of Mechatronics System Design trainees. <br />
<br />
<br />
To navigate through this wiki, the internal navigation box on the right side of the page can be used. <br />
<br />
<br />
<center>[[File:tumbnail_test_video.png|center|750px|link=https://www.youtube.com/embed/XyRR3rPQ4R0?autoplay=1]]</center><br />
<br />
<br />
=Team=<br />
This project was carried out for the second module of the 2016 MSD PDEng program. The team consisted of the following members:<br />
* Akarsh Sinha<br />
* Farzad Mobini<br />
* Joep Wolken<br />
* Jordy Senden<br />
* Sa Wang<br />
* Tim Verdonschot<br />
* Tuncay Uğurlu Ölçer<br />
<br />
<br />
<br />
<center>[[File:Drone Ref.png|thumb|center|1000px|Illustration by Peter van Dooren, BSc student at Mechanical Engineering, TU Eindhoven, November 2016.]]</center><br />
<br />
=Acknowledgements=<br />
A project like this is never done alone. We would like to express our gratitude to the following parties for their support and input to this project.<br />
<br />
<center>[[File:logoAcknowledgements.png|center|1000px]]</center><br />
<br />
<br />
<br />
<br />
<br />
<br />
<!--<br />
<br />
==Ground Robot==<br />
<br />
[[File:Ground_Robot_specs.png|thumb|right|500px|Ground robot specs]]<br />
<br />
[[File:Ground_Robot_overview.png|thumb|right|400px|Ground robot w.r.t. field]]<br />
<br />
'''Requirements for Ground Robot'''<br />
<br />
<br><br />
<br />
*''Motion:''<br />
** The GR should be able to keep the ball in sight of its Kinect camera. If the ball is lost, GR should try to find it again with the Kinect.<br />
** Since the ball is best tracked with the Kinect, the omni-vision camera can be used to keep track of the players. <br />
<br />
<br><br />
<br />
*''Vision:''<br />
** Position self with respect to field lines<br />
** Detect ball<br />
** Estimate global ball position and velocity<br />
** Detect objects (players) in field<br />
** Estimate global position and velocity of objects<br />
** Determine which team the player belongs to<br />
<br />
<br><br />
<br />
*''Communication:''<br />
: Send to laptop:<br />
:* Ball position + velocity estimate<br />
:* Player position + velocity estimate<br />
:* Player team/label<br />
:* Own position + velocity<br />
:* Own side/home goal<br />
:* Own detection of B.O.O.P. or Collision (maybe)<br />
<br />
: Receive from laptop:<br />
:* Reference position <br />
:* Detection flag<br />
<br />
<br><br />
<br />
*''Extra:''<br />
** Get ball after B.O.O.P.<br />
** Communicate with second Ground Robot<br />
<br />
==Drone==<br />
*Parrot AR.Drone 2.0 Elite Edition<br />
*19 min. flight time (ext. battery)<br />
*720p Camera (but used as 360p)<br />
*~70° Diagonal FOV (measured)<br />
*Image ratio 16:9<br />
===Drone control===<br />
*Has own software & controller<br />
*Possible to drive by MATLAB using arrow keys<br />
*Driving via position command and format of the input data is a work to do<br />
*x, y, θ position feedback via top cam and/or UWBS<br />
*z position will be constant and decided according FOV<br />
<br />
==Positioning==<br />
<br />
The Positioning System block is responsible for creating the reference positions of the drone and the ground-robot referee based on the information about the players and the ball. The low-level controllers of both systems use the reference position as the desired state for tracking purposes. <br />
[[File:Positioning.png|thumb|right|400px|Depiction of the positioning subsystem.]]<br />
Currently : <br />
*Ground referee (Turtle) focuses on ball<br />
*Drone focuses on collision/players<br />
<br />
==Detection==<br />
The fault detection should<br />
*Receive images and estimations of state related parameter from the drone and the ground robot. <br />
*Based on the information, evaluate which of the two rules (BOOP and Collision) are violated.<br />
*Communicate with respective refs the final verdict<br />
** Collaboration with the ground ref<br />
*** Receive estimated<br />
**** Ball Position and velocity <br />
**** Player position and velocity<br />
**** Position of line/ ball boundary<br />
*** Transmit decision flag regarding BOOP <br />
** Collaboration with the drone ref<br />
*** Receive estimated<br />
**** Player position and velocity <br />
**** Ball Position and velocity <br />
*** Transmit decision flag regarding Collision <br />
<br />
<p><br />
===Definition of fault/foul===<br />
The definition of a foul/fault or offence is based on the RoboCup MSL Rule Book<ref> [http://wiki.robocup.org/Middle_Size_League#Rules "Middle Size Robot League Rules and Regulations"] </ref>. Simple physical contact does not represent an offence; the speed and impact of the physical contact are used to define an offence or foul. There are two cases for which foul detection should be formulated.<br />
*'''Case 1: One of the robots is in possession of the ball'''<br />
[[File:Contact Between Robots.png|thumb|right|450px|Indirect (left) and direct (right) contact between robots. ]]<br />
** A foul will be defined in this case if Robot B impedes the progress of the opponent by <br />
**#Colliding after charging at A with v unit velocity<br />
**#Applying (instantaneous) pushing with ≥ 𝑭 unit force <br />
**#Continuing to push for time ≥ t seconds <br />
**#Knocking the ball off A by sudden (Instantaneous) application of force (≥ 𝑭 unit force)<br />
*Possible ways of measuring these <br />
***Velocity<br />
**#Visual odometry (Image-based Object Velocity Estimation)<br />
***Application of (instantaneous) force<br />
**#Use visual odometry and calculate velocity/ acceleration and include time data. <br />
**#Estimate force accordingly<br />
**Continuous push (B is pushing A)<br />
**#Detect instantaneous application of F unit force<br />
**#Detect if B changes direction of movement within t seconds<br />
**Knocking off ball (only visual data)<br />
**#Detect collision<br />
**#Detect ball and Player A after collision <br />
<br />
*'''Case 2: None of the robots are in possession of the ball''' <br />
[[File:No Robot Has Ball Possession.png|thumb|right|300px|No robot has ball possession.]]<br />
**A foul will be defined in this case if Robot either A or B impedes the progress of the opponent by <br />
**#Colliding with larger momentum (say, pB ≥ pA units) <br />
**#Continues with the momentum the for time ≥ t seconds (dp/dt=0,for t seconds after impact)<br />
**Possible ways of measuring these <br />
***Momentum<br />
***#Use visual odometry to estimate velocity (and elapsed time)<br />
***#Estimate momentum accordingly<br />
***Continuous application of momentum<br />
***#Detect if defaulter changes direction of movement within t seconds<br />
</p><br />
<br />
==Image processing==<br />
===Capturing images===<br />
'''Objective''': Capturing images from the (front) camera of the drone.<br />
<br />
<br />
'''Method''':<br />
*MATLAB<br />
** ffmpeg<br />
** ipcam<br />
** gigecam<br />
** hebicam<br />
* C/C++/Java/Python<br />
** opencv<br />
…<br />
No method chosen yet, but ipcam, gigecam and hebicam are tested and do not work for the camera of the drone. FFmpeg is also tested and does work, but capturing one image takes 2.2s which is way too slow. Therefore, it might be better to use software written in C/C++ instead of MATLAB.<br />
<br />
===Processing images===<br />
'''Objective''': Estimating the player (and ball?) positions from the captured images.<br />
<br />
<br />
'''Method''': Detect ball position (if on the image) based on its (orange/yellow) color and detect the player positions based on its shape/color (?).<br />
<br />
== Top Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
<br />
<br />
The topcam can stream images with a framerate of 30 Hz to the laptop, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone itself is far from perfect and not critical as well. As long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
<br />
=References=<br />
<references/><br />
<br />
--></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=Autonomous_Referee_System&diff=39711Autonomous Referee System2017-05-04T20:43:45Z<p>Asinha: </p>
<hr />
<div><div align="left"><br />
<font size="4">'An objective referee for robot football'</font><br />
</div><br />
<br />
<div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_large}}</center></div><br />
__NOTOC__<br />
<br />
<center>[[File:underConstruction.jpg|thumb|center|750px|We are still working on the contents of this website]]</center><br />
<br />
A soccer referee can hardly ever make "the correct decision", at least not in the eyes of the thousands or sometimes millions of fans watching the game. When a decision benefits one team, there will always be complaints from the other side. It is often forgotten that the referee is also merely human. To make the game fairer, the use of technology to support the referee is increasing. Nowadays, several stadiums are already equipped with [https://en.wikipedia.org/wiki/Goal-line_technology goal line technology] and referees can be assisted by a [http://quality.fifa.com/en/var/ Video Assistant Referee (VAR)]. If the use of technology keeps increasing, a human referee might one day become entirely obsolete. The proceedings of a match could be measured and evaluated by a system of sensors. With enough (correct) data, such a system would be able to recognize certain events and make decisions based on these events.<br />
<br />
<br />
The aim of this project is to do just that: to make a system which can evaluate a soccer match, detect events and make decisions accordingly. Making a fully functioning system which could actually replace the human referee would probably take several years, which we do not have. This project therefore focuses on creating a high-level system architecture and giving a proof of concept by refereeing a robot-soccer match, where the refereeing is currently also still done by a human. This project builds upon the [[Robotic_Drone_Referee|Robotic Drone Referee]] project executed by the first generation of Mechatronics System Design trainees. <br />
<br />
<br />
To navigate through this wiki, the internal navigation box on the right side of the page can be used. <br />
<br />
<br />
<center>[[File:tumbnail_test_video.png|center|750px|link=https://www.youtube.com/embed/XyRR3rPQ4R0?autoplay=1]]</center><br />
<br />
<br />
=Team=<br />
This project was carried out for the second module of the 2016 MSD PDEng program. The team consisted of the following members:<br />
* Akarsh Sinha<br />
* Farzad Mobini<br />
* Joep Wolken<br />
* Jordy Senden<br />
* Sa Wang<br />
* Tim Verdonschot<br />
* Tuncay Uğurlu Ölçer<br />
<br />
<br />
<br />
<center>[[File:Drone Ref.png|thumb|center|1000px|Illustration by Peter van Dooren, BSc student at Mechanical Engineering, TU Eindhoven, November 2016.]]</center><br />
<br />
=Acknowledgements=<br />
A project like this is never done alone. We would like to express our gratitude to the following parties for their support and input to this project.<br />
<br />
<center>[[File:logoAcknowledgements.png|center|1000px]]</center><br />
<br />
<br />
<br />
<br />
<br />
<br />
<!--<br />
<br />
==Ground Robot==<br />
<br />
[[File:Ground_Robot_specs.png|thumb|right|500px|Ground robot specs]]<br />
<br />
[[File:Ground_Robot_overview.png|thumb|right|400px|Ground robot w.r.t. field]]<br />
<br />
'''Requirements for Ground Robot'''<br />
<br />
<br><br />
<br />
*''Motion:''<br />
** The GR should be able to keep the ball in sight of its Kinect camera. If the ball is lost, GR should try to find it again with the Kinect.<br />
** Since the ball is best tracked with the Kinect, the omni-vision camera can be used to keep track of the players. <br />
<br />
<br><br />
<br />
*''Vision:''<br />
** Position self with respect to field lines<br />
** Detect ball<br />
** Estimate global ball position and velocity<br />
** Detect objects (players) in field<br />
** Estimate global position and velocity of objects<br />
** Determine which team the player belongs to<br />
<br />
<br><br />
<br />
*''Communication:''<br />
: Send to laptop:<br />
:* Ball position + velocity estimate<br />
:* Player position + velocity estimate<br />
:* Player team/label<br />
:* Own position + velocity<br />
:* Own side/home goal<br />
:* Own detection of B.O.O.P. or Collision (maybe)<br />
<br />
: Receive from laptop:<br />
:* Reference position <br />
:* Detection flag<br />
<br />
<br><br />
<br />
*''Extra:''<br />
** Get ball after B.O.O.P.<br />
** Communicate with second Ground Robot<br />
<br />
==Drone==<br />
*Parrot AR.Drone 2.0 Elite Edition<br />
*19 min. flight time (ext. battery)<br />
*720p Camera (but used as 360p)<br />
*~70° Diagonal FOV (measured)<br />
*Image ratio 16:9<br />
===Drone control===<br />
*Has own software & controller<br />
*Can be driven from MATLAB using the arrow keys<br />
*Driving via position commands, and the format of the required input data, still has to be worked out<br />
*x, y, θ position feedback via top cam and/or UWBS<br />
*z position will be constant and chosen according to the FOV<br />
<br />
==Positioning==<br />
<br />
The Positioning System block is responsible for creating the reference positions of the drone and the ground robot referee based on the information about the players and the ball. The low-level controller of both systems incorporates the reference position as a desired state for tracking purposes. <br />
[[File:Positioning.png|thumb|right|400px|Depiction of the positioning subsystem.]]<br />
Currently : <br />
*Ground referee (Turtle) focuses on ball<br />
*Drone focuses on collision/players<br />
<br />
==Detection==<br />
The fault detection should<br />
*Receive images and estimates of state-related parameters from the drone and the ground robot. <br />
*Based on this information, evaluate which of the two rules (BOOP and Collision) is violated.<br />
*Communicate the final verdict to the respective referees<br />
** Collaboration with the ground ref<br />
*** Receive estimated<br />
**** Ball Position and velocity <br />
**** Player position and velocity<br />
**** Position of line/ ball boundary<br />
*** Transmit decision flag regarding BOOP <br />
** Collaboration with the drone ref<br />
*** Receive estimated<br />
**** Player position and velocity <br />
**** Ball Position and velocity <br />
*** Transmit decision flag regarding Collision <br />
<br />
<p><br />
===Definition of fault/foul===<br />
The definition of a foul/fault or offence is based on the RoboCup MSL Rule Book <ref> [http://wiki.robocup.org/Middle_Size_League#Rules "Middle Size Robot League Rules and Regulations"] </ref>. Simple physical contact does not constitute an offence; the speed and impact of the physical contact are used to define an offence or foul. There are two cases for which foul detection has to be formulated.<br />
*'''Case 1: One of the robots is in possession of the ball'''<br />
[[File:Contact Between Robots.png|thumb|right|450px|Indirect (left) and direct (right) contact between robots. ]]<br />
** A foul will be defined in this case if Robot B impedes the progress of the opponent by <br />
**#Colliding after charging at A with v unit velocity<br />
**#Applying (instantaneous) pushing with ≥ 𝑭 unit force <br />
**#Continuing to push for time ≥ t seconds <br />
**#Knocking the ball off A by sudden (Instantaneous) application of force (≥ 𝑭 unit force)<br />
*Possible ways of measuring these <br />
***Velocity<br />
**#Visual odometry (Image-based Object Velocity Estimation)<br />
***Application of (instantaneous) force<br />
**#Use visual odometry and calculate velocity/ acceleration and include time data. <br />
**#Estimate force accordingly<br />
**Continuous push (B is pushing A)<br />
**#Detect instantaneous application of F unit force<br />
**#Detect if B changes direction of movement within t seconds<br />
**Knocking off ball (only visual data)<br />
**#Detect collision<br />
**#Detect ball and Player A after collision <br />
<br />
*'''Case 2: None of the robots are in possession of the ball''' <br />
[[File:No Robot Has Ball Possession.png|thumb|right|300px|No robot has ball possession.]]<br />
**A foul is defined in this case if either robot A or B impedes the progress of the opponent by <br />
**#Colliding with larger momentum (say, pB ≥ pA units) <br />
**#Continuing with that momentum for a time ≥ t seconds (dp/dt = 0 for t seconds after impact)<br />
**Possible ways of measuring these <br />
***Momentum<br />
***#Use visual odometry to estimate velocity (and elapsed time)<br />
***#Estimate momentum accordingly<br />
***Continuous application of momentum<br />
***#Detect if defaulter changes direction of movement within t seconds<br />
</p><br />
<br />
==Image processing==<br />
===Capturing images===<br />
'''Objective''': Capturing images from the (front) camera of the drone.<br />
<br />
<br />
'''Method''':<br />
*MATLAB<br />
** ffmpeg<br />
** ipcam<br />
** gigecam<br />
** hebicam<br />
* C/C++/Java/Python<br />
** opencv<br />
…<br />
No method has been chosen yet. ipcam, gigecam and hebicam have been tested and do not work with the drone camera. FFmpeg has also been tested and does work, but capturing a single image takes 2.2 s, which is far too slow. It might therefore be better to use software written in C/C++ instead of MATLAB.<br />
<br />
===Processing images===<br />
'''Objective''': Estimating the player (and ball?) positions from the captured images.<br />
<br />
<br />
'''Method''': Detect ball position (if on the image) based on its (orange/yellow) color and detect the player positions based on its shape/color (?).<br />
<br />
== Top Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
<br />
<br />
The topcam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone is neither very accurate nor critical: as long as the target of interest (ball, players) stays within the drone's field of view, the performance is acceptable.<br />
<br />
<br />
=References=<br />
<references/><br />
<br />
--></div>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Bound (BOOP)<br />
* Detect Collision<br />
<br />
The skill that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image-processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we will not alter it and use it as is. Preferably we could use this software to also process the images from the drone. However, trying to understand years worth of code in order to make it useable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use Matlab's image processing toolbox to process the drone images. The images comming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first transpose the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Detect lines ===<br />
<br />
=== Detect balls ===<br />
<br />
=== Detect objects ===<br />
<br />
== Refereeing ==<br />
<br />
=== B.O.O.P. ===<br />
<br />
=== Collision detection ===<br />
<br />
<br />
== Locating skills ==<br />
<br />
=== Locate agents ===<br />
<br />
=== Locate objects ===<br />
<br />
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating optimal path for agents to send it as a desired position for their controllers. In the system Architecture, the coordinator block decides about the skill that needs to be performed by agents. For instance, this blocks sends detect ball for agent A (drone) and locate player for agent B as a task. Then, path-planning block requests from the World-Model latest information about the target object position and velocity as well as the position and velocity of agents. Using information, the Path-Planning block will then generate reference point for an agent controller. As it is shown in Fig.1, it is assumed that the world model is able to provide position and velocity of objects like ball whether it has been updated by agent camera or not. In the latter case, the particle filter gives an estimation of ball position and velocity based on dynamics of the ball. Therefore, it is assumed that estimated information about an object which is assigned by coordinator is available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
There are two factors that have been addressed in the Path-Planning block. The first one is related to the case of multiple drone in order to avoid collision between them. Second, generating an optimal path as reference input for drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent for locating an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The Path-Planning block could simply use the current position of the ball and send it to the agent controller as a reference input. This is a reasonable choice when the agent and the object are relatively close to each other. However, the velocity vector of the object can also be taken into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the drone is far from the ball, the drone should track a position ahead of the object so that it meets the object at the intersection of their velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated ball position some time ahead is sent as a reference, the trajectory becomes shorter and less curved (blue line). This approach results in better tracking performance, but requires more computational effort. The problem that arises is choosing the optimal look-ahead time t0 to use for the desired reference. To solve it, we need a model of the drone motion including its controller, so that the time it takes to reach a certain point can be calculated given the initial condition of the drone. Then, in the search algorithm, the time to target (TT) for the drone is calculated for each time step ahead of the ball (see Fig.3); the target position itself simply follows from the look-ahead time. The reference position is the position that satisfies the equation t0 = TT. Hence, the reference position becomes [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. The same strategy can be applied to the ground agent, which moves in only one direction: for the ground robot, the reference value is determined only in the driving direction of the TURTLE, so only the X component (the TURTLE's driving direction) of the position and velocity of the object of interest is taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
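The search of Fig.3 can be sketched as follows. This is only an illustrative MATLAB fragment, not the actual implementation: the helper timeToTarget (evaluating the drone model with its controller) and the search grid dtGrid are assumptions.<br />
<pre>
function ref = lookAheadReference(ballPos, ballVel, agentState, dtGrid)
% Find the look-ahead time t0 for which the drone's time-to-target (TT)
% matches t0, and return the predicted ball position at that time.
ref = ballPos;                             % fall back to the current position
for t0 = dtGrid                            % e.g. dtGrid = 0:0.1:3 [s]
    target = ballPos + ballVel * t0;       % ball position t0 seconds ahead
    TT = timeToTarget(agentState, target); % hypothetical drone-model helper
    if TT <= t0                            % first point the drone can reach in time
        ref = target;
        return
    end
end
end
</pre>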
<br />
<br />
=== Collision avoidance ===<br />
When multiple drones are flying above the field, the path planning should create paths for the agents that avoid collisions between them. This is done in a collision-avoidance block that has a higher priority than the optimal path planning, which is calculated based on the objectives of the drones (see Fig.4). The collision-avoidance block is triggered when the drone states meet certain criteria that indicate an imminent collision. The supervisory control then switches to the collision-avoidance mode to keep the drones from getting closer. This is done by sending a relatively strong command to the drones in a direction that maintains a safe distance: the command, given as a velocity, must be perpendicular to the velocity vector of each drone. It is sent to the LLC as a velocity command in the direction that avoids the collision, and is stopped once the drones are in safe positions again. In this project, since we are dealing with only one drone, collision avoidance is not implemented. However, it could be an area of interest for others continuing this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
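To make the idea concrete, a minimal sketch of such a trigger and repulsion command is given below. The threshold, gain and function name are illustrative assumptions, not part of the implemented system.<br />
<pre>
function [cmdA, cmdB, active] = avoidCollision(posA, velA, posB, velB)
% Repel two drones when they get too close: command a velocity that is
% perpendicular to each drone's own velocity and points away from the other.
% Positions and velocities are 2x1 column vectors in the field plane.
dSafe = 1.0;  gain = 1.0;                  % placeholder threshold [m] and gain
active = norm(posA - posB) < dSafe;
cmdA = [0; 0];  cmdB = [0; 0];
if active
    R    = [0 -1; 1 0];                    % 90-degree rotation
    away = (posA - posB) / max(norm(posA - posB), eps);
    pA = R * velA;  if dot(pA,  away) < 0, pA = -pA; end
    pB = R * velB;  if dot(pB, -away) < 0, pB = -pB; end
    cmdA = gain * pA;                      % velocity commands for the LLCs,
    cmdB = gain * pB;                      % kept until a safe distance is reached
end
end
</pre>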
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
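The class itself is not reproduced on this page; the fragment below is only an indicative sketch of such a storage class with dedicated set functions. The method names (setBall, setPlayer) are assumptions (the actual signatures are listed in Table 1), and the players are represented here as a struct array for brevity, whereas the real code uses a dedicated player class.<br />
<pre>
classdef WorldModel < handle
    % Central storage of the last known object states (sketch).
    properties (SetAccess = private)
        ball     % struct with fields pos [x y], vel [vx vy], t
        drone
        turtle
        players  % 1 x 2n array, n players per team
    end
    methods
        function W = WorldModel(n)
            s = struct('pos', [NaN NaN], 'vel', [0 0], 't', 0);
            W.ball = s;  W.drone = s;  W.turtle = s;
            W.players = repmat(s, 1, 2*n);
        end
        function setBall(W, pos, vel, t)                        % only dedicated 'set'
            W.ball = struct('pos', pos, 'vel', vel, 't', t);    % functions change the data
        end
        function setPlayer(W, i, pos, vel, t)
            W.players(i) = struct('pos', pos, 'vel', vel, 't', t);
        end
    end
end
</pre>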
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of two hypotheses, which both represent a potential ball position. The first one uses a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one uses a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track of whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are used by the same particle filter, as it does not matter from what source a measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In its current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
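A sketch of the re-initialisation rule described above, assuming the measurements arrive one at a time; particleUpdate stands for the regular (‘strong’) particle-filter update and is a placeholder name.<br />
<pre>
% xEst, vEst : current ('strong') estimate of ball position and velocity
% zNew, zPrev: newest and previous measurements, dt: time between them
outlierDist = 0.5;                                   % [m]
isOutlier = norm(zNew - xEst) > outlierDist;
if isOutlier && wasOutlier                           % two consecutive outliers:
    xEst = zNew;                                     % weak hypothesis becomes the new
    vEst = (zNew - zPrev) / dt;                      % initial state (change in direction)
else
    [xEst, vEst] = particleUpdate(xEst, vEst, zNew, dt);   % regular update
end
wasOutlier = isOutlier;
</pre>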
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
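An indicative sketch of this nearest-neighbour matching, including the fallback to the second-nearest neighbour described above; the real function may differ in detail.<br />
<pre>
function idx = matchMeasurements(zPos, playerPos)
% zPos:      m x 2 measured positions, playerPos: p x 2 last known player positions
% idx(k):    index of the player assigned to measurement k
m   = size(zPos, 1);
idx = zeros(m, 1);
for k = 1:m
    d = vecnorm(playerPos - zPos(k, :), 2, 2);   % distance to every known player
    [~, order] = sort(d);
    idx(k) = order(1);                           % nearest neighbour
    if any(idx(1:k-1) == idx(k))                 % already claimed by an earlier
        idx(k) = order(2);                       % measurement: take the second nearest
    end
end
end
</pre>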
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. A command contains the control signals for the pitch angle, roll angle, yaw rate and vertical speed. The corresponding forward and sideways velocities in the body frame are measured by sensors inside the drone. In addition, there are three LEDs on the drone which can be detected by the camera above the field; from the LEDs in the captured image, the position and orientation of the drone on the field are calculated via image processing. <br><br><br />
As the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter is designed to predict the drone motion and reduce the measurement noise, so that the closed-loop control of the drone remains robust. Since the flying height is not critical for the system, it is not included in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is the forward-back tilt, a floating-point value in the range [-1 1]. Command (b) is the left-right tilt, a floating-point value in the range [-1 1]. d is the drone angular speed in the range [-1 1]. Forward and side velocity are displayed in the body frame (orange). Position (x, y, Psi) is displayed in the global frame (blue). ]]</center><br />
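The filter itself is a standard discrete-time Kalman filter in the body frame. The fragment below is only a generic sketch of one filter step, illustrating how a missed top-camera detection can be handled by skipping the measurement update; the matrices A, B, C, Q and R would follow from the identified models below and are not specified here.<br />
<pre>
% One Kalman filter step.  x: state estimate, P: covariance,
% u: drone command, z: top-camera measurement ([] when the drone was not detected).
xPred = A*x + B*u;                       % prediction with the identified model
PPred = A*P*A' + Q;
if isempty(z)                            % camera missed the drone: prediction only
    x = xPred;   P = PPred;
else
    K = PPred*C' / (C*PPred*C' + R);     % Kalman gain
    x = xPred + K*(z - C*xPred);         % measurement update
    P = (eye(size(P,1)) - K*C) * PPred;
end
</pre>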
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in figure 1, which is regarded as a black box. To model the dynamics of this black box, predefined signals are applied as inputs. The corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the samples measured by the camera are empty, the drone position information is incomplete. The example (fig. 2) visualizes the original data measured by the top camera; it clearly shows what the drone motion looks like in one degree of freedom. To make the signal continuous, interpolation is applied. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data point from top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation produces reasonable estimates for the empty data points. <br />
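A sketch of this preprocessing step, assuming the missed detections are stored as NaN:<br />
<pre>
% t:    time stamps of the top-camera samples
% xRaw: measured drone position, NaN where the LEDs were not detected (~25%)
valid   = ~isnan(xRaw);
xFilled = interp1(t(valid), xRaw(valid), t, 'linear');   % fill the gaps linearly
</pre>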
==== Coordinate system introduction ====<br />
As the drone is a flying object with four degrees of freedom, two coordinate systems are used: the body frame and the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents global frame, whereas the blue line represents body frame. ]]</center><br />
The drone is actuated in the body frame via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame, whereas the positions measured by the top camera are expressed in the global frame. <br><br><br />
The data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, the rotation is applied outside the Kalman filter: the identified model describes the response to the input commands (a, b, c and d) in the body frame, and the filtered data is transformed back to the global frame to be used as feedback. The basic idea is to filter in the body frame so that the Kalman filter does not become parameter-varying. Figure 5 shows this concept as a block diagram. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
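The planar rotation involved is the standard one; as an illustration (with psi the yaw angle measured by the top camera):<br />
<pre>
R = [cos(psi) -sin(psi);        % body -> global rotation
     sin(psi)  cos(psi)];
vGlobal = R  * vBody;           % velocities measured in the body frame
vBody   = R' * vGlobal;         % and back again, e.g. for the filtered feedback
</pre>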
==== Model identification from input to position ====<br />
Theoretically, the inputs and the corresponding velocity outputs are decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below and is used for the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figures above display the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In reality nothing is perfectly linear, due to external disturbances and component uncertainty, so some assumptions are needed to let MATLAB make a reasonable model estimate. Based on the output response, the system behaves like a second-order system. The state vector is defined as X = [ẋ, x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model is evaluated in MATLAB by comparing it with the measured response. The result indicates how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
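Such a fit could be set up as in the fragment below. The variable names and sample time are assumptions; only the model order (2) follows from the discussion above.<br />
<pre>
Ts   = 1/30;                      % top-camera sample time (assumed)
data = iddata(xMeas, bCmd, Ts);   % output: measured position, input: command b
sys  = ssest(data, 2);            % estimate a 2nd-order state-space model
compare(data, sys);               % fit percentage, cf. the validation figure
bode(sys);                        % frequency response of the identified model
</pre>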
'''Data analysis''' <br><br />
The model is built under the assumption that there is no delay on the inputs. However, the Simulink model built by David Escobar Sanabria and Pieter J. Mosterman includes a delay of four samples for the AR.Drone, caused by the wireless communication. Compared with measurements repeated several times, the estimate is nevertheless reasonable. <br><br><br />
<br />
In the real world nothing is perfectly linear; the nonlinear behavior of the system may explain part of the mismatch of the identified model.<br><br><br />
'''Summary'''<br><br />
The model for input b, which will be used in the Kalman filter design, is estimated with reasonable accuracy. The repeatability of the drone remains a critical issue that has been investigated: the data selected for identification was measured with a full battery, a fixed orientation, and the drone starting from steady state. <br><br><br />
'''System identification for a (drone front-back tilt)'''<br><br />
The identified model in the y direction is again a state-space model, with state vector [ẏ, y], i.e. velocity and position.<br><br> <br />
The model is:<br><br />
[EQUATION 2] <br />
<center>[[File:Kf_result7.png|thumb|center|750px|Bode plot of identified model]]</center><br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used as the aerial referee. The built-in properties of the drone, as given on the manufacturer's website, are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free apps (both for Android and iOS) and streams HD video to the phone. The drone has a front camera whose capabilities are given in Table 1, and its own built-in computer, controller and driver electronics. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project the drone's own structure, control electronics and software are used for positioning the drone; moreover, designing a low-level drone controller is complicated and out of scope for this project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera points forward, but for refereeing it should look downwards. Therefore it will be disassembled and mounted on a swivel so it can tilt down 90 degrees. This requires some structural changes; when this modification is finished, it will be documented here.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB; however, taking snapshots from the drone camera directly in MATLAB is not possible with the drone's built-in software. An indirect route is therefore required, which costs processing time: the best frame rate obtained with the current capturing algorithm is about 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at a higher resolution, this resolution is used to limit the required processing time.<br />
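One possible indirect route (not necessarily the one used in this project) is to let ffmpeg grab a single frame from the drone video stream and read it back into MATLAB; the stream address below is the SDK default and is an assumption.<br />
<pre>
% Grab one frame from the drone video stream via an external ffmpeg call.
status = system(['ffmpeg -loglevel quiet -i tcp://192.168.1.1:5555 ' ...
                 '-frames:v 1 -y frame.png']);
if status == 0
    img = imread('frame.png');    % 640x360 RGB frame, ready for processing
end
</pre>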
<br />
====FOV Measurement ====<br />
One of the most important properties of a vision system is the field of view (FOV) angle; its definition can be seen in the figure. The captured images have a 16:9 aspect ratio. Measurements showed a diagonal FOV of roughly 70°, although the camera is specified to have a 92° diagonal FOV. The measurements and the resulting values are summarized in Table 2, where the corresponding distance per pixel is calculated for the standard resolution (640x360).<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
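As a sanity check on these numbers, the ground footprint of the camera and the distance per pixel follow from basic trigonometry; the flying height below is only an example value.<br />
<pre>
h      = 1.5;                               % flying height [m] (example value)
fovD   = deg2rad(70);                       % measured diagonal FOV
dUnits = hypot(16, 9);                      % 16:9 aspect ratio
fovH   = 2*atan(16/dUnits * tan(fovD/2));   % horizontal FOV
fovV   = 2*atan( 9/dUnits * tan(fovD/2));   % vertical FOV
footX  = 2*h*tan(fovH/2);                   % ground footprint [m]
footY  = 2*h*tan(fovV/2);
mPerPx = [footX/640, footY/360];            % distance per pixel at 640x360
</pre>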
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be regarded as a block which expects a UDP packet containing a string as input and gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively, the third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis (a minimal sketch of the sending side is given after the list below). The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
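The sketch below illustrates what the sending side of such a wrapper might look like, using the AT*PCMD progressive command from the AR.Drone SDK<ref name=sdk />, in which each floating-point argument is transmitted as the signed 32-bit integer sharing the same bit pattern. The mapping and signs from the wrapper's (x, y, z, psi) convention to the SDK's (roll, pitch, gaz, yaw) order are not spelled out here and would have to be checked against the SDK.<br />
<pre>
function sendPilotCommand(u, seq, cmd)
% Send one progressive piloting command to the drone (illustrative sketch).
% u:   MATLAB udp object connected to 192.168.1.1, control port 5556
% seq: running sequence number
% cmd: four values in [-1, 1] in the SDK order (roll, pitch, gaz, yaw);
%      the mapping from the wrapper's (x, y, z, psi) and the sign
%      conventions must be verified against the SDK documentation.
args = typecast(single(cmd(:)'), 'int32');          % floats as 32-bit integers
msg  = sprintf('AT*PCMD=%d,1,%d,%d,%d,%d\r', seq, args);
fwrite(u, msg);                                     % send the raw ASCII command
end
</pre>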
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone is neither very accurate nor critical: as long as the target of interest (ball, players) stays within the drone's field of view, the performance is acceptable.<br />
<br />
== Ai-Ball ==<br />
<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle was constructed and programmed to be a football-playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it is used as a referee. The software developed at TechUnited did not need any further extension, since part of the extensive code base could be reused to fulfill the referee role. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki page.<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right, the robot is shown as it was delivered at the start of the project. It contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to drive three omni-wheels independently. To its left, a copy with a protective cover is shown. This cover must prevent the robots from being damaged when they collide; since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script that runs on the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, processes them and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed so that the robot can also be controlled with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the on-going game. With image processing algorithms, useful information can be extracted from the game and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. This location is expressed in the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. The details on this can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. Those algorithms and software were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as the operating system. The player robots from TechUnited communicate with each other via the UDP communication protocol, which is handled by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was used. This data, as stated earlier, is the information on the location of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken from the TechUnited code base. It consists of functions which extract the necessary information from the outputs of the image processing running on the Turtle; this information is read through the S-function [[sf_test_rMS_wMM.c]] created in the MATLAB environment and sent to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted below. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>
<hr />
<div></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=File:Kf_result5.png&diff=39707File:Kf result5.png2017-05-04T20:35:26Z<p>Asinha: </p>
<hr />
<div></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=File:Kf_result4.png&diff=39706File:Kf result4.png2017-05-04T20:35:13Z<p>Asinha: </p>
<hr />
<div></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=File:Kf_result3.png&diff=39705File:Kf result3.png2017-05-04T20:34:57Z<p>Asinha: </p>
<hr />
<div></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=File:Kf_rot_mat.png&diff=39704File:Kf rot mat.png2017-05-04T20:34:42Z<p>Asinha: </p>
<hr />
<div></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=File:Kf_coordinate.png&diff=39703File:Kf coordinate.png2017-05-04T20:34:24Z<p>Asinha: </p>
<hr />
<div></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=File:Kf_result2.png&diff=39702File:Kf result2.png2017-05-04T20:34:12Z<p>Asinha: </p>
<hr />
<div></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=File:Kf_result1.png&diff=39701File:Kf result1.png2017-05-04T20:34:00Z<p>Asinha: </p>
<hr />
<div></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=File:KF_Overview.png&diff=39700File:KF Overview.png2017-05-04T20:33:46Z<p>Asinha: </p>
<hr />
<div></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=39699Implementation MSD162017-05-04T20:33:20Z<p>Asinha: /* Kalman filter */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Bound (BOOP)<br />
* Detect Collision<br />
<br />
The skill that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image-processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we will not alter it and use it as is. Preferably we could use this software to also process the images from the drone. However, trying to understand years worth of code in order to make it useable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use Matlab's image processing toolbox to process the drone images. The images comming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first transpose the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Detect lines ===<br />
<br />
=== Detect balls ===<br />
<br />
=== Detect objects ===<br />
<br />
== Refereeing ==<br />
<br />
=== B.O.O.P. ===<br />
<br />
=== Collision detection ===<br />
<br />
<br />
== Locating skills ==<br />
<br />
=== Locate agents ===<br />
<br />
=== Locate objects ===<br />
<br />
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating optimal path for agents to send it as a desired position for their controllers. In the system Architecture, the coordinator block decides about the skill that needs to be performed by agents. For instance, this blocks sends detect ball for agent A (drone) and locate player for agent B as a task. Then, path-planning block requests from the World-Model latest information about the target object position and velocity as well as the position and velocity of agents. Using information, the Path-Planning block will then generate reference point for an agent controller. As it is shown in Fig.1, it is assumed that the world model is able to provide position and velocity of objects like ball whether it has been updated by agent camera or not. In the latter case, the particle filter gives an estimation of ball position and velocity based on dynamics of the ball. Therefore, it is assumed that estimated information about an object which is assigned by coordinator is available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
There are two factors that have been addressed in the Path-Planning block. The first one is related to the case of multiple drone in order to avoid collision between them. Second, generating an optimal path as reference input for drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As we discussed earlier, the coordinator assigned a task to an agent for locating an object in a field. Subsequently, the world model would provide the path planner with the latest update of position and velocity of that object. Path-Planning block simply could us the position of the ball and send it to agent controller as a reference input. This could be a good decision when the agent and the object are relatively closed to each other. However, it is possible to take into account the velocity vector of the object in an efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As it is shown in Fig.2, in a case of far distance between drone and ball, the drone should track the position ahead of the object to meet it in the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curve trajectory (red line). However, if the estimated position of the ball in time ahead sent as a reference, the trajectory would be in a less curvy shape with less distance (blue line). This approach would result in better performance of tracking system, but more computational effort is needed. The problem that will be arises is the optimal time ahead t0 that should be set as a desired reference. To solve, we require a model of the drone motion with the controller to calculate the time it takes to reach a certain point given the initial condition of the drone. Then, in the searching Algorithm, for each time step ahead of the ball, the time to target (TT) for the drone will be calculated (see Fig.3). The target position is simply calculated based on the time ahead. The reference position is then the position that satisfies the equation t0=TT. Hence, the reference position would be [x(t+t0), y(t+t0)], instead of [x(t), y(t)]. It should be noted that, this approach wouldn’t be much effective in a case that the drone and object are close to each other. Furthermore, for the ground agents, that moves only in one direction, the same strategy could be applied. For the ground robot, the reference value should be determined only in moving direction of the turtle. Hence, only X component (turtle moving direction) of position and velocity of the interested object must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
<br />
<br />
=== Collision avoidance ===<br />
When Drones are flying above a field, the path planning should create a path for agents in a way that avoid collision between them. This can be done in collision avoidance block that has higher priority compared to optimal path planning that is calculated based on objective of drones (see Fig.4). Collision Avoidance-block is triggered when the drones state meet certain criteria that indicate imminent collision between them. Supervisory control then switch to the collision avoidance mode to repel the drones from getting closer. This is fulfilled by sending a relatively strong command to drones in a direction that maintain safe distance. Command as a velocity must be perpendicular to velocity vector of each drone. This is being sent to the LLC as a velocity command in the direction that results in collision avoidance and will be stopped after the drones are in safe positions. In this project, since we are dealing with only one drone, implementation of collision avoidance will not be conducted. However, it could be a possible area of interest to other to continue with this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurement are both used by the same particle filter, as it does not matter from what source the measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In its current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, a particle filter is again used. This particle filter is similar to the one for the ball position, with the distinction that it needs to deal with the case that the sensor(s) detect multiple players. The system therefore needs to know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest-neighbour search for each incoming position measurement, to match it to the last known positions of the players in the field. However, the implemented algorithm is not optimal when this set of nearest neighbours does not correspond to a set of unique players (i.e. when two measurements are matched to the same player). In that case, the algorithm assigns the second nearest neighbour to the second measured player. With a high update frequency and only two players this is generally not a problem. However, with a larger number of players, which could regularly enter and leave the field of view of a particular sensor, it might decrease the performance of the refereeing system.<br><br />
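A rough sketch of such a greedy nearest-neighbour matching is shown below; function and variable names are illustrative and the actual ‘Match’ implementation may differ.<br />
<pre>
function idx = match_players(meas, players)
% meas    : m-by-2 measured positions, players : n-by-2 last known player positions
% idx(i)  : index of the player assigned to measurement i
idx = zeros(size(meas, 1), 1);
for i = 1:size(meas, 1)
    d = sqrt(sum(bsxfun(@minus, players, meas(i, :)).^2, 2));  % distance to every player
    d(idx(1:i-1)) = Inf;   % players already claimed are skipped, so a clash falls
                           % back to the next nearest neighbour
    [~, idx(i)] = min(d);
end
end
</pre>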
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
The drone is actuated via UDP commands sent by the host computer. A command contains the control signals for the pitch angle, roll angle, yaw angle and vertical direction. The corresponding forward and side velocities in the body frame are measured by sensors inside the drone. In addition, three LEDs on the drone can be detected by the camera above the field; based on the LEDs in the captured image, the position and orientation of the drone on the field can be calculated via image processing. <br><br><br />
As the camera above the field cannot detect the drone LEDs in every frame, a Kalman filter is designed to predict the drone motion and to reduce the measurement noise, so that the subsequent closed-loop control of the drone is robust. Since the flying height is not critical for the system, the height is not considered in the Kalman filter design. <br />
<center>[[File:KF_Overview.png|thumb|center|750px|Command (a) is the forward-backward tilt, a floating-point value in the range [-1 1]. Command (b) is the left-right tilt, a floating-point value in the range [-1 1]. Command (d) is the drone angular speed in the range [-1 1]. Forward and side velocity are displayed in the body frame (orange). Position (x, y, Psi) is displayed in the global frame (blue). ]]</center><br />
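For reference, one iteration of a standard Kalman filter that copes with missed LED detections is sketched below; the matrices A, B, C, Q and R would follow from the identified drone model and are not reproduced here.<br />
<pre>
% One Kalman filter iteration (generic sketch; matrices come from the identified model).
% Predict with the commanded input u
x_pred = A * x_est + B * u;
P_pred = A * P * A' + Q;
if led_detected                                    % top camera found the drone LEDs this frame
    K     = P_pred * C' / (C * P_pred * C' + R);   % Kalman gain
    x_est = x_pred + K * (z - C * x_pred);         % correct with the camera measurement z
    P     = (eye(size(P)) - K * C) * P_pred;
else
    x_est = x_pred;                                % no measurement: keep the prediction
    P     = P_pred;
end
</pre>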
=== System identification and dynamic modeling ===<br />
The model to be identified is the drone block in the figure above, which is regarded as a black box. To model the dynamics of this black box, predefined signals are given as inputs, and the corresponding outputs are measured both by the top camera and by the velocity sensors inside the drone. The relation between inputs and outputs is analyzed and estimated in the following sections. <br />
====Data preprocessing ====<br />
As around 25% of the data measured by the camera is empty, the drone position information is incomplete. The example below visualizes the original data measured by the top camera; it clearly shows the drone motion in one degree of freedom. To make the data continuous, interpolation is implemented. <br />
<center>[[File:Kf_result1.png|thumb|center|750px|Original data points from the top camera. ]]</center><br />
<center>[[File:Kf_result2.png|thumb|center|750px|Processed data ]]</center><br />
The processed data shows that the interpolation provides a reasonable estimate for the empty data points. <br />
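A minimal sketch of this interpolation step is given below; variable names are illustrative and the project code may differ.<br />
<pre>
% Fill the empty top-camera samples by linear interpolation.
t      = sample_times;                 % time stamps of all samples
x_cam  = topcam_x;                     % measured x-position, NaN where the drone was not found
valid  = ~isnan(x_cam);
x_full = interp1(t(valid), x_cam(valid), t, 'linear', 'extrap');
</pre>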
==== Coordinate systems ====<br />
As the drone is a flying object with four degrees of freedom, two coordinate systems are used: the body frame and the global frame. <br />
<center>[[File:kf_coordinate.png|thumb|center|750px|Coordinate system description. The black line represents the global frame, whereas the blue line represents the body frame. ]]</center><br />
The drone is actuated in the body frame via the control signals (a, b, c, d). The measured velocities are also expressed in the body frame. The positions measured by the top camera are expressed in the global frame. <br><br><br />
The data can be transformed between the body frame and the global frame via a rotation matrix. To simplify the identification process, this rotation is applied outside the Kalman filter: the identified model describes the response to the input commands (a, b, c and d) in the body frame, and the filtered data is then transformed back to the global frame as feedback. Filtering in the body frame avoids a parameter-varying Kalman filter. The figure below describes this concept as a block diagram, and a sketch of the rotation follows after it. <br />
<center>[[File:kf_rot_mat.png|thumb|center|750px|Concept using rotation matrix ]]</center><br />
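A minimal sketch of this planar rotation is given below; the sign convention of the yaw angle psi is an assumption, and the height is ignored as stated above.<br />
<pre>
function v_global = body_to_global(v_body, psi)
% Map body-frame data (forward, side) to the global frame via a planar rotation.
R = [cos(psi), -sin(psi);
     sin(psi),  cos(psi)];
v_global = R * v_body(:);     % v_body = [v_forward; v_side]
end
</pre>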
==== Model identification from input to position ====<br />
The inputs and the corresponding velocity outputs are theoretically decoupled in the body frame. Therefore, the dynamic model can be identified for each degree of freedom separately. <br><br><br />
'''System identification for b (drone left-right tilt)<br><br>'''<br />
The response to input b is measured by the top camera. The preprocessed data is shown below and is used for the model identification. <br />
<center>[[File:Kf_result3.png|thumb|center|750px|Data preprocess]]</center><br />
<center>[[File:Kf_result4.png|thumb|center|750px|Input output]]</center><br />
The figures above display the input and the corresponding output. The System Identification Toolbox in MATLAB is used to estimate a mathematical model from this data. In reality nothing is perfectly linear, due to external disturbances and component uncertainty, so some assumptions are needed to let MATLAB make a reasonable estimate of the model. Based on the output response, the system behaves similarly to a second-order system. The state vector is defined as X = [ẋ x], i.e. velocity and position. <br />
The identified model is given in state-space form: <br />
[EQUATION 1]<br />
The frequency response based on this state space model is shown below: <br />
<center>[[File:Kf_result5.png|thumb|center|750px|Bode plot of identified model]]</center><br />
The accuracy of the identified model is evaluated in MATLAB by comparing it with the measured response. The result expresses how well the model fits the real response. <br />
<center>[[File:Kf_result6.png|thumb|center|750px|Results validation of input b]]</center><br />
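As a rough illustration, such a second-order model could be estimated from the logged data with the System Identification Toolbox as sketched below; the sample time and variable names are assumptions.<br />
<pre>
Ts   = 1/30;                    % assumed sample time of the top-camera data
data = iddata(y_pos, u_b, Ts);  % output: measured position, input: command b
sys  = ssest(data, 2);          % estimate a 2nd-order state-space model
compare(data, sys);             % validation: how well the model fits the measured response
bode(sys);                      % frequency response of the identified model
</pre>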
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The built-in properties of the drone, as given on the manufacturer’s website, are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and it streams high-quality HD video to the phone. The drone has a front camera whose capabilities are given in Table 1, and it has its own built-in computer, controller, driver electronics etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, the drone's own structure, control electronics and software are used for the positioning of the drone. Moreover, designing a drone controller from scratch is complicated and out of scope for this project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is mounted at the front of the drone; however, for refereeing it should look downwards. Therefore it will be disassembled and connected to a swivel so that it can be tilted down 90 degrees. This requires some changes to the structure; when this modification is finished, it will be documented here.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial and non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB; however, taking snapshots from the drone camera directly in MATLAB is not possible with the built-in software. Therefore an indirect route is required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at a higher resolution, this resolution is used to reduce the required processing time.<br />
<br />
====FOV Measurement ====<br />
One of the most important properties of a vision system is the field of view (FOV) angle, whose definition can be seen in the figure below. The captured images have a 16:9 aspect ratio. Using this fact, the measurements showed a FOV of roughly 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and the obtained results are summarized in Table 2; the corresponding distance per pixel is calculated at the standard resolution (640x360). A sketch of the underlying computation is given after the tables.<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
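The computation behind these numbers can be sketched as follows; the height and width below are example values only, the measured ones are listed in Table 2.<br />
<pre>
% Derive the horizontal FOV angle and the distance per pixel from one measurement.
h        = 1.0;                     % camera height above the floor [m]   (example value)
W        = 1.4;                     % measured width of the visible floor [m] (example value)
fov_h    = 2 * atand((W / 2) / h);  % horizontal FOV angle [deg]
m_per_px = W / 640;                 % distance per pixel at 640x360 resolution [m]
</pre>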
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all UDP object properties not mentioned here, MATLAB's default values are used. A minimal sketch of this initialization is given below.<br />
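The sketch below uses MATLAB UDP objects with the values listed above; the remote ports are an assumption, taken equal to the local ports following the AR.Drone SDK convention.<br />
<pre>
% Control and Navdata UDP objects (Instrument Control Toolbox).
control = udp('192.168.1.1', 5556, 'LocalPort', 5556);
navdata = udp('192.168.1.1', 5554, 'LocalPort', 5554, ...
              'Timeout', 0.001, ...           % 1 ms
              'InputBufferSize', 500, ...     % bytes
              'ByteOrder', 'littleEndian');
fopen(control);
fopen(navdata);
</pre>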
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a horizontal-plane reference has to be set for the drone's internal control system by sending the FTRIM command. <ref name=sdk /><br />
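For illustration, sending this command over the control UDP object could look as follows; the sequence-number handling is simplified here and the exact AT command syntax is defined in the SDK.<br />
<pre>
seq = seq + 1;                                    % AT command sequence number must increase
fwrite(control, sprintf('AT*FTRIM=%d\r', seq));   % set the flat-trim (horizontal) reference
</pre>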
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be regarded as a block which expects a UDP packet containing a string as input and which returns an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. More precisely, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively, the third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows (a hypothetical wrapper signature is sketched after the list):<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
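A hypothetical signature for such a wrapper is sketched below; the names are illustrative and the Navdata decoding itself is omitted.<br />
<pre>
function out = drone_wrapper(u)
% u   : [tilt_x, tilt_y, v_z, yaw_rate], each element in [-1, 1]
% out : [battery, roll, pitch, yaw, vx, vy, altitude]
% Internally: format u into an AT command string, send it over the control UDP object,
% read the 500-byte Navdata packet and decode the fields listed above.
out = zeros(7, 1);   % placeholder; the actual decoding depends on the Navdata layout
end
</pre>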
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. the image processing) might be slower. This is not a problem, since the positioning of the drone is neither perfect nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, the result is acceptable.<br />
<br />
== Ai-Ball ==<br />
<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle was constructed and programmed to be a football-playing robot. The details of the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it is used as a referee. The software developed at TechUnited did not need any further extension, as part of the extensive code base could be reused to fulfil the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki page.<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. It contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to drive three omni-wheels independently. On the left, a copy with a protective cover is shown. This cover must prevent the robots from being damaged when they collide; since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script that runs on the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, processes them and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
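As an example of the first link in this chain, sending a command string from MATLAB to the Raspberry Pi could look as follows; the IP address, port and command format are placeholders, the actual protocol is defined in the GitHub repository.<br />
<pre>
player = udp('192.168.1.20', 5555);       % IP address and port are illustrative only
fopen(player);
fprintf(player, 'vx 0.2 vy 0.0 w 0.0');   % command string format is a placeholder
fclose(player);
</pre>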
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the on-going game. With image-processing algorithms, useful information can be extracted from the game and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. The details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle that is used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. These algorithms and software were created in MATLAB and Simulink, while the TechUnited software is written in C and runs on Ubuntu. The player robots from TechUnited communicate with each other via the UDP protocol, which is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was used. As stated earlier, this data consists of the locations of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken from the TechUnited code base. It consists of functions which extract the necessary information from the image-processing outputs on the Turtle by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB’s environment; the data is then sent to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink. The code can be accessed through the repository. <br />
<br />
=References=<br />
<references/></div>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Bound (BOOP)<br />
* Detect Collision<br />
<br />
The skill that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image-processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we will not alter it and use it as is. Preferably we could use this software to also process the images from the drone. However, trying to understand years worth of code in order to make it useable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use Matlab's image processing toolbox to process the drone images. The images comming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first transpose the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Detect lines ===<br />
<br />
=== Detect balls ===<br />
<br />
=== Detect objects ===<br />
<br />
== Refereeing ==<br />
<br />
=== B.O.O.P. ===<br />
<br />
=== Collision detection ===<br />
<br />
<br />
== Locating skills ==<br />
<br />
=== Locate agents ===<br />
<br />
=== Locate objects ===<br />
<br />
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating optimal path for agents to send it as a desired position for their controllers. In the system Architecture, the coordinator block decides about the skill that needs to be performed by agents. For instance, this blocks sends detect ball for agent A (drone) and locate player for agent B as a task. Then, path-planning block requests from the World-Model latest information about the target object position and velocity as well as the position and velocity of agents. Using information, the Path-Planning block will then generate reference point for an agent controller. As it is shown in Fig.1, it is assumed that the world model is able to provide position and velocity of objects like ball whether it has been updated by agent camera or not. In the latter case, the particle filter gives an estimation of ball position and velocity based on dynamics of the ball. Therefore, it is assumed that estimated information about an object which is assigned by coordinator is available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
There are two factors that have been addressed in the Path-Planning block. The first one is related to the case of multiple drone in order to avoid collision between them. Second, generating an optimal path as reference input for drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As we discussed earlier, the coordinator assigned a task to an agent for locating an object in a field. Subsequently, the world model would provide the path planner with the latest update of position and velocity of that object. Path-Planning block simply could us the position of the ball and send it to agent controller as a reference input. This could be a good decision when the agent and the object are relatively closed to each other. However, it is possible to take into account the velocity vector of the object in an efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As it is shown in Fig.2, in a case of far distance between drone and ball, the drone should track the position ahead of the object to meet it in the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curve trajectory (red line). However, if the estimated position of the ball in time ahead sent as a reference, the trajectory would be in a less curvy shape with less distance (blue line). This approach would result in better performance of tracking system, but more computational effort is needed. The problem that will be arises is the optimal time ahead t0 that should be set as a desired reference. To solve, we require a model of the drone motion with the controller to calculate the time it takes to reach a certain point given the initial condition of the drone. Then, in the searching Algorithm, for each time step ahead of the ball, the time to target (TT) for the drone will be calculated (see Fig.3). The target position is simply calculated based on the time ahead. The reference position is then the position that satisfies the equation t0=TT. Hence, the reference position would be [x(t+t0), y(t+t0)], instead of [x(t), y(t)]. It should be noted that, this approach wouldn’t be much effective in a case that the drone and object are close to each other. Furthermore, for the ground agents, that moves only in one direction, the same strategy could be applied. For the ground robot, the reference value should be determined only in moving direction of the turtle. Hence, only X component (turtle moving direction) of position and velocity of the interested object must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
<br />
<br />
=== Collision avoidance ===<br />
When Drones are flying above a field, the path planning should create a path for agents in a way that avoid collision between them. This can be done in collision avoidance block that has higher priority compared to optimal path planning that is calculated based on objective of drones (see Fig.4). Collision Avoidance-block is triggered when the drones state meet certain criteria that indicate imminent collision between them. Supervisory control then switch to the collision avoidance mode to repel the drones from getting closer. This is fulfilled by sending a relatively strong command to drones in a direction that maintain safe distance. Command as a velocity must be perpendicular to velocity vector of each drone. This is being sent to the LLC as a velocity command in the direction that results in collision avoidance and will be stopped after the drones are in safe positions. In this project, since we are dealing with only one drone, implementation of collision avoidance will not be conducted. However, it could be a possible area of interest to other to continue with this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurement are both used by the same particle filter, as it does not matter from what source the measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In its current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
<br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
== Drone ==<br />
In the autonomous referee project, commercially available AR Parrot Drone Elite Edition 2.0 is used for the refereeing issues. The built-in properties of the drone that given in the manufacturer’s website are listed below in Table 1. Note that only the useful properties are covered, the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product and it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and send high quality HD streaming videos to the mobile phone. The drone has a front camera whose capabilities are given in Table 1. It has its own built-in computer, controller, driver electronics etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, the drone own structure, control electronics and software are decided to use for positioning of the drone. Apart from that, the controlling of a drone is complicated and is also out of scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera and this camera is used to catch images. The camera is placed in front of the drone. However for the refereeing, it should look to the bottom side. Therefore it will be disassembled and will be connected to a swivel to tilt down 90 degrees. This will create some change in the structures. When this change is finished, it will be added here.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial and non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to reach some data on the drone including the images of the camera. The image processing will be achieved in MATLAB. However, taking snapshots from the drone camera directly using MATLAB is not possible with its built-in software. Therefore an indirect way is required and this causes some processing time. The best time obtained with the current capturing algorithm is 0.4 Hz using 360p standard resolution (640x360). Although the camera can capture images with higher resolution, processing will be achieved using this resolution to decrease the required processing time.<br />
<br />
====FOV Measurement ====<br />
One of the most important properties of a vision system is the field of view (FOV) angle. The definition of the field of view angle can be seen in the figure. The captured images has a ratio of 16:9. Using this fact and after some measurements the achieved measurements showed that it is near to 70° view although given that the camera has 92° diagonal FOV. The achieved measurements and obtained results are summarized in Table 2 . Here corresponding distance per pixel is calculated in standard resolution (640x360).<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <litte-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block with expects a UDP packet containing a string as input and which gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that the both the input and output are doubles. To be more precise, the input is a vector of four values between -1 and 1 where the first two represent the tilt in front (x) and left (y) direction respectively. The third value is the speed in vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images with a framerate of 30 Hz to the laptop, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone itself is far from perfect and not critical as well. As long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball ==<br />
<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was to be used as a referee. All the software that has been developed at TechUnited did not need any further expansion as some part of the extensive code could be used to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. In the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. Left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they are colliding. Since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The python script can receive strings via UDP over Wi-Fi. Furthermore, it processes these strings and sends commands to the Arduino via USB. To control the drone with a Windows device, MATLAB functions are implemented. Moreover, an Android application is developed to be able to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turle allow the robot to take images of the on-going game. With image processing algorithms useful information from the game can be extracted and a mapping of the game-state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. This location is with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the turtle) and maintained (updated regularly) in a real-time data-base (RTDb) which is called the WorldMap. The details on this can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a Robocup match, the participating robots, maintain this data-base locally. Therefore, the Turtle which is used for the referee system, has a locally stored global map of the environment. This information was needed to be extracted from the Turtle and fused with the other algorithms and software that was developed for the drone. These algorithms and software were created on MATLAB and Simulink while the TechUnited software is written in C and uses Ubuntu as the operating system. The player-robots from TechUnited, communicate with each other via the UDP communication protocol and this is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited which is used to communicate (i.e. send and receive date) with the Turtle. Of all the data that is received from the Turtle, only a part of it was handpicked as it suited the needs of the project the best. This data, as stated earlier is information on the location of the turtle, the ball and the players.<br> <br />
A small piece of code from the code-base of TechUnited was taken out. This piece consisted of functions which extracted the necessary information from the outputs generated by image processing running on the Turtle by listening to this information through the S-funcation [[sf_test_rMS_wMM.c]]''Italic text'' created in MATLAB’s environment and sent to the main computer (running on Windows) via the UDP Send and UDP Receive block in ''Simunlink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=39697Implementation MSD162017-05-04T20:15:05Z<p>Asinha: /* Ball position filter and sensor fusion */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Bound (BOOP)<br />
* Detect Collision<br />
<br />
The skill that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image-processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we will not alter it and use it as is. Preferably we could use this software to also process the images from the drone. However, trying to understand years worth of code in order to make it useable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use Matlab's image processing toolbox to process the drone images. The images comming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first transpose the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Detect lines ===<br />
<br />
=== Detect balls ===<br />
<br />
=== Detect objects ===<br />
<br />
== Refereeing ==<br />
<br />
=== B.O.O.P. ===<br />
<br />
=== Collision detection ===<br />
<br />
<br />
== Locating skills ==<br />
<br />
=== Locate agents ===<br />
<br />
=== Locate objects ===<br />
<br />
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating optimal path for agents to send it as a desired position for their controllers. In the system Architecture, the coordinator block decides about the skill that needs to be performed by agents. For instance, this blocks sends detect ball for agent A (drone) and locate player for agent B as a task. Then, path-planning block requests from the World-Model latest information about the target object position and velocity as well as the position and velocity of agents. Using information, the Path-Planning block will then generate reference point for an agent controller. As it is shown in Fig.1, it is assumed that the world model is able to provide position and velocity of objects like ball whether it has been updated by agent camera or not. In the latter case, the particle filter gives an estimation of ball position and velocity based on dynamics of the ball. Therefore, it is assumed that estimated information about an object which is assigned by coordinator is available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
There are two factors that have been addressed in the Path-Planning block. The first one is related to the case of multiple drone in order to avoid collision between them. Second, generating an optimal path as reference input for drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As we discussed earlier, the coordinator assigned a task to an agent for locating an object in a field. Subsequently, the world model would provide the path planner with the latest update of position and velocity of that object. Path-Planning block simply could us the position of the ball and send it to agent controller as a reference input. This could be a good decision when the agent and the object are relatively closed to each other. However, it is possible to take into account the velocity vector of the object in an efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As it is shown in Fig.2, in a case of far distance between drone and ball, the drone should track the position ahead of the object to meet it in the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curve trajectory (red line). However, if the estimated position of the ball in time ahead sent as a reference, the trajectory would be in a less curvy shape with less distance (blue line). This approach would result in better performance of tracking system, but more computational effort is needed. The problem that will be arises is the optimal time ahead t0 that should be set as a desired reference. To solve, we require a model of the drone motion with the controller to calculate the time it takes to reach a certain point given the initial condition of the drone. Then, in the searching Algorithm, for each time step ahead of the ball, the time to target (TT) for the drone will be calculated (see Fig.3). The target position is simply calculated based on the time ahead. The reference position is then the position that satisfies the equation t0=TT. Hence, the reference position would be [x(t+t0), y(t+t0)], instead of [x(t), y(t)]. It should be noted that, this approach wouldn’t be much effective in a case that the drone and object are close to each other. Furthermore, for the ground agents, that moves only in one direction, the same strategy could be applied. For the ground robot, the reference value should be determined only in moving direction of the turtle. Hence, only X component (turtle moving direction) of position and velocity of the interested object must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
<br />
<br />
=== Collision avoidance ===<br />
When Drones are flying above a field, the path planning should create a path for agents in a way that avoid collision between them. This can be done in collision avoidance block that has higher priority compared to optimal path planning that is calculated based on objective of drones (see Fig.4). Collision Avoidance-block is triggered when the drones state meet certain criteria that indicate imminent collision between them. Supervisory control then switch to the collision avoidance mode to repel the drones from getting closer. This is fulfilled by sending a relatively strong command to drones in a direction that maintain safe distance. Command as a velocity must be perpendicular to velocity vector of each drone. This is being sent to the LLC as a velocity command in the direction that results in collision avoidance and will be stopped after the drones are in safe positions. In this project, since we are dealing with only one drone, implementation of collision avoidance will not be conducted. However, it could be a possible area of interest to other to continue with this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly; instead, its hypothesis is updated purely by new measurements. In case two consecutive measurements are more than 0.5 m away from the estimate at that time, the last measurement acts as the new initial value for the strong filter. <br><br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<br />
<math>v_{new}= \alpha_{v}v_{old}+\alpha_{x}\frac{z_{new}-x_{old}}{dt}+\alpha_{z}\alpha_{x}\frac{z_{new}-z_{old}}{dt}</math><br><br><br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
where v_old is the previous particle velocity, z_new and z_old are the new and previous measurements, x_old is the previous position (x, y), and dt is the time since the previous measurement.<br><br><br />
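As an illustration, a minimal MATLAB sketch of this update, including the re-initialisation after two consecutive outliers described above, could look as follows. The 0.5 m threshold and the α parameters follow the text; the position propagation and the exact re-initialised velocity are assumptions.<br />
<pre>
% Sketch of the ball-estimate update on a new measurement z_new.
% p.alpha_v, p.alpha_x, p.alpha_z are the tunable parameters from the table below.
function [x_new, v_new, n_out] = updateBallEstimate(x_old, v_old, z_new, z_old, dt, n_out, p)
    if norm(z_new - x_old) > 0.5          % measurement far from the current estimate
        n_out = n_out + 1;
        if n_out >= 2                     % two consecutive outliers: accept the jump
            x_new = z_new;
            v_new = (z_new - z_old) / dt; % velocity corresponding to the new direction
            n_out = 0;
            return
        end
        x_new = x_old + v_old * dt;       % single outlier: possible false positive, ignore it
        v_new = v_old;
        return
    end
    n_out = 0;
    v_new = p.alpha_v * v_old ...                        % velocity update as in the equation above
          + p.alpha_x * (z_new - x_old) / dt ...
          + p.alpha_z * p.alpha_x * (z_new - z_old) / dt;
    x_new = x_old + v_new * dt;                          % one possible position propagation
end
</pre>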
The tunable parameters for the filter are given in the table below. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. the measurements are trusted more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are used by the same particle filter, since it does not matter which source a measurement comes from. Ideally, each sensor passes along a confidence parameter, such as a variance in case of a normally distributed uncertainty. This variance determines how much a measurement is trusted and distinguishes accurate from inaccurate sensors. In the current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. To track them even when they are not in the current field of view, as well as to deal with multiple sensors, a particle filter is again used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) detect multiple players. Thus, the system needs to know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest-neighbour search for each incoming position measurement, to match it to the last known positions of the players in the field. However, the implemented algorithm is not optimal when this set of nearest neighbours does not correspond to a set of unique players (i.e. when two measurements are matched to the same player). In that case, the algorithm assigns the second-nearest neighbour to the second measured player. With a high update frequency and only two players, this is generally not a problem. However, with a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
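A minimal MATLAB sketch of such a matching step, greedy nearest-neighbour matching with a fall-back to the second-nearest neighbour on a conflict, is given below; the function and variable names are illustrative and not the actual implementation.<br />
<pre>
function idx = matchPlayers(meas, known)
    % meas  : m-by-2 matrix of measured player positions
    % known : n-by-2 matrix of last known player positions (from the WM)
    % idx(i): index of the known player assigned to measurement i
    m   = size(meas, 1);
    idx = zeros(m, 1);
    for i = 1:m
        d = sqrt(sum(bsxfun(@minus, known, meas(i,:)).^2, 2));  % distance to every known player
        [~, order] = sort(d);
        j = order(1);                          % nearest neighbour
        if any(idx(1:i-1) == j) && numel(order) > 1
            j = order(2);                      % already taken: use the second-nearest
        end
        idx(i) = j;
    end
end
</pre>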
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
<br />
== Particle filter ==<br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for refereeing. The properties of the drone, as given on the manufacturer’s website, are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone using free software (available for both Android and iOS) and it streams high-quality HD video to the phone. The drone has a front camera whose capabilities are given in Table 1. It has its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are robust. Therefore, in this project, the drone’s own structure, control electronics and software are used for positioning the drone. Moreover, designing a drone controller from scratch is complicated and out of the scope of this project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera and this camera is used to capture images. The camera is mounted at the front of the drone. However, for refereeing it should look downwards. Therefore, it will be disassembled and connected to a swivel so that it can be tilted down 90 degrees. This requires some structural changes; once the modification is finished, it will be documented here.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial and non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is hard to access some of the data on the drone, including the images of the camera. The image processing is done in MATLAB. However, taking snapshots from the drone camera directly in MATLAB is not possible with its built-in software. Therefore an indirect way is required, and this costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at a higher resolution, processing is done at this resolution to reduce the required processing time.<br />
<br />
====FOV Measurement ====<br />
One of the most important properties of a vision system is the field of view (FOV) angle. The definition of the field of view angle can be seen in the figure. The captured images have a 16:9 aspect ratio. Using this fact, measurements showed a FOV close to 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and results are summarized in Table 2. Here the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
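As a rough cross-check of these numbers, the ground footprint and the distance per pixel follow directly from the FOV angle, the image size and the flying height. The sketch below assumes a horizontal FOV of about 70° and an example height of 2 m; the height is an assumption, not a measured value.<br />
<pre>
% Rough footprint / resolution estimate for the downward-looking camera.
fov_h = 70 * pi/180;      % horizontal FOV [rad], measured to be roughly 70 deg
h     = 2.0;              % flying height [m] (example value)
pix_h = 640;              % horizontal resolution [pixels]
pix_v = 360;              % vertical resolution [pixels]

width     = 2 * h * tan(fov_h/2);    % footprint width on the ground [m]
height    = width * pix_v / pix_h;   % footprint height, 16:9 aspect ratio [m]
m_per_pix = width / pix_h;           % distance per pixel [m]

fprintf('footprint: %.2f x %.2f m, %.1f mm per pixel\n', width, height, 1000*m_per_pix);
</pre>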
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all UDP object properties not mentioned here, MATLAB's default values are used.<br />
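A minimal MATLAB sketch of this initialization, using udp objects from the Instrument Control Toolbox with the values listed above, could look as follows. The remote ports are assumed to equal the local ports; this is an assumption, not part of the list above.<br />
<pre>
% Sketch of the two UDP objects used to talk to the drone.
droneIP = '192.168.1.1';

% control channel: commands are sent on port 5556
u_cmd = udp(droneIP, 5556, 'LocalPort', 5556);

% navdata channel: drone status is received on port 5554
u_nav = udp(droneIP, 5554, 'LocalPort', 5554, ...
            'Timeout', 0.001, ...             % 1 ms
            'InputBufferSize', 500, ...       % 500 bytes
            'ByteOrder', 'littleEndian');

fopen(u_cmd);
fopen(u_nav);
</pre>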
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
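As a small example of the command format, a sketch of sending this command over the control channel (u_cmd from the sketch above) is shown below; the AT*FTRIM syntax with a running sequence number is taken from the AR.Drone SDK documentation and should be verified against it.<br />
<pre>
% Set the horizontal-plane reference; the drone must rest on a flat surface.
seq = 1;                                      % running AT-command sequence number
fwrite(u_cmd, sprintf('AT*FTRIM=%d\r', seq));
seq = seq + 1;
</pre>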
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and which gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and output are doubles (a sketch of such a wrapper is given after the list below). To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) direction respectively. The third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
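A minimal sketch of such a wrapper is given below. The AT*PCMD command, the float-to-integer encoding and the decodeNavdata helper are assumptions based on the AR.Drone SDK, not the actual project code, and the mapping and signs of the four inputs would have to be verified.<br />
<pre>
% Sketch of a wrapper around the drone UDP interface (illustrative only).
% u   : vector [x_tilt y_tilt z_speed psi_speed], each in [-1, 1]
% nav : vector [battery roll pitch yaw vx vy z] decoded from the 500 navdata bytes
function nav = droneWrapper(u_cmd, u_nav, u, seq)
    % the SDK encodes floating-point arguments as the signed 32-bit integer
    % with the same bit pattern
    f2i = @(f) typecast(single(f), 'int32');
    cmd = sprintf('AT*PCMD=%d,1,%d,%d,%d,%d\r', seq, ...
                  f2i(u(2)), f2i(u(1)), f2i(u(3)), f2i(u(4)));  % roll, pitch, gaz, yaw (mapping to be verified)
    fwrite(u_cmd, cmd);

    raw = fread(u_nav, 500);           % one 500-byte navdata packet
    nav = decodeNavdata(raw);          % hypothetical decoding helper (not shown)
end
</pre>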
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone is neither highly accurate nor critical. As long as the target of interest (ball, players) is within the field of view of the drone, this is acceptable.<br />
<br />
== Ai-Ball ==<br />
<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it is used as a referee. The software developed at TechUnited did not need any further expansion, as part of the extensive code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. On the left, a copy with a protective cover is shown. This cover prevents the robots from being damaged when they collide. Since one of the goals of the project is to detect collisions, the robots must be able to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, processes these strings and sends commands to the Arduino via USB. To control the robot with a Windows device, MATLAB functions are implemented. Moreover, an Android application is developed to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
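Purely as an illustration of this chain, a MATLAB sketch of sending a command string to the Raspberry Pi over UDP is shown below. The IP address, port and string format are hypothetical; the actual protocol is defined by the Python script in the GitHub repository.<br />
<pre>
% Hypothetical example of commanding a player robot over Wi-Fi (UDP).
robotIP   = '192.168.1.50';            % assumed address of the Raspberry Pi
robotPort = 9000;                      % assumed port used by the Python script

u_robot = udp(robotIP, robotPort);
fopen(u_robot);
fwrite(u_robot, 'vel 0.2 0.0 0.1');    % hypothetical command: three velocity setpoints
fclose(u_robot);
</pre>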
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image processing algorithms, useful information about the game can be extracted and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are expressed with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDB) called the WorldMap. The details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle which is used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. These algorithms and software were created in MATLAB and Simulink, while the TechUnited software is written in C and runs on Ubuntu. The player-robots from TechUnited communicate with each other via the UDP protocol, which is handled by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data that is received from the Turtle, only the part that best suited the needs of the project was handpicked. This data, as stated earlier, is the information on the location of the turtle, the ball and the players.<br> <br />
A small piece of code was taken from the TechUnited code base. It consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB’s environment, and sending it to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink. The code can be accessed through the repository.<br />
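To give an impression of the Windows side of this link, a hedged MATLAB sketch of receiving such a packet is shown below. The port number and the packet layout (eight doubles holding the turtle, ball and player positions) are assumptions; the real layout is defined by the S-function and the Simulink UDP blocks.<br />
<pre>
% Hypothetical receive sketch on the Windows PC (Instrument Control Toolbox).
u_rx = udp('', 'LocalPort', 25001);    % port number is an assumption
fopen(u_rx);

raw = fread(u_rx, 8, 'double');        % assumed layout: 8 doubles per packet
turtle  = raw(1:2);                    % [x y] turtle position
ball    = raw(3:4);                    % [x y] ball position
players = reshape(raw(5:8), 2, 2)';    % two players, [x y] each

fclose(u_rx);
</pre>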
<br />
=References=<br />
<references/></div>
<hr />
<div></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=39695Implementation MSD162017-05-04T20:14:06Z<p>Asinha: /* World Model */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Bound (BOOP)<br />
* Detect Collision<br />
<br />
The skill that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image-processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we will not alter it and use it as is. Preferably we could use this software to also process the images from the drone. However, trying to understand years worth of code in order to make it useable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use Matlab's image processing toolbox to process the drone images. The images comming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first transpose the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Detect lines ===<br />
<br />
=== Detect balls ===<br />
<br />
=== Detect objects ===<br />
<br />
== Refereeing ==<br />
<br />
=== B.O.O.P. ===<br />
<br />
=== Collision detection ===<br />
<br />
<br />
== Locating skills ==<br />
<br />
=== Locate agents ===<br />
<br />
=== Locate objects ===<br />
<br />
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating optimal path for agents to send it as a desired position for their controllers. In the system Architecture, the coordinator block decides about the skill that needs to be performed by agents. For instance, this blocks sends detect ball for agent A (drone) and locate player for agent B as a task. Then, path-planning block requests from the World-Model latest information about the target object position and velocity as well as the position and velocity of agents. Using information, the Path-Planning block will then generate reference point for an agent controller. As it is shown in Fig.1, it is assumed that the world model is able to provide position and velocity of objects like ball whether it has been updated by agent camera or not. In the latter case, the particle filter gives an estimation of ball position and velocity based on dynamics of the ball. Therefore, it is assumed that estimated information about an object which is assigned by coordinator is available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
There are two factors that have been addressed in the Path-Planning block. The first one is related to the case of multiple drone in order to avoid collision between them. Second, generating an optimal path as reference input for drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As we discussed earlier, the coordinator assigned a task to an agent for locating an object in a field. Subsequently, the world model would provide the path planner with the latest update of position and velocity of that object. Path-Planning block simply could us the position of the ball and send it to agent controller as a reference input. This could be a good decision when the agent and the object are relatively closed to each other. However, it is possible to take into account the velocity vector of the object in an efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As it is shown in Fig.2, in a case of far distance between drone and ball, the drone should track the position ahead of the object to meet it in the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curve trajectory (red line). However, if the estimated position of the ball in time ahead sent as a reference, the trajectory would be in a less curvy shape with less distance (blue line). This approach would result in better performance of tracking system, but more computational effort is needed. The problem that will be arises is the optimal time ahead t0 that should be set as a desired reference. To solve, we require a model of the drone motion with the controller to calculate the time it takes to reach a certain point given the initial condition of the drone. Then, in the searching Algorithm, for each time step ahead of the ball, the time to target (TT) for the drone will be calculated (see Fig.3). The target position is simply calculated based on the time ahead. The reference position is then the position that satisfies the equation t0=TT. Hence, the reference position would be [x(t+t0), y(t+t0)], instead of [x(t), y(t)]. It should be noted that, this approach wouldn’t be much effective in a case that the drone and object are close to each other. Furthermore, for the ground agents, that moves only in one direction, the same strategy could be applied. For the ground robot, the reference value should be determined only in moving direction of the turtle. Hence, only X component (turtle moving direction) of position and velocity of the interested object must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
<br />
<br />
=== Collision avoidance ===<br />
When Drones are flying above a field, the path planning should create a path for agents in a way that avoid collision between them. This can be done in collision avoidance block that has higher priority compared to optimal path planning that is calculated based on objective of drones (see Fig.4). Collision Avoidance-block is triggered when the drones state meet certain criteria that indicate imminent collision between them. Supervisory control then switch to the collision avoidance mode to repel the drones from getting closer. This is fulfilled by sending a relatively strong command to drones in a direction that maintain safe distance. Command as a velocity must be perpendicular to velocity vector of each drone. This is being sent to the LLC as a velocity command in the direction that results in collision avoidance and will be stopped after the drones are in safe positions. In this project, since we are dealing with only one drone, implementation of collision avoidance will not be conducted. However, it could be a possible area of interest to other to continue with this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<math>v_{new}= \alpha_{v}v_{old}+\alpha_{x}\frac{z_{new}-x_{old}}{dt}+\alpha_{z}\alpha_{x}\frac{z_{new}-z_{old}}{dt}</math><br><br><br />
<center>[[File:equation_wm_particle.png|thumb|center|750px|]]</center><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurement are both used by the same particle filter, as it does not matter from what source the measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In its current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
<br />
== Particle filter ==<br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
== Drone ==<br />
In the autonomous referee project, commercially available AR Parrot Drone Elite Edition 2.0 is used for the refereeing issues. The built-in properties of the drone that given in the manufacturer’s website are listed below in Table 1. Note that only the useful properties are covered, the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product and it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and send high quality HD streaming videos to the mobile phone. The drone has a front camera whose capabilities are given in Table 1. It has its own built-in computer, controller, driver electronics etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, the drone own structure, control electronics and software are decided to use for positioning of the drone. Apart from that, the controlling of a drone is complicated and is also out of scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera and this camera is used to catch images. The camera is placed in front of the drone. However for the refereeing, it should look to the bottom side. Therefore it will be disassembled and will be connected to a swivel to tilt down 90 degrees. This will create some change in the structures. When this change is finished, it will be added here.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial and non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to reach some data on the drone including the images of the camera. The image processing will be achieved in MATLAB. However, taking snapshots from the drone camera directly using MATLAB is not possible with its built-in software. Therefore an indirect way is required and this causes some processing time. The best time obtained with the current capturing algorithm is 0.4 Hz using 360p standard resolution (640x360). Although the camera can capture images with higher resolution, processing will be achieved using this resolution to decrease the required processing time.<br />
<br />
====FOV Measurement ====<br />
One of the most important properties of a vision system is the field of view (FOV) angle. The definition of the field of view angle can be seen in the figure. The captured images has a ratio of 16:9. Using this fact and after some measurements the achieved measurements showed that it is near to 70° view although given that the camera has 92° diagonal FOV. The achieved measurements and obtained results are summarized in Table 2 . Here corresponding distance per pixel is calculated in standard resolution (640x360).<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <litte-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block with expects a UDP packet containing a string as input and which gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that the both the input and output are doubles. To be more precise, the input is a vector of four values between -1 and 1 where the first two represent the tilt in front (x) and left (y) direction respectively. The third value is the speed in vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images with a framerate of 30 Hz to the laptop, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone itself is far from perfect and not critical as well. As long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball ==<br />
<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was to be used as a referee. All the software that has been developed at TechUnited did not need any further expansion as some part of the extensive code could be used to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. In the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. Left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they are colliding. Since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The python script can receive strings via UDP over Wi-Fi. Furthermore, it processes these strings and sends commands to the Arduino via USB. To control the drone with a Windows device, MATLAB functions are implemented. Moreover, an Android application is developed to be able to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turle allow the robot to take images of the on-going game. With image processing algorithms useful information from the game can be extracted and a mapping of the game-state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. This location is with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the turtle) and maintained (updated regularly) in a real-time data-base (RTDb) which is called the WorldMap. The details on this can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a Robocup match, the participating robots, maintain this data-base locally. Therefore, the Turtle which is used for the referee system, has a locally stored global map of the environment. This information was needed to be extracted from the Turtle and fused with the other algorithms and software that was developed for the drone. These algorithms and software were created on MATLAB and Simulink while the TechUnited software is written in C and uses Ubuntu as the operating system. The player-robots from TechUnited, communicate with each other via the UDP communication protocol and this is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited which is used to communicate (i.e. send and receive date) with the Turtle. Of all the data that is received from the Turtle, only a part of it was handpicked as it suited the needs of the project the best. This data, as stated earlier is information on the location of the turtle, the ball and the players.<br> <br />
A small piece of code from the code-base of TechUnited was taken out. This piece consisted of functions which extracted the necessary information from the outputs generated by image processing running on the Turtle by listening to this information through the S-funcation [[sf_test_rMS_wMM.c]]''Italic text'' created in MATLAB’s environment and sent to the main computer (running on Windows) via the UDP Send and UDP Receive block in ''Simunlink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=File:Particle_filter_Parameters.png&diff=39694File:Particle filter Parameters.png2017-05-04T20:10:57Z<p>Asinha: </p>
<hr />
<div></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=39693Implementation MSD162017-05-04T20:10:48Z<p>Asinha: /* World Model */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Bound (BOOP)<br />
* Detect Collision<br />
<br />
The skill that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image-processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we will not alter it and use it as is. Preferably we could use this software to also process the images from the drone. However, trying to understand years worth of code in order to make it useable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use Matlab's image processing toolbox to process the drone images. The images comming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first transpose the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Detect lines ===<br />
<br />
=== Detect balls ===<br />
<br />
=== Detect objects ===<br />
<br />
== Refereeing ==<br />
<br />
=== B.O.O.P. ===<br />
<br />
=== Collision detection ===<br />
<br />
<br />
== Locating skills ==<br />
<br />
=== Locate agents ===<br />
<br />
=== Locate objects ===<br />
<br />
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating optimal path for agents to send it as a desired position for their controllers. In the system Architecture, the coordinator block decides about the skill that needs to be performed by agents. For instance, this blocks sends detect ball for agent A (drone) and locate player for agent B as a task. Then, path-planning block requests from the World-Model latest information about the target object position and velocity as well as the position and velocity of agents. Using information, the Path-Planning block will then generate reference point for an agent controller. As it is shown in Fig.1, it is assumed that the world model is able to provide position and velocity of objects like ball whether it has been updated by agent camera or not. In the latter case, the particle filter gives an estimation of ball position and velocity based on dynamics of the ball. Therefore, it is assumed that estimated information about an object which is assigned by coordinator is available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
There are two factors that have been addressed in the Path-Planning block. The first one is related to the case of multiple drone in order to avoid collision between them. Second, generating an optimal path as reference input for drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As we discussed earlier, the coordinator assigned a task to an agent for locating an object in a field. Subsequently, the world model would provide the path planner with the latest update of position and velocity of that object. Path-Planning block simply could us the position of the ball and send it to agent controller as a reference input. This could be a good decision when the agent and the object are relatively closed to each other. However, it is possible to take into account the velocity vector of the object in an efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As it is shown in Fig.2, in a case of far distance between drone and ball, the drone should track the position ahead of the object to meet it in the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curve trajectory (red line). However, if the estimated position of the ball in time ahead sent as a reference, the trajectory would be in a less curvy shape with less distance (blue line). This approach would result in better performance of tracking system, but more computational effort is needed. The problem that will be arises is the optimal time ahead t0 that should be set as a desired reference. To solve, we require a model of the drone motion with the controller to calculate the time it takes to reach a certain point given the initial condition of the drone. Then, in the searching Algorithm, for each time step ahead of the ball, the time to target (TT) for the drone will be calculated (see Fig.3). The target position is simply calculated based on the time ahead. The reference position is then the position that satisfies the equation t0=TT. Hence, the reference position would be [x(t+t0), y(t+t0)], instead of [x(t), y(t)]. It should be noted that, this approach wouldn’t be much effective in a case that the drone and object are close to each other. Furthermore, for the ground agents, that moves only in one direction, the same strategy could be applied. For the ground robot, the reference value should be determined only in moving direction of the turtle. Hence, only X component (turtle moving direction) of position and velocity of the interested object must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
<br />
<br />
=== Collision avoidance ===<br />
When Drones are flying above a field, the path planning should create a path for agents in a way that avoid collision between them. This can be done in collision avoidance block that has higher priority compared to optimal path planning that is calculated based on objective of drones (see Fig.4). Collision Avoidance-block is triggered when the drones state meet certain criteria that indicate imminent collision between them. Supervisory control then switch to the collision avoidance mode to repel the drones from getting closer. This is fulfilled by sending a relatively strong command to drones in a direction that maintain safe distance. Command as a velocity must be perpendicular to velocity vector of each drone. This is being sent to the LLC as a velocity command in the direction that results in collision avoidance and will be stopped after the drones are in safe positions. In this project, since we are dealing with only one drone, implementation of collision avoidance will not be conducted. However, it could be a possible area of interest to other to continue with this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<math>v_{new}= \alpha_{v}v_{old}+\alpha_{x}\frac{z_{new}-x_{old}}{dt}+\alpha_{z}\alpha_{x}\frac{z_{new}-z_{old}}{dt}</math><br><br><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<center>[[File:particle_filter_Parameters.png|thumb|center|750px|]]</center><br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurement are both used by the same particle filter, as it does not matter from what source the measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In its current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are nearest to them. It performs a nearest-neighbor search for each incoming position measurement, to match it to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm assigns the second nearest neighbor to the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
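A minimal MATLAB sketch of such a matching step is given below; it is illustrative only, not the project ‘Match’ function, and it resolves conflicts greedily in the order in which the measurements arrive.<br><br />
<pre>
% Minimal sketch (illustrative, not the project 'Match' function): assign each
% measurement to the nearest last-known player position; if that player is
% already taken, fall back to the nearest remaining player.
function idx = matchPlayers(z, p)
    % z: M x 2 measured positions, p: N x 2 last known player positions (M <= N)
    idx = zeros(size(z, 1), 1);
    for m = 1:size(z, 1)
        d = sqrt(sum(bsxfun(@minus, p, z(m, :)).^2, 2)); % distance to every player
        d(idx(1:m-1)) = inf;       % players already matched are no longer available
        [~, idx(m)]   = min(d);    % nearest remaining player
    end
end
</pre>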
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
<br />
== Particle filter ==<br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for refereeing. The built-in properties of the drone, as given on the manufacturer’s website, are listed below in Table 1. Note that only the properties relevant to this project are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (for both Android and iOS) and it streams high-quality HD video to the phone. The drone has a front camera whose capabilities are given in Table 1. It has its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, the drone’s own structure, control electronics and software are used for positioning the drone. Apart from that, designing a controller for a drone is complicated and out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera and this camera is used to capture images. The camera is placed at the front of the drone. However, for refereeing it should look downwards. Therefore it will be disassembled and connected to a swivel so it can tilt down 90 degrees. This requires some changes to the structure; when these changes are finished, they will be documented here.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial and non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access some of the data on the drone, including the images of the camera. The image processing is done in MATLAB. However, taking snapshots from the drone camera directly in MATLAB is not possible with its built-in software. Therefore an indirect route is required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz (one frame every 2.5 s) using the standard 360p resolution (640x360). Although the camera can capture images at a higher resolution, this resolution is used to decrease the required processing time.<br />
<br />
====FOV Measurement ====<br />
One of the most important properties of a vision system is the field of view (FOV) angle. The definition of the field of view angle can be seen in the figure. The captured images have an aspect ratio of 16:9. Using this fact, the measurements showed a horizontal FOV of close to 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and the obtained results are summarized in Table 2. Here the corresponding distance per pixel is calculated for the standard resolution (640x360).<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
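The distance-per-pixel values in Table 2 follow from simple trigonometry. The MATLAB sketch below illustrates the calculation; the 2 m camera height is only an example value, not one of the measured heights.<br />
<pre>
% Minimal sketch of the ground-coverage calculation (the 2 m height is only an
% illustrative value, not one of the measured heights from Table 2).
fovH   = 70 * pi/180;                         % measured horizontal FOV [rad]
height = 2.0;                                 % camera height above the field [m]
widthOnGround  = 2 * height * tan(fovH/2);    % horizontal ground coverage [m]
metersPerPixel = widthOnGround / 640;         % at the standard 640x360 resolution
fprintf('Coverage %.2f m, %.4f m per pixel\n', widthOnGround, metersPerPixel);
</pre>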
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all UDP object properties that are not mentioned here, MATLAB's default values are used.<br />
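For illustration, the initialization could look as follows in MATLAB. This sketch assumes the Instrument Control Toolbox, and assumes that the remote ports mirror the local ports, as in the AR.Drone SDK.<br />
<pre>
% Minimal sketch of the UDP initialization with the values listed above.
ctrl = udp('192.168.1.1', 5556, 'LocalPort', 5556);   % AT command (control) channel
nav  = udp('192.168.1.1', 5554, 'LocalPort', 5554, ...
           'Timeout', 0.001, ...                      % 1 ms
           'InputBufferSize', 500, ...                % 500 bytes
           'ByteOrder', 'littleEndian');
fopen(ctrl);
fopen(nav);
</pre>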
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference for the horizontal plane has to be set for the drone's internal control system by sending the FTRIM command. <ref name=sdk /><br />
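According to the SDK, AT*FTRIM takes only a sequence number and is terminated by a carriage return. A minimal, self-contained sketch of sending it over the control channel is shown below.<br />
<pre>
% Minimal sketch: send the flat-trim command over the control channel.
ctrl = udp('192.168.1.1', 5556, 'LocalPort', 5556);   % control channel (see above)
fopen(ctrl);
fprintf(ctrl, 'AT*FTRIM=%d\r', 1);                    % sequence number 1
fclose(ctrl);
</pre>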
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block which expects a UDP packet containing a string as input and which gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) direction respectively. The third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. A sketch of the command-encoding side of such a wrapper is given below the list. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
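The following MATLAB sketch illustrates how the four-element input could be turned into an AT command string; it is not the project wrapper, and the exact mapping of the inputs onto the (roll, pitch, gaz, yaw) arguments is an assumption that should be checked against the SDK.<br />
<pre>
% Minimal sketch (illustrative, not the project wrapper): the AR.Drone AT*PCMD
% command transmits each float argument as the signed 32-bit integer that shares
% its IEEE-754 bit pattern.
function cmd = pcmdString(seq, u)
    % u = [tilt_x, tilt_y, v_z, yaw_rate], each between -1 and 1 (mapping assumed)
    toInt = @(v) typecast(single(v), 'int32');
    cmd = sprintf('AT*PCMD=%d,1,%d,%d,%d,%d\r', seq, ...
                  toInt(u(2)), toInt(u(1)), toInt(u(3)), toInt(u(4)));
end
</pre>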
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone itself is neither very accurate nor critical. As long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball ==<br />
<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the TURTLE was constructed and programmed to be a football-playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it is used as a referee. The software that has been developed at TechUnited did not need any further expansion, as part of the extensive code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. To the left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they collide. Since one of the goals of the project is to detect collisions, the robots must be able to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi; it processes these strings and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application is developed to be able to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
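For illustration, sending such a command string from MATLAB could look as follows; the IP address, port and command format below are placeholders, since the actual grammar is defined in the repository.<br />
<pre>
% Minimal sketch: send a command string to the Python script on the Raspberry Pi.
% The address, port and command text are placeholders, not the project protocol.
bot = udp('192.168.1.20', 9999);      % placeholder robot address and port
fopen(bot);
fprintf(bot, 'vx 0.2 vy 0.0 w 0.0');  % placeholder command string
fclose(bot);
</pre>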
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image processing algorithms, useful information from the game can be extracted and a mapping of the game-state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. This location is with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) which is called the WorldMap. The details on this can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle which is used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. These algorithms and software were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as the operating system. The player-robots from TechUnited communicate with each other via the UDP communication protocol, and this is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (development PC) from TechUnited, which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data that is received from the Turtle, only the part that suited the needs of the project best was handpicked. This data, as stated earlier, is information on the location of the Turtle, the ball and the players.<br> <br />
A small piece of code from the code-base of TechUnited was taken out. This piece consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB’s environment, and sending it to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink. The code can be accessed through the repository.<br />
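As an illustration of the receiving end on the Windows machine, the sketch below reads one UDP packet in MATLAB and unpacks it into doubles; the packet layout (turtle, ball and two player positions) and the addresses are assumptions, since the real layout is defined by the S-function in the repository.<br />
<pre>
% Minimal sketch of the Windows-side receiver. The packet layout (eight doubles:
% turtle x/y, ball x/y, player1 x/y, player2 x/y) and addresses are assumptions.
rx = udp('192.168.1.30', 25001, 'LocalPort', 25000, 'InputBufferSize', 1024);
fopen(rx);
raw  = fread(rx, 64, 'uint8');            % 8 doubles = 64 bytes
vals = typecast(uint8(raw'), 'double');   % [xT yT xB yB xP1 yP1 xP2 yP2]
fclose(rx);
</pre>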
<br />
=References=<br />
<references/></div>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Bound (BOOP)<br />
* Detect Collision<br />
<br />
The skill that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image-processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we will not alter it and use it as is. Preferably we could use this software to also process the images from the drone. However, trying to understand years worth of code in order to make it useable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use Matlab's image processing toolbox to process the drone images. The images comming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first transpose the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Detect lines ===<br />
<br />
=== Detect balls ===<br />
<br />
=== Detect objects ===<br />
<br />
== Refereeing ==<br />
<br />
=== B.O.O.P. ===<br />
<br />
=== Collision detection ===<br />
<br />
<br />
== Locating skills ==<br />
<br />
=== Locate agents ===<br />
<br />
=== Locate objects ===<br />
<br />
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating optimal path for agents to send it as a desired position for their controllers. In the system Architecture, the coordinator block decides about the skill that needs to be performed by agents. For instance, this blocks sends detect ball for agent A (drone) and locate player for agent B as a task. Then, path-planning block requests from the World-Model latest information about the target object position and velocity as well as the position and velocity of agents. Using information, the Path-Planning block will then generate reference point for an agent controller. As it is shown in Fig.1, it is assumed that the world model is able to provide position and velocity of objects like ball whether it has been updated by agent camera or not. In the latter case, the particle filter gives an estimation of ball position and velocity based on dynamics of the ball. Therefore, it is assumed that estimated information about an object which is assigned by coordinator is available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
There are two factors that have been addressed in the Path-Planning block. The first one is related to the case of multiple drone in order to avoid collision between them. Second, generating an optimal path as reference input for drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As we discussed earlier, the coordinator assigned a task to an agent for locating an object in a field. Subsequently, the world model would provide the path planner with the latest update of position and velocity of that object. Path-Planning block simply could us the position of the ball and send it to agent controller as a reference input. This could be a good decision when the agent and the object are relatively closed to each other. However, it is possible to take into account the velocity vector of the object in an efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As it is shown in Fig.2, in a case of far distance between drone and ball, the drone should track the position ahead of the object to meet it in the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curve trajectory (red line). However, if the estimated position of the ball in time ahead sent as a reference, the trajectory would be in a less curvy shape with less distance (blue line). This approach would result in better performance of tracking system, but more computational effort is needed. The problem that will be arises is the optimal time ahead t0 that should be set as a desired reference. To solve, we require a model of the drone motion with the controller to calculate the time it takes to reach a certain point given the initial condition of the drone. Then, in the searching Algorithm, for each time step ahead of the ball, the time to target (TT) for the drone will be calculated (see Fig.3). The target position is simply calculated based on the time ahead. The reference position is then the position that satisfies the equation t0=TT. Hence, the reference position would be [x(t+t0), y(t+t0)], instead of [x(t), y(t)]. It should be noted that, this approach wouldn’t be much effective in a case that the drone and object are close to each other. Furthermore, for the ground agents, that moves only in one direction, the same strategy could be applied. For the ground robot, the reference value should be determined only in moving direction of the turtle. Hence, only X component (turtle moving direction) of position and velocity of the interested object must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
<br />
<br />
=== Collision avoidance ===<br />
When Drones are flying above a field, the path planning should create a path for agents in a way that avoid collision between them. This can be done in collision avoidance block that has higher priority compared to optimal path planning that is calculated based on objective of drones (see Fig.4). Collision Avoidance-block is triggered when the drones state meet certain criteria that indicate imminent collision between them. Supervisory control then switch to the collision avoidance mode to repel the drones from getting closer. This is fulfilled by sending a relatively strong command to drones in a direction that maintain safe distance. Command as a velocity must be perpendicular to velocity vector of each drone. This is being sent to the LLC as a velocity command in the direction that results in collision avoidance and will be stopped after the drones are in safe positions. In this project, since we are dealing with only one drone, implementation of collision avoidance will not be conducted. However, it could be a possible area of interest to other to continue with this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br><br />
<math>v_{new}= \alpha_{v}v_{old}+\alpha_{x}\frac{z_{new}-x_{old}}{dt}+\alpha_{z}\alpha_{x}\frac{z_{new}-z_{old}}{dt}</math><br><br><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurement are both used by the same particle filter, as it does not matter from what source the measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In its current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
<br />
== Particle filter ==<br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
== Drone ==<br />
In the autonomous referee project, commercially available AR Parrot Drone Elite Edition 2.0 is used for the refereeing issues. The built-in properties of the drone that given in the manufacturer’s website are listed below in Table 1. Note that only the useful properties are covered, the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product and it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and send high quality HD streaming videos to the mobile phone. The drone has a front camera whose capabilities are given in Table 1. It has its own built-in computer, controller, driver electronics etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, the drone own structure, control electronics and software are decided to use for positioning of the drone. Apart from that, the controlling of a drone is complicated and is also out of scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera and this camera is used to catch images. The camera is placed in front of the drone. However for the refereeing, it should look to the bottom side. Therefore it will be disassembled and will be connected to a swivel to tilt down 90 degrees. This will create some change in the structures. When this change is finished, it will be added here.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial and non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to reach some data on the drone including the images of the camera. The image processing will be achieved in MATLAB. However, taking snapshots from the drone camera directly using MATLAB is not possible with its built-in software. Therefore an indirect way is required and this causes some processing time. The best time obtained with the current capturing algorithm is 0.4 Hz using 360p standard resolution (640x360). Although the camera can capture images with higher resolution, processing will be achieved using this resolution to decrease the required processing time.<br />
<br />
====FOV Measurement ====<br />
One of the most important properties of a vision system is the field of view (FOV) angle. The definition of the field of view angle can be seen in the figure. The captured images has a ratio of 16:9. Using this fact and after some measurements the achieved measurements showed that it is near to 70° view although given that the camera has 92° diagonal FOV. The achieved measurements and obtained results are summarized in Table 2 . Here corresponding distance per pixel is calculated in standard resolution (640x360).<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <litte-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block with expects a UDP packet containing a string as input and which gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that the both the input and output are doubles. To be more precise, the input is a vector of four values between -1 and 1 where the first two represent the tilt in front (x) and left (y) direction respectively. The third value is the speed in vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images with a framerate of 30 Hz to the laptop, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone itself is far from perfect and not critical as well. As long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball ==<br />
<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was to be used as a referee. All the software that has been developed at TechUnited did not need any further expansion as some part of the extensive code could be used to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. In the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. Left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they are colliding. Since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The python script can receive strings via UDP over Wi-Fi. Furthermore, it processes these strings and sends commands to the Arduino via USB. To control the drone with a Windows device, MATLAB functions are implemented. Moreover, an Android application is developed to be able to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turle allow the robot to take images of the on-going game. With image processing algorithms useful information from the game can be extracted and a mapping of the game-state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. This location is with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the turtle) and maintained (updated regularly) in a real-time data-base (RTDb) which is called the WorldMap. The details on this can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a Robocup match, the participating robots, maintain this data-base locally. Therefore, the Turtle which is used for the referee system, has a locally stored global map of the environment. This information was needed to be extracted from the Turtle and fused with the other algorithms and software that was developed for the drone. These algorithms and software were created on MATLAB and Simulink while the TechUnited software is written in C and uses Ubuntu as the operating system. The player-robots from TechUnited, communicate with each other via the UDP communication protocol and this is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited which is used to communicate (i.e. send and receive date) with the Turtle. Of all the data that is received from the Turtle, only a part of it was handpicked as it suited the needs of the project the best. This data, as stated earlier is information on the location of the turtle, the ball and the players.<br> <br />
A small piece of code from the code-base of TechUnited was taken out. This piece consisted of functions which extracted the necessary information from the outputs generated by image processing running on the Turtle by listening to this information through the S-funcation [[sf_test_rMS_wMM.c]]''Italic text'' created in MATLAB’s environment and sent to the main computer (running on Windows) via the UDP Send and UDP Receive block in ''Simunlink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=File:Storage_table_2.png&diff=39691File:Storage table 2.png2017-05-04T20:08:51Z<p>Asinha: </p>
<hr />
<div></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=39690Implementation MSD162017-05-04T20:08:43Z<p>Asinha: /* Storage */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Bound (BOOP)<br />
* Detect Collision<br />
<br />
The skill that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image-processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we will not alter it and use it as is. Preferably we could use this software to also process the images from the drone. However, trying to understand years worth of code in order to make it useable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use Matlab's image processing toolbox to process the drone images. The images comming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first transpose the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Detect lines ===<br />
<br />
=== Detect balls ===<br />
<br />
=== Detect objects ===<br />
<br />
== Refereeing ==<br />
<br />
=== B.O.O.P. ===<br />
<br />
=== Collision detection ===<br />
<br />
<br />
== Locating skills ==<br />
<br />
=== Locate agents ===<br />
<br />
=== Locate objects ===<br />
<br />
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating optimal path for agents to send it as a desired position for their controllers. In the system Architecture, the coordinator block decides about the skill that needs to be performed by agents. For instance, this blocks sends detect ball for agent A (drone) and locate player for agent B as a task. Then, path-planning block requests from the World-Model latest information about the target object position and velocity as well as the position and velocity of agents. Using information, the Path-Planning block will then generate reference point for an agent controller. As it is shown in Fig.1, it is assumed that the world model is able to provide position and velocity of objects like ball whether it has been updated by agent camera or not. In the latter case, the particle filter gives an estimation of ball position and velocity based on dynamics of the ball. Therefore, it is assumed that estimated information about an object which is assigned by coordinator is available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
There are two factors that have been addressed in the Path-Planning block. The first one is related to the case of multiple drone in order to avoid collision between them. Second, generating an optimal path as reference input for drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As we discussed earlier, the coordinator assigned a task to an agent for locating an object in a field. Subsequently, the world model would provide the path planner with the latest update of position and velocity of that object. Path-Planning block simply could us the position of the ball and send it to agent controller as a reference input. This could be a good decision when the agent and the object are relatively closed to each other. However, it is possible to take into account the velocity vector of the object in an efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As it is shown in Fig.2, in a case of far distance between drone and ball, the drone should track the position ahead of the object to meet it in the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curve trajectory (red line). However, if the estimated position of the ball in time ahead sent as a reference, the trajectory would be in a less curvy shape with less distance (blue line). This approach would result in better performance of tracking system, but more computational effort is needed. The problem that will be arises is the optimal time ahead t0 that should be set as a desired reference. To solve, we require a model of the drone motion with the controller to calculate the time it takes to reach a certain point given the initial condition of the drone. Then, in the searching Algorithm, for each time step ahead of the ball, the time to target (TT) for the drone will be calculated (see Fig.3). The target position is simply calculated based on the time ahead. The reference position is then the position that satisfies the equation t0=TT. Hence, the reference position would be [x(t+t0), y(t+t0)], instead of [x(t), y(t)]. It should be noted that, this approach wouldn’t be much effective in a case that the drone and object are close to each other. Furthermore, for the ground agents, that moves only in one direction, the same strategy could be applied. For the ground robot, the reference value should be determined only in moving direction of the turtle. Hence, only X component (turtle moving direction) of position and velocity of the interested object must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
<br />
<br />
=== Collision avoidance ===<br />
When Drones are flying above a field, the path planning should create a path for agents in a way that avoid collision between them. This can be done in collision avoidance block that has higher priority compared to optimal path planning that is calculated based on objective of drones (see Fig.4). Collision Avoidance-block is triggered when the drones state meet certain criteria that indicate imminent collision between them. Supervisory control then switch to the collision avoidance mode to repel the drones from getting closer. This is fulfilled by sending a relatively strong command to drones in a direction that maintain safe distance. Command as a velocity must be perpendicular to velocity vector of each drone. This is being sent to the LLC as a velocity command in the direction that results in collision avoidance and will be stopped after the drones are in safe positions. In this project, since we are dealing with only one drone, implementation of collision avoidance will not be conducted. However, it could be a possible area of interest to other to continue with this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_2.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br />
<math>v_{new}= \alpha_{v}v_{old}+\alpha_{x}\frac{z_{new}-x_{old}}{dt}+\alpha_{z}\alpha_{x}\frac{z_{new}-z_{old}}{dt}</math><br><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurement are both used by the same particle filter, as it does not matter from what source the measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In its current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
<br />
== Particle filter ==<br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
== Drone ==<br />
In the autonomous referee project, commercially available AR Parrot Drone Elite Edition 2.0 is used for the refereeing issues. The built-in properties of the drone that given in the manufacturer’s website are listed below in Table 1. Note that only the useful properties are covered, the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product and it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and send high quality HD streaming videos to the mobile phone. The drone has a front camera whose capabilities are given in Table 1. It has its own built-in computer, controller, driver electronics etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, the drone own structure, control electronics and software are decided to use for positioning of the drone. Apart from that, the controlling of a drone is complicated and is also out of scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera and this camera is used to catch images. The camera is placed in front of the drone. However for the refereeing, it should look to the bottom side. Therefore it will be disassembled and will be connected to a swivel to tilt down 90 degrees. This will create some change in the structures. When this change is finished, it will be added here.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial and non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to reach some data on the drone including the images of the camera. The image processing will be achieved in MATLAB. However, taking snapshots from the drone camera directly using MATLAB is not possible with its built-in software. Therefore an indirect way is required and this causes some processing time. The best time obtained with the current capturing algorithm is 0.4 Hz using 360p standard resolution (640x360). Although the camera can capture images with higher resolution, processing will be achieved using this resolution to decrease the required processing time.<br />
<br />
====FOV Measurement ====<br />
One of the most important properties of a vision system is the field of view (FOV) angle. The definition of the field of view angle can be seen in the figure below. The captured images have an aspect ratio of 16:9. Using this fact, the performed measurements showed a FOV close to 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and the results obtained from them are summarized in Table 2, where the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
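As a sanity check on these numbers, the relation between the FOV angle, the flight height and the ground area covered by one image can be computed directly. The snippet below is an illustrative calculation only and is not part of the project code; the 70° value is taken as the measured horizontal FOV and the 2 m flight height is an arbitrary example.<br />
<pre>
% Illustrative FOV / resolution calculation (example values, not project code)
fov_h_deg = 70;        % assumed measured horizontal FOV [deg]
h         = 2.0;       % example flight height above the field [m]
res_h     = 640;       % horizontal resolution at 360p [pixels]
res_v     = 360;       % vertical resolution at 360p [pixels]

width    = 2 * h * tand(fov_h_deg / 2);   % ground width covered by one image [m]
height   = width * res_v / res_h;         % ground height, assuming square pixels [m]
m_per_px = width / res_h;                 % metres per pixel in the horizontal direction

fprintf('Footprint: %.2f x %.2f m, %.1f mm per pixel\n', width, height, 1e3 * m_per_px);
</pre>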
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties of the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
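A minimal sketch of this initialization in MATLAB (Instrument Control Toolbox) is given below. It assumes that the remote ports equal the local ports listed above and that the flat-trim command follows the AT*FTRIM syntax of the SDK; these are assumptions for illustration, not a verbatim copy of the project code.<br />
<pre>
% Illustrative drone initialization sketch (remote ports and command syntax are assumptions)
droneIP = '192.168.1.1';

% UDP object for the AT (control) commands
ctrl = udp(droneIP, 5556, 'LocalPort', 5556);

% UDP object for the Navdata stream
nav = udp(droneIP, 5554, ...
          'LocalPort',       5554, ...
          'Timeout',         0.001, ...   % 1 ms
          'InputBufferSize', 500, ...     % bytes
          'ByteOrder',       'littleEndian');

fopen(ctrl);
fopen(nav);

% Set the horizontal-plane reference (flat trim); seq is the AT command
% sequence number, which must increase with every command that is sent.
seq = 1;
fprintf(ctrl, sprintf('AT*FTRIM=%d\r', seq));
</pre>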
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be viewed as a block which expects a UDP packet containing a string as input and which returns an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively, the third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis; a sketch of this command side is given after the list below. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
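The sketch below illustrates the command side of such a wrapper. It uses the AT*PCMD command of the SDK, in which floating-point arguments are transmitted as the 32-bit integer that shares their bit pattern; the function name and the exact mapping of the four inputs are illustrative assumptions rather than the actual project code.<br />
<pre>
function sendPCMD(ctrl, seq, cmd)
% SENDPCMD  Illustrative wrapper for the progressive command (sketch only).
%   ctrl : open UDP object for the AT command port
%   seq  : AT command sequence number (must increase with every command)
%   cmd  : [tilt_x tilt_y v_z omega_psi], each between -1 and 1

cmd = max(min(cmd, 1), -1);             % saturate to the allowed range

% Floating-point arguments are sent as the int32 with the same bit pattern
f2i = @(f) typecast(single(f), 'int32');

% Assumed argument order: left/right tilt, front/back tilt, vertical speed, yaw rate
msg = sprintf('AT*PCMD=%d,1,%d,%d,%d,%d\r', seq, ...
              f2i(cmd(2)), f2i(cmd(1)), f2i(cmd(3)), f2i(cmd(4)));
fprintf(ctrl, msg);
end
</pre>
Parsing the 500-byte Navdata packet into the battery, attitude, altitude and velocity values listed above works analogously, by reading the relevant fields with typecast, but is omitted here.<br />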
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. the image processing) may be slower. This is not a problem, since the positioning of the drone is neither very accurate nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, the result is acceptable.<br />
<br />
== Ai-Ball ==<br />
<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it is used as a referee. The software that has been developed at TechUnited did not need any further expansion, since part of the extensive existing code could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to drive three omni-wheels independently. To the left of this robot, a copy fitted with a cover is shown. The cover prevents the robots from being damaged when they collide; since one of the goals of the project is to detect collisions, the robots must be able to collide more than once without breaking.<br />
<br />
To control the robot, Arduino code and a Python script that runs on the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, processes them and sends the corresponding commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented, and an Android application was developed so that the robot can also be controlled with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
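To illustrate the MATLAB side of this chain, the snippet below sends one command string to the robot over UDP. The IP address, port and message format are purely hypothetical placeholders; the actual protocol is defined by the scripts in the GitHub repository.<br />
<pre>
% Illustrative only: the IP address, port and message format are hypothetical
robot = udp('192.168.1.20', 8888);      % assumed Raspberry Pi address and listening port
fopen(robot);

% Example command string with three wheel velocities (format assumed for illustration)
fprintf(robot, 'v1:0.30;v2:-0.15;v3:0.00');

fclose(robot);
delete(robot);
</pre>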
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image-processing algorithms, useful information can be extracted from the game and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are expressed with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) which is called the WorldMap. The details on this can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle that is used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software that were developed for the drone. Those algorithms and software were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as the operating system. The player robots from TechUnited communicate with each other via the UDP communication protocol, which is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data that is received from the Turtle, only the part that best suited the needs of the project was used. This data, as stated earlier, is the information on the location of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken from the code-base of TechUnited. This piece consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle; this information is read through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB’s environment and is sent to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=File:Storage_table_1.png&diff=39689File:Storage table 1.png2017-05-04T20:08:25Z<p>Asinha: </p>
<hr />
<div></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=39688Implementation MSD162017-05-04T20:08:17Z<p>Asinha: /* World Model */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Bound (BOOP)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image processing. The TURTLE already has good image-processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we will not alter it and use it as is. Preferably we would use this software to also process the images from the drone. However, trying to understand years' worth of code in order to make it usable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's Image Processing Toolbox to process the drone images. The images coming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
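As a small illustration of this step, the fragment below converts an RGB frame to YCbCr with MATLAB's Image Processing Toolbox and thresholds the chroma channels to obtain a rough mask for an orange ball; the file name and the threshold values are example placeholders, not the tuned values used in the project.<br />
<pre>
% Illustrative colour-space conversion and ball mask (example file name and thresholds)
rgb   = imread('aiball_frame.png');     % one frame from the AI-ball (example file name)
ycbcr = rgb2ycbcr(rgb);

Cb = ycbcr(:, :, 2);
Cr = ycbcr(:, :, 3);

% An orange ball gives a high Cr and a low Cb value; the thresholds are examples only
ballMask = (Cr > 150) & (Cb < 110);
ballMask = bwareaopen(ballMask, 30);    % remove small noise blobs

imshow(ballMask);
</pre>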
<br />
=== Detect lines ===<br />
<br />
=== Detect balls ===<br />
<br />
=== Detect objects ===<br />
<br />
== Refereeing ==<br />
<br />
=== B.O.O.P. ===<br />
<br />
=== Collision detection ===<br />
<br />
<br />
== Locating skills ==<br />
<br />
=== Locate agents ===<br />
<br />
=== Locate objects ===<br />
<br />
<br />
== Path planning ==<br />
<br />
The path-planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent. For instance, this block sends 'detect ball' to agent A (drone) and 'locate player' to agent B as tasks. The path-planning block then requests from the World Model the latest information about the position and velocity of the target object, as well as the position and velocity of the agents. Using this information, the path-planning block generates a reference point for the agent controller. As shown in Fig.1, it is assumed that the World Model is able to provide the position and velocity of objects such as the ball, whether or not they have been updated by an agent camera. In the latter case, the particle filter gives an estimate of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about the object assigned by the coordinator is always available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
There are two aspects that have been addressed in the path-planning block. The first one concerns the case of multiple drones, where collisions between them must be avoided. The second one is generating an optimal path as reference input for the drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent for locating an object in the field. Subsequently, the World Model provides the path planner with the latest update of the position and velocity of that object. The path-planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a reasonable choice when the agent and the object are relatively close to each other. However, it is also possible to take the velocity vector of the object into account in an efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the distance between the drone and the ball is large, the drone should track a position ahead of the object so that they meet at the intersection of their velocity vectors. Using the current ball position as the reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as the reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but requires more computational effort. The question that arises is which time ahead t0 should be set as the desired reference. To solve this, we require a model of the drone motion including its controller, to calculate the time it takes to reach a certain point given the initial condition of the drone. Then, in the search algorithm, the time to target (TT) for the drone is calculated for each candidate time step ahead of the ball (see Fig.3). The target position is simply extrapolated over the time ahead, and the reference position is the one that satisfies t0 = TT. Hence, the reference position becomes [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. Furthermore, the same strategy can be applied to the ground agents, which move in only one direction; for the ground robot, the reference value should be determined only along the moving direction of the Turtle, so only the X component (Turtle moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
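A minimal version of this search is sketched below. It assumes a constant-velocity prediction of the ball and replaces the drone model by a single average speed, so that the time to target is simply the distance divided by that speed; in the real implementation the TT should come from the identified drone model including its controller.<br />
<pre>
function [t0, ref] = timeAheadReference(p_ball, v_ball, p_drone, v_avg)
% TIMEAHEADREFERENCE  Illustrative search for the look-ahead time t0.
%   p_ball, v_ball : current ball position and velocity, [x y]
%   p_drone        : current drone position, [x y]
%   v_avg          : assumed average drone speed [m/s] (placeholder for the drone model)

best = inf;
for t = 0:0.05:5                            % candidate look-ahead times [s]
    target = p_ball + v_ball * t;           % constant-velocity ball prediction
    TT = norm(target - p_drone) / v_avg;    % time to target from the simple drone model
    if abs(TT - t) < best                   % keep the t that best satisfies t0 = TT
        best = abs(TT - t);
        t0   = t;
        ref  = target;
    end
end
end
</pre>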
<br />
<br />
=== Collision avoidance ===<br />
When multiple drones are flying above the field, the path planning should create paths for the agents in such a way that collisions between them are avoided. This is done in the collision-avoidance block, which has a higher priority than the optimal path planning that is calculated based on the objectives of the drones (see Fig.4). The collision-avoidance block is triggered when the drone states meet certain criteria that indicate an imminent collision between them. The supervisory control then switches to the collision-avoidance mode to keep the drones from getting closer. This is achieved by sending a relatively strong velocity command to the drones in a direction that maintains a safe distance; this command must be perpendicular to the velocity vector of each drone. It is sent to the LLC as a velocity command in the direction that results in collision avoidance and is stopped once the drones are at safe positions. In this project, since we are dealing with only one drone, collision avoidance will not be implemented. However, it could be an interesting area for others who continue with this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
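The intended trigger and repulsion command can be summarized by the sketch below: it checks the distance between two drones and, when it drops below a safety threshold, returns velocity commands perpendicular to each drone's own velocity and pointing away from the other drone. The threshold, the repulsion speed and the exact direction choice are assumptions made for illustration.<br />
<pre>
function [avoid, v_cmd1, v_cmd2] = collisionAvoidance(p1, v1, p2, v2)
% COLLISIONAVOIDANCE  Illustrative sketch of the repulsion command.
%   p1, v1, p2, v2 : positions and velocities of the two drones, [x y]

d_safe = 1.0;                       % assumed safety distance [m]
v_rep  = 0.5;                       % assumed repulsion speed [m/s]

avoid  = norm(p1 - p2) < d_safe;    % trigger criterion (distance only, for illustration)
v_cmd1 = [0 0];
v_cmd2 = [0 0];

if avoid
    v_cmd1 = v_rep * perpAway(v1, p1 - p2);
    v_cmd2 = v_rep * perpAway(v2, p2 - p1);
end
end

function n = perpAway(v, away)
% Unit vector perpendicular to v with a positive component along 'away'
n = [-v(2) v(1)];
if norm(n) < 1e-6, n = away; end    % hovering drone: push straight away instead
if dot(n, away) < 0, n = -n; end
n = n / norm(n);
end
</pre>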
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Set functions of the World Model ‘W’]]</center><br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<center>[[File:storage_table_1.png|thumb|center|750px|Commands to request data from World Model ‘W’]]</center><br />
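To make the structure of this class concrete, a stripped-down sketch is given below. The property and function names are chosen for illustration and do not necessarily match the names in Tables 1 and 2; furthermore, the players are represented here by a simple struct array, whereas in the actual design they form a class of their own, and the total number of tracked players is assumed to be 2n (two teams of n players).<br />
<pre>
classdef WorldModel < handle
    % Illustrative sketch of the World Model storage class (names are examples)
    properties (SetAccess = private)
        ball   = struct('pos', [0 0], 'vel', [0 0]);
        drone  = struct('pos', [0 0 0]);
        turtle = struct('pos', [0 0]);
        players                         % player states, here a plain struct array
    end
    methods
        function obj = WorldModel(n)
            % n is the number of players per team; 2*n players in total (assumed)
            obj.players = repmat(struct('pos', [0 0], 'vel', [0 0]), 2*n, 1);
        end
        function setBall(obj, pos, vel)
            obj.ball.pos = pos;  obj.ball.vel = vel;
        end
        function setPlayer(obj, i, pos, vel)
            obj.players(i).pos = pos;  obj.players(i).vel = vel;
        end
        function b = getBall(obj),      b = obj.ball;       end
        function p = getPlayer(obj, i), p = obj.players(i); end
    end
end
</pre>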
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br />
<math>v_{new}= \alpha_{v}v_{old}+\alpha_{x}\frac{z_{new}-x_{old}}{dt}+\alpha_{z}\alpha_{x}\frac{z_{new}-z_{old}}{dt}</math><br><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, x_old the previous position (x,y) and dt the time since the previous measurement.<br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
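In code, this update amounts to only a few lines. The fragment below evaluates the formula above with arbitrary example values; the alpha parameters are placeholders and not the tuned values from the parameter table.<br />
<pre>
% Illustrative particle velocity update (all numerical values are example placeholders)
a_v = 0.6;  a_x = 0.3;  a_z = 0.5;          % alpha_v, alpha_x, alpha_z

v_old = [1.0  0.2];                         % previous particle velocity [m/s]
x_old = [2.0  1.0];                         % previous particle position [m]
z_old = [2.1  1.0];                         % previous measurement [m]
z_new = [2.4  1.1];                         % new measurement [m]
dt    = 0.25;                               % time since the previous measurement [s]

v_new = a_v * v_old ...
      + a_x * (z_new - x_old) / dt ...
      + a_z * a_x * (z_new - z_old) / dt;
</pre>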
<br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the Turtle. These measurements are all used by the same particle filter, as it does not matter from which source a measurement comes. Ideally, these sensors would pass along a confidence parameter, such as a variance in the case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In its current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track the players even when they are not in the current field of view, as well as to deal with multiple sensors, a particle filter is again used. This particle filter is similar to the one for the ball position, with the distinction that it needs to deal with the case that the sensor(s) detect multiple players. Thus, the system needs to know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
<br />
== Particle filter ==<br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The built-in properties of the drone, as given on the manufacturer’s website, are listed below in Table 1. Note that only the properties relevant to this project are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled from a mobile phone with its free app (available for both Android and iOS) and it streams high-quality HD video to the phone. The drone has a front camera whose capabilities are given in Table 1, and it contains its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, the drone's own structure, control electronics and software are used for positioning the drone. Apart from that, writing a custom controller for a drone is complicated and out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is mounted at the front of the drone; however, for refereeing it should look downwards. Therefore it will be disassembled and mounted on a swivel so that it can be tilted down by 90 degrees. This requires some changes to the structure, which will be documented here once they are finished.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB; however, taking snapshots from the drone camera directly in MATLAB is not possible with the drone's built-in software. An indirect route is therefore required, which adds processing time. The best capture rate obtained with the current capturing algorithm is 0.4 Hz (one frame every 2.5 s) at the standard 360p resolution (640x360). Although the camera can capture images at a higher resolution, this resolution is used to limit the required processing time.<br />
<br />
====FOV Measurement ====<br />
One of the most important properties of a vision system is the field of view (FOV) angle. The definition of the field of view angle can be seen in the figure below. The captured images have an aspect ratio of 16:9. Using this fact, the performed measurements showed a FOV close to 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and the results obtained from them are summarized in Table 2, where the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties of the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be viewed as a block which expects a UDP packet containing a string as input and which returns an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively, the third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. the image processing) may be slower. This is not a problem, since the positioning of the drone is neither very accurate nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, the result is acceptable.<br />
<br />
== Ai-Ball ==<br />
<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it is used as a referee. The software that has been developed at TechUnited did not need any further expansion, since part of the extensive existing code could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to drive three omni-wheels independently. To the left of this robot, a copy fitted with a cover is shown. The cover prevents the robots from being damaged when they collide; since one of the goals of the project is to detect collisions, the robots must be able to collide more than once without breaking.<br />
<br />
To control the robot, Arduino code and a Python script that runs on the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, processes them and sends the corresponding commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented, and an Android application was developed so that the robot can also be controlled with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image-processing algorithms, useful information can be extracted from the game and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are expressed with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) which is called the WorldMap. The details on this can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle that is used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software that were developed for the drone. Those algorithms and software were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as the operating system. The player robots from TechUnited communicate with each other via the UDP communication protocol, which is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data that is received from the Turtle, only the part that best suited the needs of the project was used. This data, as stated earlier, is the information on the location of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken from the code-base of TechUnited. This piece consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle; this information is read through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB’s environment and is sent to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The s-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink and is depicted as follows. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=39687Implementation MSD162017-05-04T20:07:12Z<p>Asinha: /* Storage */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Bound (BOOP)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision, the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image processing. The TURTLE already has good image-processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we will not alter it and use it as is. Preferably we would use this software to also process the images from the drone. However, trying to understand years' worth of code in order to make it usable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's Image Processing Toolbox to process the drone images. The images coming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Detect lines ===<br />
<br />
=== Detect balls ===<br />
<br />
=== Detect objects ===<br />
<br />
== Refereeing ==<br />
<br />
=== B.O.O.P. ===<br />
<br />
=== Collision detection ===<br />
<br />
<br />
== Locating skills ==<br />
<br />
=== Locate agents ===<br />
<br />
=== Locate objects ===<br />
<br />
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating optimal path for agents to send it as a desired position for their controllers. In the system Architecture, the coordinator block decides about the skill that needs to be performed by agents. For instance, this blocks sends detect ball for agent A (drone) and locate player for agent B as a task. Then, path-planning block requests from the World-Model latest information about the target object position and velocity as well as the position and velocity of agents. Using information, the Path-Planning block will then generate reference point for an agent controller. As it is shown in Fig.1, it is assumed that the world model is able to provide position and velocity of objects like ball whether it has been updated by agent camera or not. In the latter case, the particle filter gives an estimation of ball position and velocity based on dynamics of the ball. Therefore, it is assumed that estimated information about an object which is assigned by coordinator is available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
There are two factors that have been addressed in the Path-Planning block. The first one is related to the case of multiple drone in order to avoid collision between them. Second, generating an optimal path as reference input for drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent for locating an object in the field. Subsequently, the World Model provides the path planner with the latest update of the position and velocity of that object. The path-planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a reasonable choice when the agent and the object are relatively close to each other. However, it is also possible to take the velocity vector of the object into account in an efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As it is shown in Fig.2, in a case of far distance between drone and ball, the drone should track the position ahead of the object to meet it in the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curve trajectory (red line). However, if the estimated position of the ball in time ahead sent as a reference, the trajectory would be in a less curvy shape with less distance (blue line). This approach would result in better performance of tracking system, but more computational effort is needed. The problem that will be arises is the optimal time ahead t0 that should be set as a desired reference. To solve, we require a model of the drone motion with the controller to calculate the time it takes to reach a certain point given the initial condition of the drone. Then, in the searching Algorithm, for each time step ahead of the ball, the time to target (TT) for the drone will be calculated (see Fig.3). The target position is simply calculated based on the time ahead. The reference position is then the position that satisfies the equation t0=TT. Hence, the reference position would be [x(t+t0), y(t+t0)], instead of [x(t), y(t)]. It should be noted that, this approach wouldn’t be much effective in a case that the drone and object are close to each other. Furthermore, for the ground agents, that moves only in one direction, the same strategy could be applied. For the ground robot, the reference value should be determined only in moving direction of the turtle. Hence, only X component (turtle moving direction) of position and velocity of the interested object must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
<br />
<br />
=== Collision avoidance ===<br />
When Drones are flying above a field, the path planning should create a path for agents in a way that avoid collision between them. This can be done in collision avoidance block that has higher priority compared to optimal path planning that is calculated based on objective of drones (see Fig.4). Collision Avoidance-block is triggered when the drones state meet certain criteria that indicate imminent collision between them. Supervisory control then switch to the collision avoidance mode to repel the drones from getting closer. This is fulfilled by sending a relatively strong command to drones in a direction that maintain safe distance. Command as a velocity must be perpendicular to velocity vector of each drone. This is being sent to the LLC as a velocity command in the direction that results in collision avoidance and will be stopped after the drones are in safe positions. In this project, since we are dealing with only one drone, implementation of collision avoidance will not be conducted. However, it could be a possible area of interest to other to continue with this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
<br />
Note that the WM is called ‘W’ here, i.e. initialized as W = WorldModel(n), where n represents the number of players per team. Since this number can vary (while the number of balls is hardcoded to 1), the players are a class of their own, while ball, drone and turtle are simply properties of class ‘WorldModel’. Accessing the player data is therefore slightly different from accessing the other data, as Table 2 shows.<br />
<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br />
<math>v_{new}= \alpha_{v}v_{old}+\alpha_{x}\frac{z_{new}-x_{old}}{dt}+\alpha_{z}\alpha_{x}\frac{z_{new}-z_{old}}{dt}</math><br><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurement are both used by the same particle filter, as it does not matter from what source the measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In its current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track the players even when they are not in the current field of view, as well as to deal with multiple sensors, a particle filter is again used. This particle filter is similar to the one for the ball position, with the distinction that it needs to deal with the case that the sensor(s) detect multiple players. Thus, the system needs to know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
<br />
== Particle filter ==<br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The built-in properties of the drone, as given on the manufacturer’s website, are listed below in Table 1. Note that only the properties relevant to this project are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled from a mobile phone with its free app (available for both Android and iOS) and it streams high-quality HD video to the phone. The drone has a front camera whose capabilities are given in Table 1, and it contains its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, the drone's own structure, control electronics and software are used for positioning the drone. Apart from that, writing a custom controller for a drone is complicated and out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is mounted at the front of the drone; however, for refereeing it should look downwards. Therefore it will be disassembled and mounted on a swivel so that it can be tilted down by 90 degrees. This requires some changes to the structure, which will be documented here once they are finished.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB; however, taking snapshots from the drone camera directly in MATLAB is not possible with the drone's built-in software. An indirect route is therefore required, which adds processing time. The best capture rate obtained with the current capturing algorithm is 0.4 Hz (one frame every 2.5 s) at the standard 360p resolution (640x360). Although the camera can capture images at a higher resolution, this resolution is used to limit the required processing time.<br />
<br />
====FOV Measurement ====<br />
One of the most important properties of a vision system is the field of view (FOV) angle. The definition of the field of view angle can be seen in the figure below. The captured images have an aspect ratio of 16:9. Using this fact, the performed measurements showed a FOV close to 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and the results obtained from them are summarized in Table 2, where the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be viewed as a block which expects a UDP packet containing a string as input and which returns an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively, the third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images with a framerate of 30 Hz to the laptop, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone itself is far from perfect and not critical as well. As long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball ==<br />
<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it is used as a referee. The software developed at TechUnited did not need any further expansion, as part of the extensive code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to drive three omni-wheels independently. To the left of this robot, a copy fitted with a cover is shown. The cover prevents the robots from being damaged when they collide; since one of the goals of the project is to detect collisions, the robots must be able to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script that runs on the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, processes them and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented (a sketch is given below), and an Android application was developed so the robot can also be controlled with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
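A minimal sketch of such a MATLAB function is shown below; the IP address, port and command-string format are assumptions and must match what the Python script on the Raspberry Pi expects (see the GitHub repository).<br />
<pre>
robot = udp('192.168.1.20', 5005);     % Raspberry Pi address and listening port (examples)
fopen(robot);

vx = 0.3;  vy = 0.0;  w = 0.5;         % desired body velocities [m/s, m/s, rad/s]
fwrite(robot, sprintf('%.2f,%.2f,%.2f', vx, vy, w));   % command string parsed by the Python script

fclose(robot);
delete(robot);
</pre>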
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image-processing algorithms, useful information can be extracted from the game and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are expressed with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. The details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. These algorithms were created in MATLAB and Simulink, while the TechUnited software is written in C and runs on Ubuntu. The player robots from TechUnited communicate with each other via the UDP protocol, which is handled by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (development PC) from TechUnited, which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was used. As stated earlier, this data consists of the locations of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken from the TechUnited code base. It consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle; this information is read through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB's environment and sent to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in Simulink.<br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink. The code can be accessed through the repository; a sketch of the receiving side on the Windows PC is given below.<br />
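The sketch below shows one way to read this data in MATLAB on the Windows PC. The packet layout (a fixed-order vector of doubles), the port number and the addresses are assumptions; the actual layout is defined by the UDP Send block running on the Ubuntu PC.<br />
<pre>
rx = udp('192.168.1.30', 'LocalPort', 25000, 'InputBufferSize', 64);   % Ubuntu PC address (example)
fopen(rx);

raw = fread(rx, 8, 'double');       % e.g. [x_turtle y_turtle x_ball y_ball x_p1 y_p1 x_p2 y_p2]
turtle  = raw(1:2);                 % Turtle position [m]
ball    = raw(3:4);                 % ball position [m]
players = reshape(raw(5:8), 2, 2)'; % one row per player [m]

fclose(rx);
</pre>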
<br />
=References=<br />
<references/></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=39686Implementation MSD162017-05-04T20:06:17Z<p>Asinha: /* World Model */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Bound (BOOP)<br />
* Detect Collision<br />
<br />
The skill that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image-processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we will not alter it and use it as is. Preferably we could use this software to also process the images from the drone. However, trying to understand years worth of code in order to make it useable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use Matlab's image processing toolbox to process the drone images. The images comming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first transpose the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Detect lines ===<br />
<br />
=== Detect balls ===<br />
<br />
=== Detect objects ===<br />
<br />
== Refereeing ==<br />
<br />
=== B.O.O.P. ===<br />
<br />
=== Collision detection ===<br />
<br />
<br />
== Locating skills ==<br />
<br />
=== Locate agents ===<br />
<br />
=== Locate objects ===<br />
<br />
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating optimal path for agents to send it as a desired position for their controllers. In the system Architecture, the coordinator block decides about the skill that needs to be performed by agents. For instance, this blocks sends detect ball for agent A (drone) and locate player for agent B as a task. Then, path-planning block requests from the World-Model latest information about the target object position and velocity as well as the position and velocity of agents. Using information, the Path-Planning block will then generate reference point for an agent controller. As it is shown in Fig.1, it is assumed that the world model is able to provide position and velocity of objects like ball whether it has been updated by agent camera or not. In the latter case, the particle filter gives an estimation of ball position and velocity based on dynamics of the ball. Therefore, it is assumed that estimated information about an object which is assigned by coordinator is available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
There are two factors that have been addressed in the Path-Planning block. The first one is related to the case of multiple drone in order to avoid collision between them. Second, generating an optimal path as reference input for drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns a task to an agent to locate an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The Path-Planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a reasonable choice when the agent and the object are relatively close to each other. However, it is also possible to take the velocity vector of the object into account in an efficient way.<br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the drone is far from the ball, the drone should track a position ahead of the object so that it meets the object at the intersection of their velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated ball position some time ahead is sent as the reference, the trajectory becomes less curved and shorter (blue line). This approach improves the performance of the tracking system, but requires more computational effort. The remaining problem is choosing the optimal look-ahead time t0 to use for the desired reference. To solve this, we need a model of the drone motion, including its controller, to calculate the time it takes to reach a certain point given the initial condition of the drone. In the search algorithm, for each candidate look-ahead time the ball's target position and the drone's time to target (TT) are calculated (see Fig.3). The reference position is then the position that satisfies the equation t0 = TT. Hence, the reference position becomes [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. The same strategy can be applied to the ground agents, which move only in one direction: for the ground robot, the reference should be determined only along the moving direction of the Turtle, so only the X component (the Turtle's moving direction) of the position and velocity of the object of interest must be taken into account. A sketch of this look-ahead search is given after Fig.3.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
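The sketch below implements this search. It assumes a constant-velocity ball model and a very simple time-to-target model (distance divided by a maximum drone speed); in the real implementation the identified drone-plus-controller dynamics would be used instead, and the grid of candidate times is an arbitrary choice.<br />
<pre>
function ref = lookahead_reference(p_ball, v_ball, p_drone, v_max)
    % p_ball, v_ball, p_drone : 1x2 vectors [m], [m/s]; v_max : drone speed [m/s] (assumption)
    best_err = inf;
    ref = p_ball;                                  % fall back to the current ball position
    for t0 = 0:0.05:3.0                            % candidate look-ahead times [s]
        target = p_ball + v_ball * t0;             % predicted ball position at t + t0
        TT = norm(target - p_drone) / v_max;       % time the drone needs to reach that point
        if abs(TT - t0) < best_err                 % best match of the condition t0 = TT
            best_err = abs(TT - t0);
            ref = target;
        end
    end
end
</pre>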
<br />
<br />
=== Collision avoidance ===<br />
When multiple drones are flying above the field, the path planning should create paths for the agents that avoid collisions between them. This is done in a collision-avoidance block that has a higher priority than the optimal path planning, which is computed based on the objectives of the drones (see Fig.4). The collision-avoidance block is triggered when the drones' states meet certain criteria that indicate an imminent collision. The supervisory control then switches to the collision-avoidance mode to keep the drones from getting closer. This is achieved by sending a relatively strong command to the drones in a direction that maintains a safe distance; this command is a velocity perpendicular to the velocity vector of each drone. It is sent to the LLC as a velocity command in the direction that avoids the collision, and is stopped once the drones are in safe positions. Since this project deals with only one drone, collision avoidance is not implemented here; it could, however, be an area of interest for others who continue this project. A sketch of such a trigger is given after Fig.4.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
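Although collision avoidance is not implemented in this project, the trigger described above could be sketched as follows; the safe distance and repel speed are example parameters, and the function names are our own.<br />
<pre>
function [cmd1, cmd2, active] = avoid_collision(p1, v1, p2, v2, d_safe, v_repel)
    % p1, v1, p2, v2 : 1x2 positions and velocities of the two drones
    active = norm(p1 - p2) < d_safe;               % imminent-collision criterion
    cmd1 = [0 0];  cmd2 = [0 0];
    if active
        % Velocity commands perpendicular to each drone's own velocity,
        % flipped so that they point away from the other drone.
        cmd1 = v_repel * perp_away(v1, p2 - p1);
        cmd2 = v_repel * perp_away(v2, p1 - p2);
    end
end

function n = perp_away(v, towards_other)
    n = [-v(2) v(1)] / max(norm(v), eps);          % rotate v by 90 degrees and normalize
    if dot(n, towards_other) > 0                   % pointing towards the other drone: flip
        n = -n;
    end
end
</pre>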
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.png|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act as a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players) to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not yet integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1; this prevents processes from accidentally overwriting WM data. A sketch of such a class is given below.<br />
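The sketch below illustrates the idea of private data behind dedicated ‘set’ functions; the property and method names are our own and do not reflect the (not yet integrated) class exactly.<br />
<pre>
classdef WorldModel < handle
    properties (SetAccess = private)
        ball    = [0 0];        % last known ball position [m]
        drone   = [0 0];        % last known drone position [m]
        turtle  = [0 0];        % last known Turtle position [m]
        players = zeros(2, 2);  % one row per player [m]
    end
    methods
        function setBall(obj, pos)
            obj.ball = pos;                % only skills allowed to update the ball call this
        end
        function setPlayer(obj, idx, pos)
            obj.players(idx, :) = pos;     % analogous setters exist for drone and turtle
        end
    end
end
</pre>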
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, and originate from multiple sources, a filter offers clear advantages. A particle filter, also known as Monte Carlo localization, was chosen. The main reason is that a particle filter can handle multi-object tracking, which proves useful both when this filter is adapted for player detection and for multiple-hypothesis ball tracking, as explained below. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br />
Tasks 2) and 3) in particular are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related, in the sense that if measurement noise is filtered out, the prediction will be more accurate. Together, these two relations imply that tasks 1) and 2) are conflicting as well, so a trade-off has to be made.<br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br />
<math>v_{new}= \alpha_{v}v_{old}+\alpha_{x}\frac{z_{new}-x_{old}}{dt}+\alpha_{z}\alpha_{x}\frac{z_{new}-z_{old}}{dt}</math><br><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, x_old the previous position (x, y) and dt the time since the previous measurement.<br><br />
The tunable parameters for the filter are given in Table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. it trusts the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements). A sketch of this update rule, combined with the hypothesis switch described above, is given below.<br><br />
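In the sketch below, the struct a holds the α parameters of Table 1; treating “two consecutive outliers” with a simple counter, and the internal variable names, are our own simplifications of the implementation described above.<br />
<pre>
function [x, v, n_out] = ball_update(x, v, z_new, z_old, dt, n_out, a)
    % x, v   : strong-hypothesis position and velocity (1x2)
    % z_new, z_old : new and previous measurements, dt : time since previous measurement
    % n_out  : number of consecutive outliers seen so far
    % a      : struct with fields a.v (alpha_v), a.x (alpha_x), a.z (alpha_z)
    if norm(z_new - x) > 0.5           % measurement far from the strong hypothesis (0.5 m)
        n_out = n_out + 1;
        if n_out >= 2                  % two consecutive outliers: accept the new direction
            x = z_new;                 % weak hypothesis becomes the new strong one
            v = (z_new - z_old) / dt;
            n_out = 0;
            return
        end
    else
        n_out = 0;
    end
    v = a.v * v + a.x * (z_new - x) / dt + a.z * a.x * (z_new - z_old) / dt;
    x = x + v * dt;                    % propagate the strong hypothesis
end
</pre>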
<br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurement are both used by the same particle filter, as it does not matter from what source the measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In its current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. To track them even when they are not in the current field of view, and to deal with multiple sensors, again a particle filter is used. This particle filter is similar to the one for the ball position, with the distinction that it has to deal with sensors that can detect multiple players at once. Thus, the system needs to know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function.<br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
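The matching step described above can be sketched as follows; the function and variable names are our own, and with only two players the second-nearest fallback is sufficient, as noted.<br />
<pre>
function idx = match_players(meas, known)
    % meas  : M x 2 measured positions, known : P x 2 last known player positions
    % idx(m) is the index of the known player assigned to measurement m
    M   = size(meas, 1);
    idx = zeros(M, 1);
    for m = 1:M
        d = sqrt(sum((known - repmat(meas(m, :), size(known, 1), 1)).^2, 2));
        [~, order] = sort(d);                 % nearest known players first
        k = order(1);
        if any(idx(1:m-1) == k) && numel(order) > 1
            k = order(2);                     % player already taken: use the second-nearest
        end
        idx(m) = k;
    end
end
</pre>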
<br />
== Kalman filter ==<br />
<br />
== Particle filter ==<br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The built-in properties of the drone, as given on the manufacturer's website, are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free apps (for both Android and iOS), and it streams high-quality HD video to the phone. The drone has a front camera whose capabilities are given in Table 1, and it has its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, the drone's own structure, control electronics and software are used for positioning the drone. Moreover, low-level control of a drone is complicated and out of the scope of this project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone, but for refereeing it should look downwards. Therefore it will be disassembled and mounted on a swivel so it can be tilted down 90 degrees. This requires some structural changes; once this modification is finished, it will be documented here.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB, but taking snapshots from the drone camera directly in MATLAB is not possible with its built-in software. Therefore an indirect route is required, which adds processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at higher resolutions, processing is done at this resolution to reduce the required processing time.<br />
<br />
====FOV Measurement ====<br />
One of the most important properties of a vision system is the field of view (FOV) angle. The definition of the field of view angle is shown in the figure below. The captured images have a 16:9 aspect ratio. Using this fact, our measurements indicate an effective view of roughly 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and the derived results are summarized in Table 2; the corresponding distance per pixel is calculated for the standard resolution (640x360).<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all UDP object properties not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be treated as a block which expects a UDP packet containing a string as input and which gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively, the third is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimate is used as feedback for the drone to position itself at a desired location.<br />
The topcam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. image processing) may be slower. This is not a problem, since the positioning of the drone itself is neither perfect nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, the result is acceptable.<br />
<br />
== Ai-Ball ==<br />
<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it is used as a referee. The software developed at TechUnited did not need any further expansion, as part of the extensive code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to drive three omni-wheels independently. To the left of this robot, a copy fitted with a cover is shown. The cover prevents the robots from being damaged when they collide; since one of the goals of the project is to detect collisions, the robots must be able to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script that runs on the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, processes them and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented, and an Android application was developed so the robot can also be controlled with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image-processing algorithms, useful information can be extracted from the game and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are expressed with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. The details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. These algorithms were created in MATLAB and Simulink, while the TechUnited software is written in C and runs on Ubuntu. The player robots from TechUnited communicate with each other via the UDP protocol, which is handled by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (development PC) from TechUnited, which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was used. As stated earlier, this data consists of the locations of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken from the TechUnited code base. It consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle; this information is read through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB's environment and sent to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in Simulink.<br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink. The code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=File:Worldmodel.png&diff=39685File:Worldmodel.png2017-05-04T20:05:51Z<p>Asinha: </p>
<hr />
<div></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=39684Implementation MSD162017-05-04T20:05:11Z<p>Asinha: /* World Model */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Bound (BOOP)<br />
* Detect Collision<br />
<br />
The skill that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image-processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we will not alter it and use it as is. Preferably we could use this software to also process the images from the drone. However, trying to understand years worth of code in order to make it useable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use Matlab's image processing toolbox to process the drone images. The images comming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first transpose the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Detect lines ===<br />
<br />
=== Detect balls ===<br />
<br />
=== Detect objects ===<br />
<br />
== Refereeing ==<br />
<br />
=== B.O.O.P. ===<br />
<br />
=== Collision detection ===<br />
<br />
<br />
== Locating skills ==<br />
<br />
=== Locate agents ===<br />
<br />
=== Locate objects ===<br />
<br />
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating optimal path for agents to send it as a desired position for their controllers. In the system Architecture, the coordinator block decides about the skill that needs to be performed by agents. For instance, this blocks sends detect ball for agent A (drone) and locate player for agent B as a task. Then, path-planning block requests from the World-Model latest information about the target object position and velocity as well as the position and velocity of agents. Using information, the Path-Planning block will then generate reference point for an agent controller. As it is shown in Fig.1, it is assumed that the world model is able to provide position and velocity of objects like ball whether it has been updated by agent camera or not. In the latter case, the particle filter gives an estimation of ball position and velocity based on dynamics of the ball. Therefore, it is assumed that estimated information about an object which is assigned by coordinator is available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
There are two factors that have been addressed in the Path-Planning block. The first one is related to the case of multiple drone in order to avoid collision between them. Second, generating an optimal path as reference input for drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As we discussed earlier, the coordinator assigned a task to an agent for locating an object in a field. Subsequently, the world model would provide the path planner with the latest update of position and velocity of that object. Path-Planning block simply could us the position of the ball and send it to agent controller as a reference input. This could be a good decision when the agent and the object are relatively closed to each other. However, it is possible to take into account the velocity vector of the object in an efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As it is shown in Fig.2, in a case of far distance between drone and ball, the drone should track the position ahead of the object to meet it in the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curve trajectory (red line). However, if the estimated position of the ball in time ahead sent as a reference, the trajectory would be in a less curvy shape with less distance (blue line). This approach would result in better performance of tracking system, but more computational effort is needed. The problem that will be arises is the optimal time ahead t0 that should be set as a desired reference. To solve, we require a model of the drone motion with the controller to calculate the time it takes to reach a certain point given the initial condition of the drone. Then, in the searching Algorithm, for each time step ahead of the ball, the time to target (TT) for the drone will be calculated (see Fig.3). The target position is simply calculated based on the time ahead. The reference position is then the position that satisfies the equation t0=TT. Hence, the reference position would be [x(t+t0), y(t+t0)], instead of [x(t), y(t)]. It should be noted that, this approach wouldn’t be much effective in a case that the drone and object are close to each other. Furthermore, for the ground agents, that moves only in one direction, the same strategy could be applied. For the ground robot, the reference value should be determined only in moving direction of the turtle. Hence, only X component (turtle moving direction) of position and velocity of the interested object must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
<br />
<br />
=== Collision avoidance ===<br />
When Drones are flying above a field, the path planning should create a path for agents in a way that avoid collision between them. This can be done in collision avoidance block that has higher priority compared to optimal path planning that is calculated based on objective of drones (see Fig.4). Collision Avoidance-block is triggered when the drones state meet certain criteria that indicate imminent collision between them. Supervisory control then switch to the collision avoidance mode to repel the drones from getting closer. This is fulfilled by sending a relatively strong command to drones in a direction that maintain safe distance. Command as a velocity must be perpendicular to velocity vector of each drone. This is being sent to the LLC as a velocity command in the direction that results in collision avoidance and will be stopped after the drones are in safe positions. In this project, since we are dealing with only one drone, implementation of collision avoidance will not be conducted. However, it could be a possible area of interest to other to continue with this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
<center>[[File:worldmodel.jpg|thumb|center|750px|World Model and its processes]]</center><br />
==Storage==<br />
One task of the WM is to act like a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players), to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br />
Especially tasks 2) and 3) are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. These two relations mean that apparently tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br />
A solution to this conflict is to keep track of 2 hypotheses, which both represent a potential ball position. The first one is using a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one is using a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br />
Currently, the weak filter is not implemented explicitly, but rather its hypothesis is updated purely by new measurements. In case two consecutive measurements are further than 0.5 meters removed from the estimation at that time, the last one acts as the new initial value for the strong filter. <br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br />
<math>v_{new}= \alpha_{v}v_{old}+\alpha_{x}\frac{z_{new}-x_{old}}{dt}+\alpha_{z}\alpha_{x}\frac{z_{new}-z_{old}}{dt}</math><br><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, X_old the previous position (x,y) and dt the time since the previous measurement.<br><br />
The tunable parameters for the filter are given by table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. trust the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
<br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurement are both used by the same particle filter, as it does not matter from what source the measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In its current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to that for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
<br />
== Kalman filter ==<br />
<br />
== Particle filter ==<br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
== Drone ==<br />
In the autonomous referee project, commercially available AR Parrot Drone Elite Edition 2.0 is used for the refereeing issues. The built-in properties of the drone that given in the manufacturer’s website are listed below in Table 1. Note that only the useful properties are covered, the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product and it can be controlled via a mobile phone thanks to its free software (both for Android and iOS) and send high quality HD streaming videos to the mobile phone. The drone has a front camera whose capabilities are given in Table 1. It has its own built-in computer, controller, driver electronics etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, the drone own structure, control electronics and software are decided to use for positioning of the drone. Apart from that, the controlling of a drone is complicated and is also out of scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera and this camera is used to catch images. The camera is placed in front of the drone. However for the refereeing, it should look to the bottom side. Therefore it will be disassembled and will be connected to a swivel to tilt down 90 degrees. This will create some change in the structures. When this change is finished, it will be added here.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial and non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to reach some data on the drone including the images of the camera. The image processing will be achieved in MATLAB. However, taking snapshots from the drone camera directly using MATLAB is not possible with its built-in software. Therefore an indirect way is required and this causes some processing time. The best time obtained with the current capturing algorithm is 0.4 Hz using 360p standard resolution (640x360). Although the camera can capture images with higher resolution, processing will be achieved using this resolution to decrease the required processing time.<br />
<br />
====FOV Measurement ====<br />
One of the most important properties of a vision system is the field of view (FOV) angle. The definition of the field of view angle can be seen in the figure. The captured images has a ratio of 16:9. Using this fact and after some measurements the achieved measurements showed that it is near to 70° view although given that the camera has 92° diagonal FOV. The achieved measurements and obtained results are summarized in Table 2 . Here corresponding distance per pixel is calculated in standard resolution (640x360).<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <litte-endian><br />
<br />
Note that for all properties to initialize the UDP objects that are not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be seen as a block with expects a UDP packet containing a string as input and which gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that the both the input and output are doubles. To be more precise, the input is a vector of four values between -1 and 1 where the first two represent the tilt in front (x) and left (y) direction respectively. The third value is the speed in vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images with a framerate of 30 Hz to the laptop, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone itself is far from perfect and not critical as well. As long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball ==<br />
<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it was to be used as a referee. All the software that has been developed at TechUnited did not need any further expansion as some part of the extensive code could be used to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. In the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. Left of this robot, a copy including a cover is shown. This cover must prevent the robots from being damaged when they are colliding. Since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The python script can receive strings via UDP over Wi-Fi. Furthermore, it processes these strings and sends commands to the Arduino via USB. To control the drone with a Windows device, MATLAB functions are implemented. Moreover, an Android application is developed to be able to control the robot with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turle allow the robot to take images of the on-going game. With image processing algorithms useful information from the game can be extracted and a mapping of the game-state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. This location is with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the turtle) and maintained (updated regularly) in a real-time data-base (RTDb) which is called the WorldMap. The details on this can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a Robocup match, the participating robots, maintain this data-base locally. Therefore, the Turtle which is used for the referee system, has a locally stored global map of the environment. This information was needed to be extracted from the Turtle and fused with the other algorithms and software that was developed for the drone. These algorithms and software were created on MATLAB and Simulink while the TechUnited software is written in C and uses Ubuntu as the operating system. The player-robots from TechUnited, communicate with each other via the UDP communication protocol and this is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited which is used to communicate (i.e. send and receive date) with the Turtle. Of all the data that is received from the Turtle, only a part of it was handpicked as it suited the needs of the project the best. This data, as stated earlier is information on the location of the turtle, the ball and the players.<br> <br />
A small piece of code from the code-base of TechUnited was taken out. This piece consisted of functions which extracted the necessary information from the outputs generated by image processing running on the Turtle by listening to this information through the S-funcation [[sf_test_rMS_wMM.c]]''Italic text'' created in MATLAB’s environment and sent to the main computer (running on Windows) via the UDP Send and UDP Receive block in ''Simunlink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink; the code can be accessed through the repository.<br />
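To make the data exchange concrete, the sketch below shows a receiver for such a packet (Python, illustrative only; the project itself uses the Simulink UDP blocks). The packet layout of seven little-endian doubles is an assumption made for the sketch, not the actual TechUnited format.<br />
<pre>
# Illustrative sketch of the base-station side of the UDP link (not the actual
# TechUnited/Simulink implementation). The payload layout below -- seven little-endian
# doubles: turtle (x, y), ball (x, y), player (x, y) and a timestamp -- is an assumption.
import socket
import struct

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5001))          # assumed local port on the Windows PC

PACKET = struct.Struct("<7d")          # assumed payload layout

while True:
    payload, _ = sock.recvfrom(PACKET.size)
    turtle_x, turtle_y, ball_x, ball_y, player_x, player_y, t = PACKET.unpack(payload)
    # These values would then update the World Model, e.g. wm.set_ball((ball_x, ball_y), t).
    print(f"t={t:.2f}s  ball=({ball_x:.2f}, {ball_y:.2f})  turtle=({turtle_x:.2f}, {turtle_y:.2f})")
</pre>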
<br />
=References=<br />
<references/></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=39683Implementation MSD162017-05-04T20:02:30Z<p>Asinha: /* Ball position filter and sensor fusion */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Bound (BOOP)<br />
* Detect Collision<br />
<br />
The skills that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image processing. The TURTLE already has good image-processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we will not alter it and use it as is. Preferably we could use this software to also process the images from the drone. However, trying to understand years' worth of code in order to make it usable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use MATLAB's Image Processing Toolbox to process the drone images. The images coming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first convert the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
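As an illustration of this color-space step, the sketch below converts an RGB image to YCbCr and thresholds the chroma channels for a yellow/orange ball (Python/NumPy; the project itself uses MATLAB's Image Processing Toolbox, and the threshold values here are placeholders).<br />
<pre>
# Minimal sketch of the colour-space step used for ball detection (illustrative;
# the Cb/Cr thresholds below are placeholders, not tuned values).
import numpy as np

def rgb_to_ycbcr(img_rgb: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 uint8 RGB image to YCbCr (BT.601, full range)."""
    rgb = img_rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def ball_mask(img_rgb: np.ndarray) -> np.ndarray:
    """Rough mask for a yellow/orange ball: high Cr, low-to-mid Cb (placeholder thresholds)."""
    ycbcr = rgb_to_ycbcr(img_rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return (cr > 140) & (cb < 120)
</pre>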
<br />
=== Detect lines ===<br />
<br />
=== Detect balls ===<br />
<br />
=== Detect objects ===<br />
<br />
== Refereeing ==<br />
<br />
=== B.O.O.P. ===<br />
<br />
=== Collision detection ===<br />
<br />
<br />
== Locating skills ==<br />
<br />
=== Locate agents ===<br />
<br />
=== Locate objects ===<br />
<br />
<br />
== Path planning ==<br />
<br />
The path-planning block is mainly responsible for generating an optimal path for the agents and sending it as a desired position to their controllers. In the system architecture, the coordinator block decides which skill needs to be performed by each agent. For instance, this block sends 'detect ball' to agent A (the drone) and 'locate player' to agent B as tasks. The path-planning block then requests from the World Model the latest information about the position and velocity of the target object, as well as the position and velocity of the agents. Using this information, the path-planning block generates a reference point for an agent's controller. As shown in Fig.1, it is assumed that the world model is able to provide the position and velocity of objects such as the ball, whether or not they have been updated by an agent camera. In the latter case, the particle filter gives an estimate of the ball position and velocity based on the dynamics of the ball. Therefore, it is assumed that estimated information about the object assigned by the coordinator is available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
There are two factors that have been addressed in the Path-Planning block. The first one is related to the case of multiple drone in order to avoid collision between them. Second, generating an optimal path as reference input for drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As discussed earlier, the coordinator assigns an agent the task of locating an object in the field. Subsequently, the world model provides the path planner with the latest update of the position and velocity of that object. The path-planning block could simply use the position of the ball and send it to the agent controller as a reference input. This is a reasonable choice when the agent and the object are relatively close to each other. However, it is possible to take the velocity vector of the object into account in a more efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As shown in Fig.2, when the drone is far from the ball, the drone should track a position ahead of the object so that it meets the object at the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curved trajectory (red line). However, if the estimated position of the ball some time ahead is sent as a reference, the trajectory is less curved and shorter (blue line). This approach results in better tracking performance, but more computational effort is needed. The problem that arises is which look-ahead time t0 should be used for the reference. To solve this, we require a model of the drone motion including its controller, to calculate the time it takes to reach a certain point given the initial condition of the drone. In the search algorithm, for each candidate look-ahead time of the ball, the time to target (TT) for the drone is calculated (see Fig.3). The target position is simply the predicted ball position at that look-ahead time. The reference position is then the position that satisfies t0 = TT. Hence, the reference position is [x(t+t0), y(t+t0)] instead of [x(t), y(t)]. It should be noted that this approach is not very effective when the drone and the object are close to each other. Furthermore, the same strategy can be applied to the ground agents, which move in only one direction. For the ground robot, the reference value should be determined only in the moving direction of the Turtle; hence, only the X component (the Turtle's moving direction) of the position and velocity of the object of interest must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
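The search for the look-ahead time can be implemented as a simple scan over candidate values of t0, as sketched below (Python, illustrative). The constant-velocity ball prediction and the capped-speed drone model are simplifying assumptions; only the t0 = TT matching follows the description above.<br />
<pre>
# Sketch of the look-ahead search (t0 = TT). The drone motion model and the
# constant-velocity ball prediction are simplifying assumptions.
import numpy as np

def time_to_target(drone_pos, drone_vel, target, v_max=1.5):
    """Placeholder drone model: time to reach 'target' at a capped average speed."""
    return float(np.linalg.norm(np.asarray(target) - np.asarray(drone_pos)) / v_max)

def reference_position(ball_pos, ball_vel, drone_pos, drone_vel,
                       t_max=5.0, dt=0.05, tol=0.05):
    """Scan look-ahead times and return the predicted ball position where t0 ~= TT."""
    ball_pos = np.asarray(ball_pos, dtype=float)
    ball_vel = np.asarray(ball_vel, dtype=float)
    for t0 in np.arange(0.0, t_max, dt):
        target = ball_pos + ball_vel * t0          # predicted ball position at t + t0
        tt = time_to_target(drone_pos, drone_vel, target)
        if abs(tt - t0) < tol:
            return target                          # reference [x(t+t0), y(t+t0)]
    return ball_pos + ball_vel * t_max             # fall back to the furthest prediction
</pre>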
<br />
<br />
=== Collision avoidance ===<br />
When multiple drones are flying above the field, the path planning should create paths for the agents that avoid collisions between them. This is done in the collision-avoidance block, which has a higher priority than the optimal path planning that is calculated based on the objectives of the drones (see Fig.4). The collision-avoidance block is triggered when the drone states meet certain criteria that indicate an imminent collision between them. The supervisory control then switches to the collision-avoidance mode to keep the drones from getting closer. This is achieved by sending a relatively strong command to the drones in a direction that maintains a safe distance. The commanded velocity must be perpendicular to the velocity vector of each drone. It is sent to the LLC as a velocity command in the direction that results in collision avoidance, and it is stopped once the drones are in safe positions. In this project, since we are dealing with only one drone, collision avoidance is not implemented. However, it could be a possible area of interest for others who continue this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
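A minimal sketch of the repulsion command is given below (Python, illustrative). The trigger distance and the command magnitude are placeholders; only the rule that the commanded velocity is perpendicular to each drone's velocity and directed away from the other drone follows the description above.<br />
<pre>
# Sketch of the perpendicular repulsion command described above. The trigger
# distance and command magnitude are placeholder values.
import numpy as np

def avoidance_commands(p1, v1, p2, v2, d_safe=1.0, v_cmd=1.0):
    """Return (cmd1, cmd2) velocity commands, or (None, None) when no action is needed."""
    p1, v1, p2, v2 = map(lambda a: np.asarray(a, dtype=float), (p1, v1, p2, v2))
    if np.linalg.norm(p2 - p1) >= d_safe:
        return None, None
    def repel(v, away):
        # Unit vector perpendicular to this drone's velocity...
        perp = np.array([-v[1], v[0]])
        n = np.linalg.norm(perp)
        perp = perp / n if n > 1e-6 else away / (np.linalg.norm(away) + 1e-6)
        # ...signed so that it points away from the other drone.
        if np.dot(perp, away) < 0:
            perp = -perp
        return v_cmd * perp
    return repel(v1, p1 - p2), repel(v2, p2 - p1)
</pre>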
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of this environment. Since this environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion if applicable, stores the filtered information, and monitors itself to obtain outputs (flags) going to the supervisor block. Figure 1 shows these WM processes and the position within the system. <br />
==Storage==<br />
One task of the WM is to act as a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players) to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
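A minimal sketch of this storage idea is given below (Python, illustrative; the actual World Model class is not integrated and its field names may differ). Values can be read freely, but are only changed through explicit ‘set’ functions.<br />
<pre>
# Minimal sketch of the World Model storage idea: values can be read freely but are
# only changed through explicit 'set' functions. Field names are illustrative.
import time

class WorldModel:
    def __init__(self):
        self._ball = None        # (x, y) in field coordinates
        self._drone = None       # (x, y, z)
        self._turtle = None      # (x, y)
        self._players = {}       # player id -> (x, y)
        self._stamps = {}        # entry -> last update time

    def set_ball(self, pos, t=None):
        self._ball = tuple(pos)
        self._stamps["ball"] = t if t is not None else time.time()

    def set_player(self, player_id, pos, t=None):
        self._players[player_id] = tuple(pos)
        self._stamps[f"player_{player_id}"] = t if t is not None else time.time()

    @property
    def ball(self):
        return self._ball        # read access only; no direct write to the internal state
</pre>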
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, as well as originating from multiple sources, a filter can offer some advantages. It is chosen to use a particle filter, also known as Monte Carlo Localization. The main reason is that a particle filter can handle multiple object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks:<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br />
Tasks 2) and 3) in particular are conflicting, since the filter cannot determine whether a measurement is “off-track” due to noise, or due to an actual change in direction (e.g. caused by a collision with a player). In contrast, tasks 1) and 3) are closely related in the sense that if measurement noise is filtered out, the prediction will be more accurate. Together, these two relations mean that tasks 1) and 2) are conflicting as well, and that a trade-off has to be made.<br><br />
The main reason to know the (approximate) ball position at all times is that the supervisor and coordinator can function properly. For example, the ball moves out of the field of view (FOV) of the system, and the supervisor transitions to the ‘Search for ball’ state. The coordinator now needs to assign the appropriate skills to each agent, and knowing where the ball approximately is, makes this easier. This implies that task 1) is the most important one, although task 2) still has some significance (e.g. when the ball changes directions, and then after a few measurements the ball moves out of the FOV).<br><br />
A solution to this conflict is to keep track of two hypotheses, which both represent a potential ball position. The first one uses a ‘strong’ filter, in the sense that it filters out measurements to a degree where the estimated ball hardly changes direction. The second one uses a ‘weak’ filter, in the sense that this estimate hardly filters out anything, in order to quickly detect a change in direction. The filter then keeps track of whether these hypotheses are more than a certain distance (related to the typical measurement noise) apart, for more than a certain number of measurements (i.e. one outlier could indicate a false positive in the image processing, while multiple outliers in the same vicinity probably indicate a change in direction). When this occurs, the weak filter acts as the new initial position of the strong filter, with the new velocity corresponding to the change in direction.<br><br />
This can be further expanded on by predicting collisions between the ball and another object (e.g. players), to also predict the moment in time where a change in direction will take place. This is also useful to know when an outlier measurement really is a false positive, since the ball cannot change direction on its own.<br><br />
Currently, the weak filter is not implemented explicitly; rather, its hypothesis is updated purely by new measurements. In case two consecutive measurements are more than 0.5 meters away from the estimate at that time, the last one acts as the new initial value for the strong filter. <br><br />
When a new measurement arrives, the new particle velocity v_new is calculated according to<br><br />
<math>v_{new}= \alpha_{v}v_{old}+\alpha_{x}\frac{z_{new}-x_{old}}{dt}+\alpha_{z}\alpha_{x}\frac{z_{new}-z_{old}}{dt}</math><br><br />
with v_old the previous particle velocity, z_new and z_old the new and previous measurements, x_old the previous position (x,y) and dt the time since the previous measurement.<br><br />
The tunable parameters for the filter are given in Table 1. Increasing α_v makes the filter ‘stronger’, increasing α_x makes the filter ‘weaker’ (i.e. it trusts the measurements more) and increasing α_z makes the filter ‘stronger’ with respect to the direction, but increases the average error of the prediction (i.e. the prediction might run parallel to the measurements).<br><br />
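For concreteness, the velocity update and the constant-velocity prediction can be sketched as follows (Python, illustrative; the project implementation is in MATLAB and the α values below are placeholders, not the tuned parameters).<br />
<pre>
# Sketch of the particle velocity update described above. The alpha values are
# placeholders, not the tuned parameters from the parameter table.
import numpy as np

ALPHA_V, ALPHA_X, ALPHA_Z = 0.6, 0.3, 0.5   # placeholder tuning

def update_velocity(v_old, x_old, z_new, z_old, dt):
    """v_new = a_v*v_old + a_x*(z_new - x_old)/dt + a_z*a_x*(z_new - z_old)/dt"""
    v_old, x_old = np.asarray(v_old, float), np.asarray(x_old, float)
    z_new, z_old = np.asarray(z_new, float), np.asarray(z_old, float)
    return (ALPHA_V * v_old
            + ALPHA_X * (z_new - x_old) / dt
            + ALPHA_Z * ALPHA_X * (z_new - z_old) / dt)

def predict_position(x_old, v_new, dt):
    """Constant-velocity prediction used between measurements."""
    return np.asarray(x_old, float) + v_new * dt
</pre>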
<br />
As said before, measurements of the ball originate from multiple sources, i.e. the drone and the turtle. These measurements are all used by the same particle filter, as it does not matter from which source a measurement comes. Ideally, these sensors pass along a confidence parameter, like a variance in case of a normally distributed uncertainty. This variance determines how much the measurement is trusted, and makes a distinction between accurate and inaccurate sensors. In its current implementation, this variance is fixed, irrespective of the source, but the code is easily adaptable to integrate it.<br><br />
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. In order to track them even when they are not in the current field of view, as well as to deal with multiple sensors, again a particle filter is used. This particle filter is similar to the one for the ball position, with the distinction that it needs to deal with the case that the sensor(s) can detect multiple players. Thus, the system needs to somehow know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest neighbor search for each incoming position measurement, to match them to the last known positions of the players in the field. However, the implemented algorithm is not optimal in case this set of nearest neighbors does not correspond to a set of unique players (i.e. in case two measurements are both matched to the same player). In this case, the algorithm finds the second nearest neighbor for the second measured player. With a high update frequency and only two players, this generally is not a problem. However, in case of a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is again handled the same way as with the ball position, i.e. any number of sensors can be the input for this filter, where they would again ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed irrespective of the source of the measurement.<br><br />
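A minimal sketch of the ‘Match’ step is given below (Python, illustrative). It implements the greedy nearest-neighbor assignment with the next-nearest fallback described above; as noted, this is not an optimal assignment.<br />
<pre>
# Sketch of the 'Match' step: assign each measurement to the nearest known player,
# falling back to the next-nearest player when one is already taken. Greedy and
# order-dependent, as noted above; not an optimal assignment.
import numpy as np

def match(measurements, player_positions):
    """Return a list mapping each measurement index to a player id (or None)."""
    players = list(player_positions.items())          # [(player_id, (x, y)), ...]
    taken, assignment = set(), []
    for z in measurements:
        z = np.asarray(z, dtype=float)
        order = sorted(players, key=lambda p: np.linalg.norm(z - np.asarray(p[1])))
        chosen = next((pid for pid, _ in order if pid not in taken), None)
        if chosen is not None:
            taken.add(chosen)
        assignment.append(chosen)
    return assignment
</pre>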
<br />
== Kalman filter ==<br />
<br />
== Particle filter ==<br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for refereeing. The built-in properties of the drone that are given on the manufacturer’s website are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (for both Android and iOS) and it streams high-quality HD video to the phone. The drone has a front camera whose capabilities are given in Table 1. It has its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project, it was decided to use the drone’s own structure, control electronics and software for positioning the drone. Apart from that, designing a low-level controller for a drone is complicated and out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is placed at the front of the drone; however, for refereeing it should look downward. Therefore, it will be disassembled and connected to a swivel so that it can be tilted down 90 degrees. This requires some changes to the structure; when this modification is finished, it will be documented here.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB. However, taking snapshots from the drone camera directly in MATLAB is not possible with its built-in software. Therefore an indirect route is required, and this costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at a higher resolution, processing is done at this resolution to reduce the required processing time.<br />
<br />
====FOV Measurement ====<br />
One of the most important properties of a vision system is the field-of-view (FOV) angle. The definition of the field-of-view angle can be seen in the figure. The captured images have an aspect ratio of 16:9. Using this fact, measurements showed a FOV of close to 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and the obtained results are summarized in Table 2. Here the corresponding distance per pixel is calculated for the standard resolution (640x360).<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
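The distance per pixel follows directly from the flying height and the FOV angle. The sketch below shows this relation (Python, illustrative); the 70° angle and the 2 m altitude are example inputs, not the calibrated values from Table 2.<br />
<pre>
# Relation between flying height, FOV angle and ground resolution (illustrative).
import math

def ground_coverage(height_m, fov_deg, pixels):
    """Width of the ground strip seen by a downward-looking camera, and distance per pixel."""
    width_m = 2.0 * height_m * math.tan(math.radians(fov_deg) / 2.0)
    return width_m, width_m / pixels

# Example: 2 m altitude, ~70 deg FOV, 640 px image width (assumed example numbers).
width, per_px = ground_coverage(2.0, 70.0, 640)
print(f"covers {width:.2f} m -> {per_px * 100:.2f} cm per pixel")
</pre>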
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all UDP object properties not mentioned here, MATLAB's default values are used.<br />
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference of the horizontal plane has to be set for the drone internal control system by sending the command FTRIM. <ref name=sdk /><br />
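Putting the above together, the initialization can be sketched as follows (Python, illustrative; the project uses MATLAB UDP objects). The AT command syntax follows the AR.Drone SDK referenced above, but the navdata wake-up packet and the sequence handling are simplified assumptions.<br />
<pre>
# Sketch of the drone initialization: open the two UDP links, wake up the navdata
# stream and send FTRIM to set the horizontal reference. Simplified; see the
# AR.Drone SDK for the full initiation sequence.
import socket

DRONE_IP = "192.168.1.1"
CTRL_PORT, NAV_PORT = 5556, 5554

ctrl = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
nav = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
nav.bind(("0.0.0.0", NAV_PORT))
nav.settimeout(0.001)                       # 1 ms timeout, as in the initialization above

seq = 1
def send_at(cmd):
    """Send one argument-less AT command with an increasing sequence number."""
    global seq
    ctrl.sendto(f"{cmd}={seq}\r".encode(), (DRONE_IP, CTRL_PORT))
    seq += 1

nav.sendto(b"\x01\x00\x00\x00", (DRONE_IP, NAV_PORT))   # assumed navdata wake-up packet
send_at("AT*FTRIM")                                      # set the horizontal plane reference
</pre>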
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be regarded as a block which expects a UDP packet containing a string as input and which gives an array of 500 bytes as output. To make communicating with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. To be more precise, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the forward (x) and left (y) direction respectively. The third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. A sketch of the command side of such a wrapper is given after the output list below. The output of the block is as follows:<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
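The command side of such a wrapper could look as follows (Python, illustrative). The AT*PCMD encoding, in which each 32-bit float is reinterpreted as a signed integer, follows the AR.Drone SDK; the argument order and sign conventions are simplified here, and the navdata decoding (the 500-byte output side) is omitted.<br />
<pre>
# Sketch of the command side of the wrapper: four doubles in [-1, 1] are translated
# into one AT*PCMD string. Simplified; sign conventions and navdata parsing omitted.
import socket
import struct

DRONE_IP, CTRL_PORT = "192.168.1.1", 5556
ctrl = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
seq = 1

def _f2i(x: float) -> int:
    """Reinterpret a 32-bit float's bits as a signed integer (AT command encoding)."""
    return struct.unpack("<i", struct.pack("<f", x))[0]

def move(tilt_x: float, tilt_y: float, v_z: float, w_psi: float) -> None:
    """Send one progressive movement command built from the four wrapper inputs."""
    global seq
    args = ",".join(str(_f2i(v)) for v in (tilt_y, tilt_x, v_z, w_psi))  # SDK order: roll, pitch, gaz, yaw
    ctrl.sendto(f"AT*PCMD={seq},1,{args}\r".encode(), (DRONE_IP, CTRL_PORT))
    seq += 1
</pre>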
<br />
== Top-Camera ==<br />
The topcam is a camera that is fixed above the playing field. This camera is used to estimate the location and orientation of the drone. This estimation is used as feedback for the drone to position itself to a desired location.<br />
The topcam can stream images at a frame rate of 30 Hz to the laptop, but searching the image for the drone (i.e. image processing) might be slower. This is not a problem, since the positioning of the drone itself is far from perfect and not critical either. As long as the target of interest (ball, players) is within the field of view of the drone, it is acceptable.<br />
<br />
== Ai-Ball ==<br />
<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle is constructed and programmed to be a football playing robot. The details on the mechanical design and the software developed for the robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it is used as a referee. The software that has been developed at TechUnited did not need any further expansion, as part of the extensive code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki-page.<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. This robot contains a Raspberry Pi, an Arduino and three motors (including encoders and controllers) to control three omni-wheels independently. To its left, a copy fitted with a cover is shown. This cover must prevent the robots from being damaged when they collide: since one of the goals of the project is to detect collisions, the robots must be able to survive more than one collision.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script receives strings via UDP over Wi-Fi, parses them and sends the corresponding commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application was developed so that the robot can also be controlled with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image-processing algorithms, useful information from the game can be extracted and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. This location is with respect to the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDb) called the WorldMap. The details on this can be obtained from the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, the participating robots maintain this database locally. Therefore, the Turtle that is used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software that were developed for the drone. These algorithms and software were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as the operating system. The player robots from TechUnited communicate with each other via the UDP protocol; this is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (Development PC) from TechUnited, which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data that is received from the Turtle, only the part that best suited the needs of the project was picked. This data, as stated earlier, is the information on the location of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken from the TechUnited code base. It consists of functions that extract the necessary information from the outputs generated by the image processing running on the Turtle; this information is read through the S-function [[sf_test_rMS_wMM.c]] in the MATLAB environment and sent to the main computer (running on Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink; the code can be accessed through the repository.<br />
<br />
=References=<br />
<references/></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=Implementation_MSD16&diff=39681Implementation MSD162017-05-04T19:53:38Z<p>Asinha: /* Ball position filter and sensor fusion */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
<br />
= Tasks =<br />
<br />
The tasks which are implemented are:<br />
* Detect Ball Out Of Bound (BOOP)<br />
* Detect Collision<br />
<br />
The skill that are needed to achieve these tasks are explained in the section ''Skills''.<br />
<br />
= Skills =<br />
<br />
== Detection skills ==<br />
For the B.O.O.P. detection, it is necessary to know where the ball is and where the outer field lines are. To detect a collision the system needs to be able to detect players. Since we decided to use agents with cameras in the system, detecting balls, lines, and players requires some image-processing. The TURTLE already has good image processing software on its internal computer. This software is the product of years of development and has already been tested thoroughly, which is why we will not alter it and use it as is. Preferably we could use this software to also process the images from the drone. However, trying to understand years worth of code in order to make it useable for the drone camera (AI-ball) would take much more time than developing our own code. For this project, we decided to use Matlab's image processing toolbox to process the drone images. The images comming from the AI-ball are in the RGB color space. For detecting the field, the lines, objects and (yellow or orange) balls, it is more convenient to first transpose the image to the [https://en.wikipedia.org/wiki/YCbCr YCbCr] color space.<br />
<br />
=== Detect lines ===<br />
<br />
=== Detect balls ===<br />
<br />
=== Detect objects ===<br />
<br />
== Refereeing ==<br />
<br />
=== B.O.O.P. ===<br />
<br />
=== Collision detection ===<br />
<br />
<br />
== Locating skills ==<br />
<br />
=== Locate agents ===<br />
<br />
=== Locate objects ===<br />
<br />
<br />
== Path planning ==<br />
<br />
The path planning block is mainly responsible for generating optimal path for agents to send it as a desired position for their controllers. In the system Architecture, the coordinator block decides about the skill that needs to be performed by agents. For instance, this blocks sends detect ball for agent A (drone) and locate player for agent B as a task. Then, path-planning block requests from the World-Model latest information about the target object position and velocity as well as the position and velocity of agents. Using information, the Path-Planning block will then generate reference point for an agent controller. As it is shown in Fig.1, it is assumed that the world model is able to provide position and velocity of objects like ball whether it has been updated by agent camera or not. In the latter case, the particle filter gives an estimation of ball position and velocity based on dynamics of the ball. Therefore, it is assumed that estimated information about an object which is assigned by coordinator is available.<br />
<br />
<center>[[File:pp-Flowchart.jpg|thumb|center|750px|Fig.1: Flowchart for path planning skill]]</center><br />
<br />
There are two factors that have been addressed in the Path-Planning block. The first one is related to the case of multiple drone in order to avoid collision between them. Second, generating an optimal path as reference input for drone controller.<br />
<br />
<br />
=== Reference generator ===<br />
As we discussed earlier, the coordinator assigned a task to an agent for locating an object in a field. Subsequently, the world model would provide the path planner with the latest update of position and velocity of that object. Path-Planning block simply could us the position of the ball and send it to agent controller as a reference input. This could be a good decision when the agent and the object are relatively closed to each other. However, it is possible to take into account the velocity vector of the object in an efficient way. <br />
<br />
<br />
<center>[[File:pp-RefGenerator.jpg|thumb|center|750px|Fig.2: Trajectory of drone]]</center><br />
<br />
As it is shown in Fig.2, in a case of far distance between drone and ball, the drone should track the position ahead of the object to meet it in the intersection of the velocity vectors. Using the current ball position as a reference input would result in a curve trajectory (red line). However, if the estimated position of the ball in time ahead sent as a reference, the trajectory would be in a less curvy shape with less distance (blue line). This approach would result in better performance of tracking system, but more computational effort is needed. The problem that will be arises is the optimal time ahead t0 that should be set as a desired reference. To solve, we require a model of the drone motion with the controller to calculate the time it takes to reach a certain point given the initial condition of the drone. Then, in the searching Algorithm, for each time step ahead of the ball, the time to target (TT) for the drone will be calculated (see Fig.3). The target position is simply calculated based on the time ahead. The reference position is then the position that satisfies the equation t0=TT. Hence, the reference position would be [x(t+t0), y(t+t0)], instead of [x(t), y(t)]. It should be noted that, this approach wouldn’t be much effective in a case that the drone and object are close to each other. Furthermore, for the ground agents, that moves only in one direction, the same strategy could be applied. For the ground robot, the reference value should be determined only in moving direction of the turtle. Hence, only X component (turtle moving direction) of position and velocity of the interested object must be taken into account.<br />
<br />
<center>[[File:pp-searchAlg.jpg|thumb|center|750px|Fig.3: Searching algorithm for time ahead ]]</center><br />
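A minimal MATLAB sketch of this search is given below. It assumes a constant-velocity prediction of the ball and uses a hypothetical ''droneTimeToTarget'' placeholder for the drone motion model with its controller; it illustrates the idea and is not the project code.<br />
<pre>
% Sketch of the look-ahead search (illustrative only, not the project code).
% ballPos, ballVel, dronePos, droneVel: 1x2 vectors taken from the World Model.
function ref = referenceGenerator(ballPos, ballVel, dronePos, droneVel)
    tCandidates = 0:0.1:5;              % candidate look-ahead times t0 [s]
    bestErr = inf;
    ref = ballPos;                      % fall-back: current ball position
    for t0 = tCandidates
        target = ballPos + t0*ballVel;  % constant-velocity prediction of the ball
        TT = droneTimeToTarget(dronePos, droneVel, target);
        if abs(TT - t0) < bestErr       % look for the t0 that satisfies t0 = TT
            bestErr = abs(TT - t0);
            ref = target;
        end
    end
end

function TT = droneTimeToTarget(p, v, target)
    % Placeholder for the drone motion model with its controller; here simply
    % straight-line flight at an assumed average speed of 1 m/s.
    TT = norm(target - p) / 1.0;
end
</pre>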
<br />
<br />
=== Collision avoidance ===<br />
When multiple drones are flying above the field, the path planning should create paths for the agents in a way that avoids collisions between them. This is handled by a collision-avoidance block that has a higher priority than the optimal path planning computed from the drones' objectives (see Fig.4). The collision-avoidance block is triggered when the drone states meet certain criteria that indicate an imminent collision. The supervisory control then switches to the collision-avoidance mode to keep the drones from getting closer. This is achieved by sending each drone a relatively strong velocity command, perpendicular to its own velocity vector, in the direction that maintains a safe distance. This command is sent to the LLC and is stopped once the drones are back in safe positions. Since this project deals with only one drone, collision avoidance is not implemented; it could, however, be an area of interest for future work on this project.<br />
<br />
<center>[[File:pp-collisionAvoid.jpg|thumb|center|750px|Fig.4: Collision Avoidance Block Diagram]]</center><br />
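The sketch below illustrates how such a repulsion command could be computed for one drone; the function name, the safety distance dSafe and the repulsion speed vRepel are assumptions, since this block was not implemented in the project.<br />
<pre>
% Illustrative sketch of the repulsion command (not implemented in this project).
% p1, p2: positions of the two drones (1x2); v1: velocity of drone 1 (1x2).
function cmd = collisionAvoidanceCmd(p1, p2, v1, dSafe, vRepel)
    cmd = [0 0];
    if norm(p1 - p2) >= dSafe
        return;                         % no imminent collision
    end
    n = [-v1(2), v1(1)];                % direction perpendicular to v1
    if norm(n) < 1e-6
        n = p1 - p2;                    % drone hovering: push straight away instead
    end
    n = n / norm(n);
    if dot(n, p1 - p2) < 0              % choose the perpendicular pointing away from drone 2
        n = -n;
    end
    cmd = vRepel * n;                   % strong velocity command for the LLC
end
</pre>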
<br />
= World Model =<br />
In order to perform tasks in an environment, robots need an internal interpretation of that environment. Since the environment is dynamic, sensor data needs to be processed and continuously incorporated into the so-called World Model (WM) of a robot application. Within the system architecture, the WM can be seen as the central block which receives input from the skills, processes these inputs in filters and applies sensor fusion where applicable, stores the filtered information, and monitors itself to produce outputs (flags) for the supervisor block. Figure 1 shows these WM processes and their position within the system. <br />
==Storage==<br />
One task of the WM is to act as a storage unit. It saves the last known positions of several objects (ball, drone, turtle and all players) to represent how the system perceives the environment at that point in time. This information can be accessed globally, but should only be changed by specific skills. The World Model class (not integrated) accomplishes this by requiring specific ‘set’ functions to be called to change the values inside the WM, as shown by Table 1. This prevents processes from accidentally overwriting WM data.<br />
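As an illustration of this pattern, a minimal MATLAB class could expose read access through its properties while restricting writes to dedicated ‘set’ functions; the class name, fields and units below are illustrative, not the actual (non-integrated) class.<br />
<pre>
% Minimal sketch of the storage idea; names, fields and units are illustrative.
classdef WorldModel < handle
    properties (SetAccess = private)
        ballPos   = [NaN NaN];   % last known ball position [m]
        dronePos  = [NaN NaN];   % last known drone position [m]
        playerPos = [];          % N x 2 matrix of player positions [m]
    end
    methods
        % Only the dedicated skills are supposed to call these 'set' functions.
        function setBallPos(obj, pos)
            obj.ballPos = pos;
        end
        function setDronePos(obj, pos)
            obj.dronePos = pos;
        end
        function setPlayerPos(obj, pos)
            obj.playerPos = pos;
        end
    end
end
</pre>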
==Ball position filter and sensor fusion==<br />
For the system it is useful to know where the ball is located within the field at all times. Since measurements of the ball position are inaccurate and irregular, and originate from multiple sources, a filter offers clear advantages. A particle filter, also known as Monte Carlo localization, was chosen. The main reason is that a particle filter can handle multiple-object tracking, which will prove useful when this filter is adapted for player detection, but also for multiple-hypothesis ball tracking, as will be explained in this document. Ideally, this filter should perform three tasks (a simplified sketch of one filter step is given after the list):<br><br />
1) Predict ball position based on previous measurement<br><br />
2) Adequately deal with sudden change in direction<br><br />
3) Filter out measurement noise<br><br />
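The sketch below shows one predict/update/resample step of such a filter in MATLAB. The constant-velocity motion model, the Gaussian measurement likelihood and the noise parameters sigmaProc/sigmaMeas are assumptions made for illustration; the project's filter may differ in these choices.<br />
<pre>
% One particle-filter step for the ball (illustrative; parameters are assumptions).
% particles: N x 4 matrix [x y vx vy]; z: measured position [x y], or [] if none.
function particles = ballParticleFilterStep(particles, z, dt, sigmaProc, sigmaMeas)
    N = size(particles, 1);
    % 1) Predict: constant-velocity model plus process noise
    particles(:,1:2) = particles(:,1:2) + dt*particles(:,3:4);
    particles = particles + sigmaProc*randn(N, 4);
    if isempty(z)
        return;                                   % no measurement: prediction only
    end
    % 2) Update: weight every particle by the measurement likelihood
    d2 = sum((particles(:,1:2) - repmat(z, N, 1)).^2, 2);
    w  = exp(-d2 / (2*sigmaMeas^2));
    w  = w / sum(w);
    % 3) Systematic resampling
    edges = cumsum(w); edges(end) = 1;
    u = (rand + (0:N-1)') / N;
    idx = zeros(N, 1); j = 1;
    for i = 1:N
        while u(i) > edges(j)
            j = j + 1;
        end
        idx(i) = j;
    end
    particles = particles(idx, :);
end
</pre>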
<br />
==Player position filter and sensor fusion==<br />
In order to detect collisions, the system needs to know where the players are. More specifically, it needs to detect at least all but one of the players to be able to detect any collision between two players. To track them even when they are not in the current field of view, as well as to deal with multiple sensors, a particle filter is again used. This particle filter is similar to the one for the ball position, with the distinction that it needs to deal with the case that the sensor(s) detect multiple players. Thus, the system needs to know which measurement corresponds to which player. This is handled by the ‘Match’ function, nested in the particle filter function. <br><br />
In short, this ‘Match’ function matches the incoming set of measured positions to the players that are closest by. It performs a nearest-neighbour search for each incoming position measurement, to match it to the last known positions of the players in the field. The implemented algorithm is not optimal when this set of nearest neighbours does not correspond to a set of unique players (i.e. when two measurements are both matched to the same player); in that case, the algorithm assigns the second nearest neighbour to the second measured player. With a high update frequency and only two players this is generally not a problem. However, with a larger number of players, which could regularly enter and leave the field of view of a particular sensor, this might decrease the performance of the refereeing system.<br><br />
Sensor fusion is handled in the same way as for the ball position, i.e. any number of sensors can be the input for this filter, where they would ideally also transfer a confidence parameter. Here, this confidence parameter is again fixed, irrespective of the source of the measurement.<br><br />
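A compact MATLAB sketch of the matching idea is shown below; it illustrates the nearest-neighbour assignment with the second-nearest fallback and is not the integrated ‘Match’ function.<br />
<pre>
% Sketch of the 'Match' idea (illustrative, not the integrated function).
% meas: M x 2 measured positions; known: P x 2 last known player positions.
% Returns, per measurement, the index of the player it is assigned to.
function assignment = matchPlayers(meas, known)
    M = size(meas, 1);
    P = size(known, 1);
    assignment = zeros(M, 1);
    taken = false(P, 1);
    for i = 1:M
        d = sqrt(sum((known - repmat(meas(i,:), P, 1)).^2, 2));
        [~, order] = sort(d);           % nearest neighbours first
        k = 1;
        while k < P && taken(order(k))
            k = k + 1;                  % fall back to the next-nearest player
        end
        assignment(i) = order(k);
        taken(order(k)) = true;
    end
end
</pre>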
<br />
== Kalman filter ==<br />
<br />
== Particle filter ==<br />
<br />
== Estimator ==<br />
<br />
= Hardware =<br />
== Drone ==<br />
In the autonomous referee project, the commercially available Parrot AR.Drone 2.0 Elite Edition is used for the refereeing tasks. The built-in properties of the drone, as given on the manufacturer’s website, are listed below in Table 1. Note that only the relevant properties are covered; the internal properties of the drone are excluded.<br />
<br />
[[File:Table1.png|thumb|centre|500px]]<br />
<br />
The drone is designed as a consumer product: it can be controlled via a mobile phone thanks to its free software (available for both Android and iOS) and streams high-quality HD video to the phone. The drone has a front camera whose capabilities are given in Table 1, and it has its own built-in computer, controller, driver electronics, etc. Since it is a consumer product, its design, body and controller are very robust. Therefore, in this project the drone’s own structure, control electronics and software are used for positioning the drone. Apart from that, low-level control of a drone is complicated and out of the scope of the project.<br />
<br />
===Experiments, Measurements, Modifications===<br />
==== Swiveled Camera ====<br />
As mentioned before, the drone has its own camera, which is used to capture images. The camera is mounted at the front of the drone, but for refereeing it should look downwards. It will therefore be disassembled and connected to a swivel so that it can tilt down 90 degrees. This requires some changes to the structure; once these changes are finished, they will be documented here.<br />
<br />
==== Software Restrictions on Image Processing ====<br />
Using a commercial, non-modifiable drone in this manner brings some difficulties. Since the source code of the drone is not open, it is very hard to access some of the data on the drone, including the camera images. The image processing is done in MATLAB; however, taking snapshots from the drone camera directly in MATLAB is not possible with the drone's built-in software. An indirect route is therefore required, which costs processing time. The best rate obtained with the current capturing algorithm is 0.4 Hz at the standard 360p resolution (640x360). Although the camera can capture images at a higher resolution, processing is done at this resolution to reduce the required processing time.<br />
<br />
====FOV Measurement ====<br />
One of the most important properties of a vision system is its field of view (FOV) angle; the definition of this angle is shown in the figure below. The captured images have a 16:9 aspect ratio. Using this fact, the measurements showed a FOV close to 70°, even though the camera is specified with a 92° diagonal FOV. The measurements and the results obtained from them are summarized in Table 2; the corresponding distance per pixel is calculated at the standard resolution (640x360).<br />
[[File:Field1.png|thumb|centre|500px]]<br />
[[File:Table2.png|thumb|centre|500px]]<br />
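The relation between altitude, FOV and ground coverage can be checked with the short MATLAB calculation below. It assumes that the measured ~70° is the horizontal FOV and uses an example altitude of 2 m; the actual measured numbers are the ones in Table 2.<br />
<pre>
% Relation between altitude, FOV and ground coverage (illustrative check).
% Assumption: the measured ~70 degrees is the horizontal FOV; the altitude
% below is an example value.
h      = 2.0;                       % assumed drone altitude [m]
fovH   = 70;                        % horizontal FOV [deg]
width  = 2 * h * tand(fovH/2);      % ground width covered by the image [m]
height = width * 9/16;              % 16:9 aspect ratio
mPerPx = width / 640;               % metres per pixel at 640x360
fprintf('coverage: %.2f x %.2f m, %.1f mm per pixel\n', width, height, 1e3*mPerPx);
</pre>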
<br />
=== Initialization ===<br />
The following properties have to be initialized to be able to use the drone. For the particular drone that is used during this project, these properties have the values indicated by <value>:<br />
* SSID <ardrone2><br />
* Remote host <192.168.1.1><br />
* Control<br />
** Local port <5556><br />
* Navdata<br />
** Local port <5554><br />
** Timeout <1 ms><br />
** Input buffer size <500 bytes><br />
** Byte order <little-endian><br />
<br />
Note that for all UDP object properties not mentioned here, MATLAB's default values are used. A sketch of this initialization is given below.<br />
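With the Instrument Control Toolbox, the initialization could look roughly as follows; the remote ports 5556 and 5554 are the standard AR.Drone ports from the SDK, and the exact script used in the project may differ.<br />
<pre>
% Sketch of the UDP object initialization (Instrument Control Toolbox).
droneIP = '192.168.1.1';

udpControl = udp(droneIP, 5556, 'LocalPort', 5556);   % AT command channel

udpNavdata = udp(droneIP, 5554, ...
    'LocalPort',       5554, ...
    'Timeout',         0.001, ...       % 1 ms; MATLAB expects seconds
    'InputBufferSize', 500, ...
    'ByteOrder',       'littleEndian');

fopen(udpControl);
fopen(udpNavdata);
</pre>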
<br />
After initializing the UDP objects, the Navdata stream must be initiated by following the steps in the picture below.<br />
<br />
[[File:navdata_initiation.png|thumb|centre|600px|Navdata stream initiation <ref name=sdk>[http://developer.parrot.com/docs/SDK2/ "AR.Drone SDK2"]</ref>]]<br />
<br />
Finally, a reference for the horizontal plane has to be set for the drone's internal control system by sending the command FTRIM. <ref name=sdk /><br />
<br />
=== Wrapper ===<br />
As seen in the initialization, the drone can be viewed as a block which expects a UDP packet containing a string as input and which returns an array of 500 bytes as output. To make communication with this block easier, a wrapper function is written that ensures that both the input and the output are doubles. More precisely, the input is a vector of four values between -1 and 1, where the first two represent the tilt in the front (x) and left (y) directions respectively, the third value is the speed in the vertical (z) direction and the fourth is the angular speed (psi) around the z-axis. The output of the block is listed below; a sketch of the command half of such a wrapper follows the list.<br />
* Battery percentage [%]<br />
* Rotation around x (roll) [°]<br />
* Rotation around y (pitch) [°]<br />
* Rotation around z (yaw) [°]<br />
* Velocity in x [m/s]<br />
* Velocity in y [m/s]<br />
* Position in z (altitude) [m]<br />
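The sketch below shows how the command half of such a wrapper could be built in MATLAB. The encoding of the float arguments as 32-bit integers follows the AR.Drone SDK; the mapping (and signs) of the input vector onto the roll/pitch/gaz/yaw arguments is only indicated here and has to follow the SDK conventions.<br />
<pre>
% Sketch of the command half of such a wrapper (illustrative).
% u: vector [tiltX tiltY vz vpsi], each in [-1, 1]; seq: running sequence number.
function sendMoveCommand(udpControl, u, seq)
    % The SDK encodes every float argument as the int32 that shares its
    % 32-bit IEEE-754 representation.
    enc   = @(x) typecast(single(x), 'int32');
    roll  = u(2);   % left/right tilt   (assumed mapping)
    pitch = u(1);   % front/back tilt   (assumed mapping)
    gaz   = u(3);   % vertical speed
    yaw   = u(4);   % angular speed around z
    cmd = sprintf('AT*PCMD=%d,1,%d,%d,%d,%d\r', seq, ...
                  enc(roll), enc(pitch), enc(gaz), enc(yaw));
    fprintf(udpControl, '%s', cmd);     % send to the control port
end
</pre>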
<br />
== Top-Camera ==<br />
The topcam is a camera fixed above the playing field. It is used to estimate the location and orientation of the drone, and this estimate is used as feedback for the drone to position itself at a desired location.<br />
The topcam can stream images to the laptop at a frame rate of 30 Hz, but searching the image for the drone (i.e. the image processing) might be slower. This is not a problem, since the positioning of the drone is neither perfect nor critical: as long as the target of interest (ball, players) is within the field of view of the drone, the performance is acceptable.<br />
<br />
== Ai-Ball ==<br />
<br />
<br />
== TechUnited TURTLE ==<br />
Originally, the Turtle was constructed and programmed to be a football-playing robot. The details on the mechanical design and the software developed for these robots can be found [http://www.techunited.nl/wiki/index.php?title=Hardware here] and [http://www.techunited.nl/wiki/index.php?title=Software here] respectively. <br />
<br />
<br />
For this project it is used as a referee. The software developed at TechUnited did not need any further extension, as part of the extensive existing code base could be reused to fulfill the role of the referee. This is explained in the section ''Software/Communication Protocol Implementation'' of this wiki page.<br />
<br />
== Player ==<br />
<br />
[[File:omnibot.jpeg|thumb|centre|600px|Omnibot with and without protection cover]]<br />
<br />
The robots that are used as football players are shown in the picture. On the right side of the picture, the robot is shown as it was delivered at the start of the project. It contains a Raspberry Pi, an Arduino and three motors (including encoders/controllers) to control three omni-wheels independently. To the left of this robot, a copy with a protective cover is shown. This cover protects the robots from damage when they collide; since one of the goals of the project is to detect collisions, it must be possible to collide more than once.<br />
<br />
To control the robot, Arduino code and a Python script to run on the Raspberry Pi are provided. The Python script can receive strings via UDP over Wi-Fi, processes these strings and sends commands to the Arduino via USB. To control the robot from a Windows device, MATLAB functions are implemented. Moreover, an Android application is developed so that the robot can be controlled with a smartphone. All the code can be found on GitHub.<ref name=git>[https://github.com/guidogithub/jamesbond/ "GitHub"]</ref><br />
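As an illustration of this interface, a command string could be sent from MATLAB over UDP as shown below. The IP address, port and string format are hypothetical; the actual protocol is defined by the Python script in the repository.<br />
<pre>
% Illustrative only: IP address, port and command string are hypothetical; the
% actual string format is defined by the Python script in the GitHub repository.
robot = udp('192.168.1.20', 5005);
fopen(robot);
fprintf(robot, '%s', 'vx:0.5 vy:0.0 w:0.0');
fclose(robot);
delete(robot);
</pre>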
<br />
= Supervisory Blocks =<br />
<br />
= Integration =<br />
== Hardware Inter-connections ==<br />
The Kinect and the omni-vision camera on the TechUnited Turtle allow the robot to take images of the ongoing game. With image-processing algorithms, useful information can be extracted from these images and a mapping of the game state, i.e.<br> <br />
1. the location of the Turtle,<br><br />
2. the location of the ball,<br><br />
3. the location of players<br><br />
and other entities present on the field can be computed. These locations are expressed in the global coordinate system fixed at the geometric center of the pitch. At TechUnited, this mapping is stored (in the memory of the Turtle) and maintained (updated regularly) in a real-time database (RTDB) called the WorldMap; details can be found on the [http://www.techunited.nl/wiki/index.php?title=Software software page] of TechUnited. In a RoboCup match, each participating robot maintains this database locally. Therefore, the Turtle used for the referee system has a locally stored global map of the environment. This information needed to be extracted from the Turtle and fused with the other algorithms and software developed for the drone. Those algorithms were created in MATLAB and Simulink, while the TechUnited software is written in C and uses Ubuntu as the operating system. The player robots from TechUnited communicate with each other via the UDP communication protocol, which is executed by the (wireless) comm block shown in the figure that follows.<br><br />
<center>[[File:communication.png|thumb|center|500px|UDP communication]]</center><br />
The basestation computer in this figure is a DevPC (development PC) from TechUnited which is used to communicate (i.e. send and receive data) with the Turtle. Of all the data received from the Turtle, only the part that best suited the needs of the project was picked. This data, as stated earlier, consists of the locations of the Turtle, the ball and the players.<br> <br />
A small piece of code was taken from the TechUnited code base. It consists of functions which extract the necessary information from the outputs generated by the image processing running on the Turtle, by listening to this information through the S-function [[sf_test_rMS_wMM.c]] created in MATLAB’s environment, and which send it to the main computer (running Windows) via the UDP Send and UDP Receive blocks in ''Simulink''. <br />
This is figuratively shown in the picture below.<br><br />
<center>[[File:interconnection.png|thumb|center|500px|Inter-connections of the hardware components]]</center><br />
The S-function behind the communication link between the Turtle and the Ubuntu PC was implemented in Simulink; the code can be accessed through the repository.<br />
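On the Windows side, the received packet could be unpacked roughly as in the sketch below. The IP address, port and packet layout are assumptions made for illustration only; the real layout is defined by the S-function and the UDP Send block in the repository.<br />
<pre>
% Illustrative receive side on the Windows PC. The packet layout below is an
% assumption made for this sketch.
rx = udp('192.168.1.30', 25000, 'LocalPort', 25001, 'InputBufferSize', 1024);
fopen(rx);
raw = fread(rx, 8, 'double');   % assumed: [turtle x y, ball x y, player1 x y, player2 x y]
turtlePos = raw(1:2);
ballPos   = raw(3:4);
playerPos = reshape(raw(5:8), 2, 2)';
fclose(rx); delete(rx);
</pre>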
<br />
=References=<br />
<references/></div>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
= Paradigm =<br />
There are many paradigms available for developing an architecture for robotic systems; an overview is given in Chapter 8 of the [http://download.springer.com/static/pdf/864/bok%253A978-3-540-30301-5.pdf?originUrl=http%3A%2F%2Flink.springer.com%2Fbook%2F10.1007%2F978-3-540-30301-5&token2=exp=1490798126~acl=%2Fstatic%2Fpdf%2F864%2Fbok%25253A978-3-540-30301-5.pdf%3ForiginUrl%3Dhttp%253A%252F%252Flink.springer.com%252Fbook%252F10.1007%252F978-3-540-30301-5*~hmac=10768f14783408dd794f24107129219bf0bd18a79067a7f204b450620c0608b5 Springer Handbook of Robotics]. In this project, the [http://cstwiki.wtb.tue.nl/images/20150429-EMC-TUe-CompositionPattern-nup.pdf paradigm] developed at KU Leuven in collaboration with TU Eindhoven was used. This paradigm is also followed at TechUnited. <br><br />
<center>[[File:pradigm.png|thumb|center|500px|Architecture paradigm]]</center><br />
This paradigm defines Tasks (blue block) based on the objectives set with respect to the context of the project, i.e. what needs to be done. The Skills (yellow block) then define the implementation, i.e. how these tasks will be completed. The Hardware (grey block), i.e. the robots which complete the tasks by implementing the skills, are the agents at the system architects' disposal; the choice of certain agents over others is driven by the system requirements, which in turn are influenced by a number of factors, especially the objectives and the context of the project. The agents gather information from the environment; this information is first processed (filtered, fused, etc.) and then stored in the World Model (green block), which allows it to be accessed afterwards. Finally, the visualization or user interface (orange block) is a tool to observe how the system sees the environment. <br><br />
Within the Task block, a sub-block is the Task Monitor (highlighted in red within the Task context block). This block was interpreted as a supervisor and a coordinator: it keeps an eye on the tasks and skills. Some tasks are completed by a series of skills; in such cases it becomes important to schedule the skills, as they may have to be executed sequentially or in parallel, and the system may need to track which skills have been completed and which need to be executed next. Task-control feedback/feedforward was not considered in much detail in the system architecture that was developed for this project. <br><br />
<center>[[File:architecture_1.png|thumb|center|600px|Architecture]]</center><br />
<br />
= Layered Approach =<br />
<br />
The system architecture derived from the concepts presented in the paradigm described above can be seen below. Every block but the GUI context is highlighted in this architecture. The Coordinator block interacts with the Supervisor to determine which skills need to be executed, and is assisted by the Skill Matcher in picking the hardware agent best suited to execute the required skill. The concept behind the skill matcher is explained in a later section. <br><br />
<br />
While developing the architecture, a layered approach was used. Design choices were made based on the objectives and on the requirements and constraints imposed by them. As shown in the figure below, starting from the top, the level of detail increases with each subsequent layer. <br><br />
<center>[[File:layered_Approach.png|thumb|center|450px|The three layers in the architecture]]</center><br />
<br />
= System Layer =<br />
Detecting predefined events and enforcing the corresponding rules were the two main system objectives that could be taken into account for this architecture. Two events, namely<br><br />
1. ball going out of pitch (B.O.O.P) and<br><br />
2. collision between players<br><br />
were considered to be in scope of this project. For detection and enforcement of these rules the Tasks and Skills blocks were filled as shown in the figure below.<br><br />
<center>[[File:Layer1.png|thumb|center|750px|System layer of the architecture]]</center><br />
= Approach Layer =<br />
Further, by taking into account some of the system requirements, more tasks and skills can be brought into scope. For example, if one of the requirements is ‘consistency’, it can be defined as ‘the autonomous referee system should be able to capture the gameplay dynamically’. This requirement might be further refined and result in the decision ‘the hardware must have multiple cameras in the field which allow effective capture of the active gameplay’, which could in turn be polished into having ‘multiple movable cameras in the field which allow effective capture of the active gameplay’. The current layer of the architecture represents an evolution towards implementation. This is reflected in the narrowing down of the elements in the Hardware block from sensors and/or devices to multiple cameras which can move. Though this statement is still vague, it is more refined than its predecessor.<br />
<center>[[File:Layered2.png|thumb|center|750px|The approach layer of the architecture]]</center><br />
<br />
= Implementation Layer =<br />
Referring back to the ''Three layers in the architecture'' diagram, as the requirements were further refined, several design choices were made regarding software and hardware. These choices were influenced not only by requirements but also by constraints; for example, the use of a drone and of the TechUnited Turtle was imposed on the project. Such constraints did not come into the picture in the previous layers. The current layer of the architecture is the implementation layer, as depicted in the figure.<br><br />
<center>[[File:Layer3.png|thumb|center|700px|Implementation layer in the architecture]]</center><br />
At this level of the architecture, one can write the software. The major decisions that were made here concerned the software, in particular the specific: <br><br />
1. image processing algorithms (=skills), <br><br />
2. sensor fusion algorithms (in the world model) and<br><br />
3. communication protocols (communication between different hardware components).<br><br />
This layer is also a realistic representation of the implementation the project team ended up with, i.e. only the detection-based tasks (and the corresponding skills).<br> Below is a screenshot of the Simulink implementation. It highlights that the software developed in this project corresponds closely to the paradigm used for the system architecture: the colors of the blocks here match those shown earlier in the description of the paradigm. <br><br />
<br />
<center>[[File:Simulink_Arch.png|thumb|center|700px|Simulink implementation]]</center><br />
<br />
The Supervisor, Coordinator and Skill Matcher were not part of the final implementation, but their conceptual design is presented here and can be further developed and included in future implementations. <br><br />
<br />
==Skill-Matcher==<br />
For each defined skill, the skill selector needs to determine which piece of hardware is able to perform this skill. Therefore, the capabilities of each piece of hardware need to be known and registered in a predefined standard format. These hardware capabilities can be compared to the hardware capability requirements (HCR; required in the skill framework) of each skill. This matching between hardware and skills can be done by the skill selector during initialization of the system. To do this in a smart way, the description of the hardware configuration and the HCR need to be in a similar format. One possible way of doing this is discussed below.<br><br />
<br />
===Hardware configuration files===<br />
On the approach level of the system architecture, it was decided to use moving robots with cameras to perform the system tasks. Different robots might have different capabilities; this should not hinder the system from operating correctly. It should preferably even be possible to swap robots without losing system functionality, which is why the hardware is abstracted in the architecture. Each piece of hardware needs to know its own performance and share this information with the skill selector. For the skill selector to function properly, the performance of the hardware needs to be defined in a structured way. It is therefore decided that every piece of hardware has its own configuration file, in which a wide range of quantifiable performance indices are stored. Because one robot can have multiple cameras, the performance can be split into a robot-specific part and one or more camera parts.<br />
<center>[[File:HCR.png|thumb|center|500px|Hardware configuration]]</center><br />
===Skill HCR===<br />
Some skills might require certain capabilities of the hardware (HCR) in order to perform as desired. To make clear to the skill matcher what these requirements are, each skill is prefixed with an HCR. This HCR should give bounds on the performance indicators in the configuration files; for instance, “to execute skill 1, a piece of hardware is required which can move in the x- and y-directions at a speed of at least 5 m/s”. The skill matcher can, in turn, look at each configuration file to determine which hardware can drive in the x- and y-directions at at least this speed. As a simple first implementation, the HCR can be implemented as a structure, consisting of a robot-capability structure and a camera-capability structure:<br><br />
'''Struct''' ''HCR_struct'' {RobotCap,CameraCap}<br><br />
<br />
'''Struct''' ''RobotCap''{<br><br />
DOF_matrix<br><br />
Occupancy_bound_representation<br><br />
Signal_device<br><br />
Number_of_cams<br><br />
}<br><br />
'''Struct''' ''CameraCap''{<br><br />
DOF_matrix<br><br />
Resolution<br><br />
Frame_rate<br><br />
Detection_box_representation<br><br />
}<br><br />
Each entry in the HCR structure has already been discussed. The DOF matrix holds all information about the possible degrees of freedom and their bounds in one matrix. The rows of this matrix represent the directions of the degrees of freedom; the first column says whether the robot has this DOF and the other columns give the upper and lower bounds. An example is given below:<br><br />
<center>[[File:skillHCR.png|thumb|center|500px|Skill-HCR]]</center><br />
This could reflect a robot which is able to move in the x- and y-directions and rotate around the ψ-angle. The movement in these directions is not limited in space, hence the infinite position bounds. The robot can drive at a maximum speed of 5 m/s and a maximum acceleration of 2 m/s^2 in both directions, and it can rotate at a maximum speed of 8 rad/s with a maximum acceleration of 4 rad/s^2. For this example the bounds are symmetric; this need not be the case for every robot. A similar matrix can be made for every camera on the robot. <br><br />
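As an illustration, such a configuration could be encoded in MATLAB as below. The column layout of the DOF matrix and the camera values are assumptions made for this sketch; the actual configuration-file format may differ.<br />
<pre>
% One possible MATLAB encoding of the example above. The column layout of the
% DOF matrix is assumed to be [hasDOF posMin posMax velMax accMax].
%              has  posMin posMax velMax accMax
DOF_matrix = [  1   -Inf    Inf    5      2   ;   % x
                1   -Inf    Inf    5      2   ;   % y
                0    0      0      0      0   ;   % z
                0    0      0      0      0   ;   % phi
                0    0      0      0      0   ;   % theta
                1   -Inf    Inf    8      4   ];  % psi

RobotCap  = struct('DOF_matrix', DOF_matrix, ...
                   'Occupancy_bound_representation', [], ...
                   'Signal_device', true, ...
                   'Number_of_cams', 1);
CameraCap = struct('DOF_matrix', zeros(6, 5), ...
                   'Resolution', [640 360], ...       % illustrative values
                   'Frame_rate', 30, ...
                   'Detection_box_representation', []);
HCR_struct = struct('RobotCap', RobotCap, 'CameraCap', CameraCap);
</pre>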
===Hardware matching===<br />
Every skill that is implemented can be numbered, either predefined or during initialization of the system. Each hardware component and its cameras can be numbered as well. During initialization, the skill selector can check for every skill what the HCR are for that skill. It can then compare this to the configuration files of every hardware component. Based on this comparison, the hardware component can either perform the skill or not. A matrix can be constructed that stores which skill can be executed by which component. An example of such a matching matrix is given below:<br><br />
<center>[[File:HWMatching.png|thumb|center|500px|HW Matching matrix]]</center><br />
<br />
From this matrix it can be seen that there are six skills in total and three agents available. Agent 1 has two cameras with which it can perform the entire set of skills. Agent 2 has only one camera and can only perform skills 1 and 4 with it. Agent 3 can perform all skills with camera 3 and can even perform some skills with multiple cameras. From this matrix it becomes clear that all skills can be performed with this set of agents. Some skills, like skill 6, can only be performed by two cameras, while skill 1 is very well covered. The matrix can be used to determine whether the set of agents is sufficient to cover all skills. It could also be used to determine which robot is most suited for which role, e.g. main referee or line referee. The matrix is constructed during initialization of the system. However, agents may be added to or removed from the system; therefore, the skill selector should frequently check which agents are still in the system and, if necessary, update the matching matrix. <br><br />
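A possible sketch of how the matching matrix could be built from the skill HCRs and the hardware configuration files is given below; it only compares the robot DOF and velocity entries and assumes the column layout used in the earlier sketch.<br />
<pre>
% Sketch of the matching step (illustrative). A full check would also cover the
% camera capabilities and the remaining entries of the configuration files.
function M = buildMatchingMatrix(skillHCR, hardwareCfg)
    nSkills = numel(skillHCR);          % cell array of HCR structs, one per skill
    nAgents = numel(hardwareCfg);       % cell array of configuration structs
    M = false(nSkills, nAgents);
    for s = 1:nSkills
        for a = 1:nAgents
            req = skillHCR{s}.RobotCap.DOF_matrix;
            cap = hardwareCfg{a}.RobotCap.DOF_matrix;
            needed = req(:,1) == 1;     % DOFs this skill requires
            M(s,a) = all(cap(needed,1) == 1) && ...
                     all(cap(needed,4) >= req(needed,4));   % velocity bound (assumed column 4)
        end
    end
end
</pre>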
===Role restrictions===<br />
If there are multiple agents in the system, it could be necessary to assign roles to each of them. This role assignment could be determined beforehand, for instance by specifying a preferred role in the configuration file of each agent. Another way to assign roles can be based on the matching matrix. Based on the skills each agent can perform the roles are divided. Some roles might impose restrictions on the agent. These restrictions can be interpreted in the same way as the HCR of the skill-framework. Where the HCR imposes some minima on the hardware requirements, the role restriction (RR) sets some maximal allowed values for the hardware. For example, the line referee is only allowed to move alongside one end of the field with a maximum linear velocity of 3 m/s. The DOF-matrix of the RR structure might look like:<br><br />
<center>[[File:roleRestriction.png|thumb|center|500px|Role restriction matrix]]</center><br />
Suppose the field is 12 meters long and 9 meters wide. The agent which is performing the role described by this RR is only able to move next to the field, with a margin of half a meter, over the entire length of the field. The maximal absolute velocity at which it is allowed to do this is 3 m/s.<br><br />
<br />
===Role matching===<br />
The RR also restricts the skills which an agent with a certain role is allowed to do. If it is decided which agent will fulfill a certain role, the matching matrix needs to be updated. This update can only remove entries in the matching matrix, since the restriction can never add functionality to the hardware. This process raises a problem: if the role assigning is done based on the matching matrix and the RR changes this matrix, the role assigning might change. This will probably call for an iterative approach to determine which agent is best fitted for a role in order to get a good coverage of all the skills.<br />
<br />
==Supervisor (and coordinator)==<br />
The supervisor is responsible for monitoring the system tasks and for coordinating the subsystems with respect to these tasks. This involves dynamically distributing subtasks amongst the subsystems in an efficient and effective manner. The operation of the supervisor can be divided into a default process flow and a set of alternative process flows. The latter are activated by a certain trigger occurring during the default process, and can be terminated by another trigger. <br><br />
The top half of Figure 1 shows the default process flow of the supervisor. For each subsystem, the process flow is divided into two sections: one (the colored rectangle) representing processes taking place on the subsystem hardware, and one (the black rectangle) representing the processes taking place on a separate processing unit. These processing units can be one and the same, or actually separated, in which case the world model data needs to be synchronized between these units.<br><br />
<center>[[File:supervisor_Flow.png|thumb|center|600px|Supervisor process flow]]</center><br />
Note that the central process in this default flow is “Update World Model”, indicating that the entire default process flow revolves around maintaining an accurate World Model (WM). Information, or lack of it, in this WM can then trigger an alternative process flow. The lower half of Figure 1 shows these alternative flows, five in total. Table 1 lists the names, descriptions and activating and terminating triggers for each alternative flow.<br />
<center>[[File:table_Supervisor.png|thumb|center|600px|Alternative flows, including descriptions and activation and termination triggers.]]</center><br />
===Supervisor Requirements===<br />
<br />
NOTE:<br> '''&''' represents an '''AND''' relationship between requirements. <br> '''||''' represents an '''OR''' relationship between requirements.<br />
====Skills====<br />
<br />
=====Basic=====<br />
• Move drone<br><br />
• Move turtle<br><br />
• Take snapshot with top camera<br><br />
• Take snapshot with Ai-Ball<br><br />
• Take snapshot with Kinect<br><br />
• Take snapshot with omnivision<br><br />
• Whistle<br><br />
=====Advanced=====<br />
• Detect lines<br><br />
• Detect regions<br><br />
• Search ball<br><br />
• Locate ball<br><br />
• Determine whether ball is in/out<br><br />
• Search players<br><br />
• Locate players<br><br />
• Detect space between players<br><br />
• Locate drone<br><br />
• Plan paths for drone<br><br />
• Locate turtle<br><br />
• Plan paths for turtle<br><br />
<br />
====Control automata====<br />
<center>[[File:control_Automata_1.png|thumb|center|600px|]]</center><br />
<center>[[File:control_Automata_2.png|thumb|center|600px|]]</center><br />
<br />
====Observer automata====<br />
<center>[[File:Observer_Automata_1.png|thumb|center|600px|]]</center><br />
<center>[[File:Observer_Automata_2.png|thumb|center|600px|]]</center><br />
====Drone movement requirements====<br />
=====TAKEOFF=====<br />
'''&'''Battery percentage above 20%<br><br />
'''&'''Drone position is known in world model<br><br />
<br />
=====LAND=====<br />
'''||'''Battery percentage not above 20%<br><br />
'''||'''Drone is outside field<br><br />
<br />
=====(ENABLE REFERENCE TO) FLY TO HEIGHT OF 2 METER=====<br />
'''&'''Drone is (approximately) at 1 meter<br><br />
<br />
=====MOVE TOWARDS CENTER OF FIELD=====<br />
'''&'''Drone is close to the outer line of the field<br><br />
<br />
====Path planning Requirements====<br />
=====PLAN PATH FOR DRONE=====<br />
'''&'''Drone is (approximately) at reference height<br />
<br />
====PLAN PATH FOR TURTLE====<br />
'''&'''Turtle is at the side-line<br />
<br />
====Whistle Requirements====<br />
'''||'''Ball (in snapshot) is located in region with label “out”<br><br />
'''||'''Two objects (in snapshot) are touching (and visible as an blob)<br />
<br />
====Enable/Disable detectors====<br />
=====LINE DETECTOR'''*'''=====<br />
'''&'''Lines are expected to be visible in a snapshot<br><br />
'''*'''Enabling the algorithm for detecting lines will also enable the algorithm to label the regions separated by the detected lines.<br><br />
<br />
=====COLLISION DETECTOR=====<br />
'''&'''Two players are expected to be visible in a snapshot<br><br />
<br />
=====LOCATE BALL=====<br />
'''&''' Ball position is known in world model<br><br />
<br />
=====LOCATE PLAYERS=====<br />
'''&''' All player positions are known in world model<br></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=System_Architecture_MSD16&diff=39678System Architecture MSD162017-05-04T19:22:14Z<p>Asinha: /* Supervisor Requirements */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
= Paradigm =<br />
There are many paradigms available for developing an architecture for robotic systems; they are described in Chapter 8 of the [http://download.springer.com/static/pdf/864/bok%253A978-3-540-30301-5.pdf?originUrl=http%3A%2F%2Flink.springer.com%2Fbook%2F10.1007%2F978-3-540-30301-5&token2=exp=1490798126~acl=%2Fstatic%2Fpdf%2F864%2Fbok%25253A978-3-540-30301-5.pdf%3ForiginUrl%3Dhttp%253A%252F%252Flink.springer.com%252Fbook%252F10.1007%252F978-3-540-30301-5*~hmac=10768f14783408dd794f24107129219bf0bd18a79067a7f204b450620c0608b5 Springer Handbook of Robotics]. In this project, the [http://cstwiki.wtb.tue.nl/images/20150429-EMC-TUe-CompositionPattern-nup.pdf paradigm] developed at KU Leuven in collaboration with TU Eindhoven was used. This paradigm is also followed at TechUnited. <br><br />
<center>[[File:pradigm.png|thumb|center|500px|Architecture paradigm]]</center><br />
This paradigm defines Tasks (blue block) using the objectives set with respect to the context of the project, i.e. what needs to be done. The Skills (yellow block) then define the implementation, i.e. how these tasks will be completed. The hardware (grey block), or the robots that will complete the tasks by implementing the skills, comprises the agents available at the system architects’ disposal; the choice to pick certain agents over others is influenced by the system requirements, which in turn follow from a number of factors, especially the objectives and the context of the project. The agents gather information from the environment; this information is first processed (filtered, fused, etc.) and then stored in the world model (green block), which allows it to be accessed afterwards. Finally, the visualization or user interface (orange block) is a tool to observe how the system sees the environment. <br><br />
Within the Task block, a sub-block is the Task Monitor (highlighted in red within the Task context block). This block was interpreted as a supervisor and a coordinator: it keeps an eye on the tasks and skills. Some tasks are completed by a series of skills; in such a case it becomes important to schedule the skills, as these may have to be executed either sequentially or in parallel, and the system may need to track which skills have been completed and which need to be executed next. Task control feedback/feedforward was not considered in much detail in the system architecture developed for this project. <br><br />
<center>[[File:architecture_1.png|thumb|center|600px|Architecture]]</center><br />
<br />
= Layered Approach =<br />
<br />
The system architecture derived from the concepts presented in the above-described paradigm can be seen below. Every block but the GUI context is highlighted in this architecture. The Coordinator block interacts with the Supervisor to determine which skills need to be executed and is assisted by the Skill Matcher in picking the hardware agent best suited to execute the necessary skill. The concept behind the skill matcher is explained in a later section. <br><br />
<center>[[File:architecture_1.png|thumb|center|600px|Architecture]]</center><br />
While developing the architecture, a layered approach was used. Based on the objectives, and the requirements/constraints imposed by them, design choices were made. As shown in the Figure below starting from the top, with each subsequent layer the level of detail increases. <br><br />
<center>[[File:layered_Approach.png|thumb|center|450px|The three layers in the architecture]]</center><br />
<br />
= System Layer =<br />
Detecting predefined events and enforcing the corresponding rules were the two main system objectives that could be taken into account for this architecture. Two events, namely<br><br />
1. ball going out of pitch (B.O.O.P) and<br><br />
2. collision between players<br><br />
were considered to be within the scope of this project. For the detection and enforcement of these rules, the Tasks and Skills blocks were filled as shown in the figure below.<br><br />
<center>[[File:Layer1.png|thumb|center|750px|System layer of the architecture]]</center><br />
= Approach Layer =<br />
Further, by taking into account some of the system requirements, one can bring more tasks and skills into scope. For example, if one of the requirements is ‘consistency’, it can be defined as, ‘the autonomous referee system should be able to capture the gameplay dynamically’. This requirement might be further refined and result in the decision, ‘the hardware must have multiple cameras in the field which allow effective capture of the active gameplay’. Further, this decision could be polished into having ‘multiple movable cameras in the field which allow effective capture of the active gameplay’. The current layer in the architecture represents an evolution towards implementation. This is reflected in the narrowing down of the elements in the Hardware block from sensors and/or devices to multiple cameras which can move. Though this statement is still vague, it is more refined than its predecessor.<br />
<center>[[File:Layered2.png|thumb|center|750px|The approach layer of the architecture]]</center><br />
<br />
= Implementation Layer =<br />
Referring back to the ''Three layers in the architecture'' diagram, as the requirements were further refined, several design choices were made on the software and hardware. These choices were influenced not only by requirements but also by constraints: for example, the use of a drone and the TechUnited TURTLE was imposed on the project. These did not come into the picture in the previous layers. The current layer of the architecture is the Implementation Layer, as depicted in the figure.<br><br />
<center>[[File:Layer3.png|thumb|center|700px|Implementation layer in the architecture]]</center><br />
At this level of the architecture, one can write the software. The major decisions made here concerned the software, such as the specific: <br><br />
1. image processing algorithms (=skills), <br><br />
2. sensor fusion algorithms (in the world model) and<br><br />
3. communication protocols (communication between different hardware components).<br><br />
This layer is also a realistic representation of the actual implementation the project team ended up with, i.e. only the detection-based tasks (and the corresponding skills).<br> Below is a screenshot of the Simulink implementation. What is highlighted here is that the software developed in this project corresponds closely with the paradigm used for the system architecture. The congruity between the colors of the blocks here and the ones shown earlier in the description of the paradigm is evident. <br><br />
<br />
<center>[[File:Simulink_Arch.png|thumb|center|700px|Simulink implementation]]</center><br />
<br />
The Supervisor, Coordinator and Skill-Matcher were not part of the final implementation, but their conceptual design is presented here and can be further worked on and included in future implementations. <br><br />
<br />
==Skill-Matcher==<br />
For each defined skill, the skill-matcher needs to determine which piece of hardware is able to perform this skill. Therefore, the capabilities of each piece of hardware need to be known and registered in a predefined standard format. These hardware capabilities can be compared to the hardware capability requirements (HCR; required in the skill framework) of each skill. This matching between hardware and skills can be done by the skill-matcher during initialization of the system. To do this in a smart way, the description of the hardware configuration and the HCR need to be in a similar format. One possible way of doing this is discussed below.<br><br />
<br />
===Hardware configuration files===<br />
On the approach level of the system architecture, it was decided to use moving robots with cameras to perform the system tasks. Different robots might have different capabilities. This should not hinder the system from operating correctly; it should preferably even be possible to swap robots without losing system functionality. This is why the hardware is abstracted in the architecture. Each piece of hardware needs to know its own performance and share this information with the skill-matcher. For the skill-matcher to function properly, the performance of the hardware needs to be defined in a structured way. To this end, it was decided that every piece of hardware needs to have its own configuration file. In this file, a wide range of quantifiable performance indices is stored. Because one robot can have multiple cameras, the performance can be split into a robot-specific part and one or more camera parts.<br />
<center>[[File:HCR.png|thumb|center|500px|Hardware configuration]]</center><br />
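One conceivable form of such a configuration file is a small MATLAB struct per agent. The sketch below is purely illustrative: the field names and the numbers for the drone and its AI-ball camera are assumptions, not the format or values actually used in the project.<br><br />
<pre>
% Hypothetical configuration "file" for one agent: a drone carrying one AI-ball camera.
config.robot.name       = 'drone';
config.robot.maxLinVel  = 5;          % [m/s]   maximum linear velocity
config.robot.maxLinAcc  = 2;          % [m/s^2] maximum linear acceleration
config.robot.hasWhistle = false;      % signalling device present?
config.robot.numCams    = 1;

config.cam(1).name       = 'AI-ball';
config.cam(1).resolution = [640 480]; % [pixels], assumed value
config.cam(1).frameRate  = 30;        % [fps], assumed value
</pre>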
===Skill HCR===<br />
Some skills might require certain capabilities of the hardware (HCR) in order to perform as desired. To make it clear to the skill-matcher what these requirements are, each skill is prefixed with an HCR. This HCR should give bounds on the performance indicators in the configuration files. For instance: “to execute skill 1, a piece of hardware is required which can move in the x- and y-direction with at least 5 m/s”. The skill-matcher can, in turn, look at each configuration file in order to determine which hardware can drive in the x- and y-direction with at least this speed. As a simple first implementation, the HCR can be implemented as a structure. This structure consists of a robot-capability structure and a camera-capability structure:<br><br />
'''Struct''' ''HCR_struct'' {RobotCap,CameraCap}<br><br />
<br />
'''Struct''' ''RobotCap''{<br><br />
DOF_matrix<br><br />
Occupancy_bound_representation<br><br />
Signal_device<br><br />
Number_of_cams<br><br />
}<br><br />
'''Struct''' ''CameraCap''{<br><br />
DOF_matrix<br><br />
Resolution<br><br />
Frame_rate<br><br />
Detection_box_representation<br><br />
}<br><br />
Each entry in the HCR structure is already discussed. The DOF-matrix holds all information about the possible degrees of freedom and their bounds in one matrix. The rows of this matrix will represent the direction of the degree of freedom. The first column will say whether the robot has this DOF and the other columns will give upper and lower bounds. One example is given below:<br><br />
<center>[[File:skillHCR.png|thumb|center|500px|Skill-HCR]]</center><br />
This could reflect a robot which is able to move in the x- and y-direction and rotate around the ψ-angle. The movement in these directions is not limited in space, hence the infinite position bounds. The robot is able to drive with a maximum speed of 5 m/s and a maximum acceleration of 2 m/s^2 in both directions. It can rotate with a maximum speed of 8 rad/s and a maximum acceleration of 4 rad/s^2. For this example the bounds are symmetric; this need not be the case for every robot. A similar matrix can be made for every camera on the robot. <br><br />
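This example can be written out in MATLAB as follows. The five-column encoding (DOF present, position bounds, maximum velocity, maximum acceleration) and the comparison against a skill HCR are assumptions of this sketch; it only illustrates how an HCR could be checked against a configuration file.<br><br />
<pre>
% Rows: x, y, psi.  Columns: [hasDOF, posMin, posMax, velMax, accMax] (assumed encoding).
DOF_matrix = [ 1  -Inf  Inf  5  2 ;    % x:   unlimited position, 5 m/s, 2 m/s^2
               1  -Inf  Inf  5  2 ;    % y:   unlimited position, 5 m/s, 2 m/s^2
               1  -Inf  Inf  8  4 ];   % psi: 8 rad/s, 4 rad/s^2

% Hypothetical HCR of a skill that needs at least 5 m/s in x and y:
HCR.RobotCap.DOF_matrix = [ 1  -Inf  Inf  5  0 ;
                            1  -Inf  Inf  5  0 ;
                            0     0    0  0  0 ];

% The robot satisfies the HCR if it offers every required DOF and meets the velocity bounds.
req       = HCR.RobotCap.DOF_matrix(:,1) == 1;
satisfies = all(DOF_matrix(req,1) == 1) && ...
            all(DOF_matrix(req,4) >= HCR.RobotCap.DOF_matrix(req,4));
</pre>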
===Hardware matching===<br />
Every skill that is implemented can be numbered, either beforehand or during initialization of the system. Each hardware component and its cameras can be numbered as well. During initialization, the skill-matcher can check, for every skill, what its HCR is. It can then compare this to the configuration files of every hardware component. Based on this comparison, a hardware component either can or cannot perform the skill. A matrix can be constructed that stores which skill can be executed by which component. An example of such a matching matrix is given below:<br><br />
<center>[[File:HWMatching.png|thumb|center|500px|HW Matching matrix]]</center><br />
<br />
From this matrix it can be seen that there exist six skills in total and three agents are available. Agent 1 has two cameras which it can use to perform the entire set of skills. Agent 2 has only one camera and with this camera it can only perform skills 1 and 4. Agent 3 can perform all skills with camera 3 and can even perform some skills with multiple cameras. From this matrix it becomes clear that all skills can be performed with this set of agents. Some skills, like skill 6, can only be performed by two cameras, while skill 1 is very well covered. This matrix can be used to determine whether the set of agents is sufficient to cover all skills. It could also be used to determine which robot is most suited for which role, e.g. main referee or line referee. This matrix is constructed during initialization of the system. However, it could happen that an agent is added to or removed from the system. Therefore, the skill-matcher should frequently check which agents are still in the system and, if necessary, update the matching matrix. <br><br />
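A minimal sketch of how such a matching matrix could be filled during initialization is given below. For readability the comparison is reduced to a single performance index (required camera frame rate); the actual skill-matcher would compare the full HCR against the full configuration file, and all numbers here are made up.<br><br />
<pre>
% Hypothetical example: each skill only demands a minimum camera frame rate.
skillReq = [15 30 60];                 % required frame rate per skill [fps]
camRate  = [30 25 60 30];              % available frame rate per camera [fps]

M = zeros(numel(skillReq), numel(camRate));   % matching matrix: skills x cameras
for s = 1:numel(skillReq)
    for c = 1:numel(camRate)
        M(s,c) = camRate(c) >= skillReq(s);   % 1 if camera c can execute skill s
    end
end
disp(M)                                % row s, column c: can camera c perform skill s?
</pre>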
===Role restrictions===<br />
If there are multiple agents in the system, it could be necessary to assign roles to each of them. This role assignment could be determined beforehand, for instance by specifying a preferred role in the configuration file of each agent. Another way to assign roles is based on the matching matrix: based on the skills each agent can perform, the roles are divided. Some roles might impose restrictions on the agent. These restrictions can be interpreted in the same way as the HCR of the skill framework. Where the HCR imposes some minima on the hardware requirements, the role restriction (RR) sets some maximal allowed values for the hardware. For example, the line referee is only allowed to move alongside one end of the field with a maximum linear velocity of 3 m/s. The DOF-matrix of the RR structure might look like:<br><br />
<center>[[File:roleRestriction.png|thumb|center|500px|Role restriction matrix]]</center><br />
Suppose the field is 12 meters long and 9 meters wide. The agent which is performing the role described by this RR is only able to move next to the field, with a margin of half a meter, over the entire length of the field. The maximal absolute velocity at which it is allowed to do this is 3 m/s.<br><br />
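Using the same assumed DOF-matrix encoding as in the sketch above, the restriction for this line referee could be written as follows; the sign conventions (field centred at the origin, the restricted strip on the negative-y side) are illustrative assumptions.<br><br />
<pre>
% Rows: x, y, psi.  Columns: [hasDOF, posMin, posMax, velMax, accMax] (assumed encoding).
% Line referee: full 12 m field length, a 0.5 m strip just outside the 9 m wide field, max 3 m/s.
RR.DOF_matrix = [ 1  -6.0   6.0    3   Inf ;   % x:   along the side line
                  1  -5.0  -4.5    3   Inf ;   % y:   half-meter margin outside the field
                  1  -Inf   Inf  Inf   Inf ];  % psi: rotation not restricted
</pre>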
<br />
===Role matching===<br />
The RR also restricts the skills which an agent with a certain role is allowed to do. If it is decided which agent will fulfill a certain role, the matching matrix needs to be updated. This update can only remove entries in the matching matrix, since the restriction can never add functionality to the hardware. This process raises a problem: if the role assigning is done based on the matching matrix and the RR changes this matrix, the role assigning might change. This will probably call for an iterative approach to determine which agent is best fitted for a role in order to get a good coverage of all the skills.<br />
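A tiny sketch of this pruning step, continuing the hypothetical matching matrix M from the example above: the role restriction can only clear entries, never add them.<br><br />
<pre>
% Hypothetical mask of skill/camera combinations still allowed under the role restriction.
allowedUnderRR = [1 1 1 1 ;
                  1 1 1 1 ;
                  1 0 1 1];            % e.g. skill 3 may no longer use camera 2
M_role = M & allowedUnderRR;           % entries can only change from 1 to 0
</pre>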
<br />
==Supervisor (and coordinator)==<br />
The supervisor is responsible for monitoring the system tasks and for coordinating the subsystems with respect to these tasks. This involves dynamically distributing subtasks amongst the subsystems in an efficient and effective manner. The operation of the supervisor can be divided into a default process flow and a set of alternative process flows. The latter are activated by a certain trigger occurring during the default process, and can be terminated by another trigger. <br><br />
The top half of Figure 1 shows the default process flow of the supervisor. For each subsystem, the process flow is divided into two sections: one (the colored rectangle) representing processes taking place on the subsystem hardware, and one (the black rectangle) representing the processes taking place on a separate processing unit. These processing units can be one and the same, or actually separated, in which case the world model data needs to be synchronized between these units.<br><br />
<center>[[File:supervisor_Flow.png|thumb|center|600px|Supervisor process flow]]</center><br />
Note that the central process in this default flow is “Update World Model”, indicating that the entire default process flow revolves around maintaining an accurate World Model (WM). Information, or lack of it, in this WM can then trigger an alternative process flow. The lower half of Figure 1 shows these alternative flows, five in total. Table 1 lists the names, descriptions and activating and terminating triggers for each alternative flow.<br />
<center>[[File:table_Supervisor.png|thumb|center|600px|Alternative flows, including descriptions and activation and termination triggers.]]</center><br />
===Supervisor Requirements===<br />
<br />
NOTE:<br> '''&''' represents an '''AND''' relationship between requirements. <br> '''||''' represents an '''OR''' relationship between requirements.<br />
====Skills====<br />
<br />
=====Basic=====<br />
• Move drone<br><br />
• Move turtle<br><br />
• Take snapshot with top camera<br><br />
• Take snapshot with AI-ball<br><br />
• Take snapshot with Kinect<br><br />
• Take snapshot with omnivision<br><br />
• Whistle<br><br />
=====Advanced=====<br />
• Detect lines<br><br />
• Detect regions<br><br />
• Search ball<br><br />
• Locate ball<br><br />
• Determine whether ball is in/out<br><br />
• Search players<br><br />
• Locate players<br><br />
• Detect space between players<br><br />
• Locate drone<br><br />
• Plan paths for drone<br><br />
• Locate turtle<br><br />
• Plan paths for turtle<br><br />
<br />
====Control automata====<br />
<center>[[File:control_Automata_1.png|thumb|center|600px|]]</center><br />
<center>[[File:control_Automata_2.png|thumb|center|600px|]]</center><br />
<br />
====Observer automata====<br />
<center>[[File:Observer_Automata_1.png|thumb|center|600px|]]</center><br />
<center>[[File:Observer_Automata_2.png|thumb|center|600px|]]</center><br />
====Drone movement requirements====<br />
=====TAKEOFF=====<br />
'''&'''Battery percentage above 20%<br><br />
'''&'''Drone position is known in world model<br><br />
<br />
=====LAND=====<br />
'''||'''Battery percentage not above 20%<br><br />
'''||'''Drone is outside field<br><br />
<br />
=====(ENABLE REFERENCE TO) FLY TO HEIGHT OF 2 METER=====<br />
'''&'''Drone is (approximately) at 1 meter<br><br />
<br />
=====MOVE TOWARDS CENTER OF FIELD=====<br />
'''&'''Drone is close to the outer line of the field<br><br />
<br />
====Path planning Requirements====<br />
=====PLAN PATH FOR DRONE=====<br />
'''&'''Drone is (approximately) at reference height<br />
<br />
=====PLAN PATH FOR TURTLE=====<br />
'''&'''Turtle is at the side-line<br />
<br />
====Whistle Requirements====<br />
'''||'''Ball (in snapshot) is located in region with label “out”<br><br />
'''||'''Two objects (in snapshot) are touching (and visible as one blob)<br />
<br />
====Enable/Disable detectors====<br />
=====LINE DETECTOR'''*'''=====<br />
'''&'''Lines are expected to be visible in a snapshot<br><br />
'''*'''Enabling the algorithm for detecting lines will also enable the algorithm to label the regions separated by the detected lines.<br><br />
<br />
=====COLLISION DETECTOR=====<br />
'''&'''Two players are expected to be visible in a snapshot<br><br />
<br />
=====LOCATE BALL=====<br />
'''&''' Ball position is known in world model<br><br />
<br />
=====LOCATE PLAYERS=====<br />
'''&''' All player positions are known in world model<br></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=System_Architecture_MSD16&diff=39677System Architecture MSD162017-05-04T19:19:33Z<p>Asinha: /* PLAN PATH FOR DRONE */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
= Paradigm =<br />
There are many paradigms available for developing architecture for robotic systems. These paradigms are available in Chapter 8 in the [http://download.springer.com/static/pdf/864/bok%253A978-3-540-30301-5.pdf?originUrl=http%3A%2F%2Flink.springer.com%2Fbook%2F10.1007%2F978-3-540-30301-5&token2=exp=1490798126~acl=%2Fstatic%2Fpdf%2F864%2Fbok%25253A978-3-540-30301-5.pdf%3ForiginUrl%3Dhttp%253A%252F%252Flink.springer.com%252Fbook%252F10.1007%252F978-3-540-30301-5*~hmac=10768f14783408dd794f24107129219bf0bd18a79067a7f204b450620c0608b5 Springer Handbook of Robotics]. In this project the [http://cstwiki.wtb.tue.nl/images/20150429-EMC-TUe-CompositionPattern-nup.pdf paradigm] developed at KU Leuven in collaboration with TU Eindhoven and was used. This paradigm is followed at TechUnited. <br><br />
<center>[[File:pradigm.png|thumb|center|500px|Architecture paradigm]]</center><br />
This paradigm defines Tasks (blue block) using the objectives set with respect to the context of the project, i.e. what needs to be done. The Skills (yellow block) then define the implementation, i.e. how to these tasks will be completed. The hardware (grey block) or the robots, i.e. the ones who will complete the tasks by implementing the skills are the agents available at the system-architects’ disposal and the choice made to pick certain agents over the other is influenced by system requirements. The system requirements are influenced by a number of factors especially the objectives and the context of the project. Coming back to the agents, they gather information from the environment and this information is first processed (filtered, fused etc.) and then stored in the world model (green block) which allows it to be accessed afterwards. Finally the visualization or the user interface (orange block) is a tool to observe how the system sees the environment. <br><br />
In the task block, a sub-block is the Task Monitor (highlighted in red within the Task context block). This block was interpreted as a supervisor and a coordinator. It keeps an eye on the tasks and skills. It is possible to have some tasks which are a completed by a series of skills. In such a case it becomes important to schedule skills as these could either have to be executed sequentially or parallelly. It might become important for the system to track which skills have been completed and which need to be executed next. Task control feedback/ feedforward was not taken into consideration in much detail in the system architecture that was developed for this project. <br><br />
<center>[[File:architecture_1.png|thumb|center|600px|Architecture]]</center><br />
<br />
= Layered Approach =<br />
<br />
The system architecture derived from the concepts presented in the above-described paradigm can be seen below. Every block but the GUI context is highlighted in this architecture. The Coordinator block interacts with the Supervisor to determine which skills need to be executed and is assisted by the Skill Matcher in picking the right hardware agent best suited to execute the necessary skill. The concept behind the skill matcher is explained in a later section. <br><br />
<center>[[File:architecture_1.png|thumb|center|600px|Architecture]]</center><br />
While developing the architecture, a layered approach was used. Based on the objectives, and the requirements/constraints imposed by them, design choices were made. As shown in the Figure below starting from the top, with each subsequent layer the level of detail increases. <br><br />
<center>[[File:layered_Approach.png|thumb|center|450px|The three layers in the architecture]]</center><br />
<br />
= System Layer =<br />
Detecting predefined events and enforcing the corresponding rules were the two main system objectives that could be taken into account for this architecture. Two events, namely<br><br />
1. ball going out of pitch (B.O.O.P) and<br><br />
2. collision between players<br><br />
were considered to be in scope of this project. For detection and enforcement of these rules the Tasks and Skills blocks were filled as shown in the figure below.<br><br />
<center>[[File:Layer1.png|thumb|center|750px|System layer of the architecture]]</center><br />
= Approach Layer =<br />
Further, by taking into account some of the system requirements one can bring more tasks and skills into the scope. For example, if one of the requirement is ‘consistency’, it can be defined as, ‘the autonomous referee system should be able to capture the gameplay dynamically’. This requirement might be further refined and result in the decision, ‘the hardware must have multiple cameras in the field which allow effective capture the active gameplay ’. Further, this decision could be polished into having ‘multiple movable cameras in the field which allow effective capture the active gameplay ’. The current layer in the architecture represents an evolution towards implementation. This is reflected in the narrowing down of the elements in the Hardware Block from Sensors and/or devices to multiple cameras which could move. Though this statement is still vague, it is more refined than its predecessor.<br />
<center>[[File:Layered2.png|thumb|center|750px|The approach layer of the architecture]]</center><br />
<br />
= Implementation Layer =<br />
Referring back to the ''Three layers in the architecture'' diagram as the requirements were further refined, several design choices were made on the software and hardware. These choices were influenced not only by requirements but also by constraints. For example, using a drone and the TechUnited turtle were imposed on the project. These did not come into the picture in the previous layers. The current layer of the architecture is the Implementation Layer as depicted in the figure.<br><br />
<center>[[File:Layer3.png|thumb|center|700px|Implementation layer in the architecture]]</center><br />
At this level of the architecture, one can write the software. The major decisions that were made here were regarding the software such as specific: <br><br />
1. image processing algorithms (=skills), <br><br />
2. sensor fusion algorithms (in the world model) and<br><br />
3. communication protocols (communication between different hardware components).<br><br />
This layer is also the realistic representation of the actual implementation the project team ended up with i.e. only the detection based tasks (and the corresponding skills).<br> Below is the screen-shot of the Simulink implementation. What is highlighted here is the fact that the software developed in this project had a proper correspondence with the paradigm used of the system architecture. The congruity in the colors of the blocks here and the ones that have shown earlier in the description of the paradigm is evident. <br><br />
<br />
<center>[[File:Simulink_Arch.png|thumb|center|700px|Simulink implementation]]</center><br />
<br />
The Supervisor, coordinator and the skill-matcher were not in the final implementation. Bu their conceptual inception is presented here and can be further worked on and included in future implementations. <br><br />
<br />
==Skill-Matcher==<br />
For each defined skill, the skill-selector needs to determine which piece of hardware is able to perform this skill. Therefore the capabilities of each piece of hardware needs to be known and registered somewhere in a predefined standard format. These hardware capabilities can be compared to the hardware capability requirements (HCR; required in the skill framework) of each skill. This matching between hardware and skill can be done by the skill-selector during initialization of the system. To do this in a smart way, the description of the hardware configuration and the HCR need to be in a similar format. One possible way of doing this will be discussed below.<br><br />
<br />
===Hardware configuration files===<br />
On the approach level of the system architecture, it was decided to use moving robots with camera’s to perform the system tasks. Different robots might have different capabilities. This should not hinder the system from operating correctly. It should preferably even be possible to swap robots without losing system functionality. This is why the hardware is abstracted in the architecture. Each piece of hardware needs to know its own performance and share this information with the skill-selector. For the skill-selector to function properly, the performance of the hardware needs to be defined in a structured way. To do this, it is decided that every piece of hardware needs to have its own configuration file. In this file, a wide range of quantifiable performance indices are stored. Because one robot can have multiple cameras, the performance can be split into a robot-specific part and one or more camera parts.<br />
<center>[[File:HCR.png|thumb|center|500px|Hardware configuration]]</center><br />
===Skill HCR===<br />
Some skills might require some capability of the hardware (HCR) in order to perform as desired. To make it clear to the skill-matcher what these requirements are, each skill is prefixed with an HCR. This HCR should give bounds on the performance indicators in the configuration files. For instance; “to execute skill 1 a piece of hardware is required which can move in x and y-direction with at least 5 m/s”. The skill-matcher can, in turn, look at each configuration file in order to determine what hardware can drive in x and y-direction with at least this speed. As a simple first implementation, the HCR can be implemented as a structure. This structure will consist of a robot-capability structure and a camera capability structure:<br><br />
'''Struct''' ''HCR_struct'' {RobotCap,CameraCap}<br><br />
<br />
'''Struct''' ''RobotCap''{<br><br />
DOF_matrix<br><br />
Occupancy_bound_representation<br><br />
Signal_device<br><br />
Number_of_cams<br><br />
}<br><br />
'''Struct''' ''CameraCap''{<br><br />
DOF_matrix<br><br />
Resolution<br><br />
Frame_rate<br><br />
Detection_box_representation<br><br />
}<br><br />
Each entry in the HCR structure is already discussed. The DOF-matrix holds all information about the possible degrees of freedom and their bounds in one matrix. The rows of this matrix will represent the direction of the degree of freedom. The first column will say whether the robot has this DOF and the other columns will give upper and lower bounds. One example is given below:<br><br />
<center>[[File:skillHCR.png|thumb|center|500px|Skill-HCR]]</center><br />
This could reflect a robot which is able to move in the x- and y-direction and rotate around the ψ-angle. The movement in these directions is not limited in space, hence the infinite position bounds. The robot is able to drive with a maximum speed of 5 m/s and a maximum acceleration of 2 m/s^2 in both directions. It can rotate with a maximum speed of 8 rad/s and a maximum acceleration of 4 rad/s^2. For this example the bounds are symmetric, this need not necessarily be the case for each robot. A similar matrix can be made for every camera on the robot. <br><br />
===Hardware matching===<br />
Every skill that is implemented can be numbered, either predefined or during initialization of the system. Each hardware component and its cameras can be numbered as well. During initialization, the skill selector can check for every skill what the HCR are for that skill. It can then compare this to the configuration files of every hardware component. Based on this comparison, the hardware component can either perform the skill or not. A matrix can be constructed that stores which skill can be executed by which component. An example of such a matching matrix is given below:<br><br />
<center>[[File:HWMatching.png|thumb|center|500px|HW Matching matrix]]</center><br />
<br />
From this matrix it can be seen that there exist six skills in total and there are three agents available. Agent 1 has two camera’s which it can use to perform the entire set of skills. Agent 2 has only one camera and with this camera it can only perform skill 1 and 4. Agent 3 can perform all skills with camera 3 and can even perform some skills with multiple cameras. From this matrix it becomes clear that all skills can be performed with this set of agents. Some skills, like skill 6, can only be performed by two cameras while skill 1 is very well covered. This matrix can be used to determine if the set of agents is enough to cover all skills. It could also be used to determine which robot is most suited for which role, e.g. main referee and line referee. This matrix is constructed during initialization of the system. However, it could happen that an agent is added or subtracted from the system. Therefore, the skill selector should frequently check which agents are still in the system and, if necessary, update the matching matrix. <br><br />
===Role restirctions===<br />
If there are multiple agents in the system, it could be necessary to assign roles to each of them. This role assignment could be determined beforehand, for instance by specifying a preferred role in the configuration file of each agent. Another way to assign roles can be based on the matching matrix. Based on the skills each agent can perform the roles are divided. Some roles might impose restrictions on the agent. These restrictions can be interpreted in the same way as the HCR of the skill-framework. Where the HCR imposes some minima on the hardware requirements, the role restriction (RR) sets some maximal allowed values for the hardware. For example, the line referee is only allowed to move alongside one end of the field with a maximum linear velocity of 3 m/s. The DOF-matrix of the RR structure might look like:<br><br />
<center>[[File:roleRestriction.png|thumb|center|500px|Role restriction matrix]]</center><br />
Suppose the field is 12 meters long and 9 meters wide. The agent which is performing the role described by this RR is only able to move next to the field, with a margin of half a meter, over the entire length of the field. The maximal absolute velocity at which it is allowed to do this is 3 m/s.<br><br />
<br />
===Role matching===<br />
The RR also restricts the skills which an agent with a certain role is allowed to do. If it is decided which agent will fulfill a certain role, the matching matrix needs to be updated. This update can only remove entries in the matching matrix, since the restriction can never add functionality to the hardware. This process raises a problem: if the role assigning is done based on the matching matrix and the RR changes this matrix, the role assigning might change. This will probably call for an iterative approach to determine which agent is best fitted for a role in order to get a good coverage of all the skills.<br />
<br />
==Supervisor (and coordinator)==<br />
The supervisor is responsible for monitoring the system tasks and for coordinating the subsystems with respect to these tasks. This involves dynamically distributing subtasks amongst the subsystems in an efficient and effective manner. The operation of the supervisor can be divided into a default process flow and a set of alternative process flows. The latter are activated by a certain trigger occurring during the default process, and can be terminated by another trigger. <br><br />
The top half of Figure 1 shows the default process flow of the supervisor. For each subsystem, the process flow is divided into two sections: one (the colored rectangle) representing processes taking place on the subsystem hardware, and one (the black rectangle) representing the processes taking place on a separate processing unit. These processing units can be one and the same, or actually separated, in which case the world model data needs to be synchronized between these units.<br><br />
<center>[[File:supervisor_Flow.png|thumb|center|600px|Supervisor process flow]]</center><br />
Note that the central process in this default flow is “Update World Model”, indicating that the entire default process flow revolves around maintaining an accurate World Model (WM). Information, or lack of it, in this WM can then trigger an alternative process flow. The lower half of Figure 1 shows these alternative flows, five in total. Table 1 lists the names, descriptions and activating and terminating triggers for each alternative flow.<br />
<center>[[File:table_Supervisor.png|thumb|center|600px|Alternative flows, including descriptions and activation and termination triggers.]]</center><br />
===Supervisor Requirements===<br />
====Skills====<br />
<br />
=====Basic=====<br />
• Move drone<br><br />
• Move turtle<br><br />
• Take snapshot with top camera<br><br />
• Take snapshot with Ai-Ball<br><br />
• Take snapshot with Kinect<br><br />
• Take snapshot with omnivision<br><br />
• Whistle<br><br />
=====Advanced=====<br />
• Detect lines<br><br />
• Detect regions<br><br />
• Search ball<br><br />
• Locate ball<br><br />
• Determine whether ball is in/out<br><br />
• Search players<br><br />
• Locate players<br><br />
• Detect space between players<br><br />
• Locate drone<br><br />
• Plan paths for drone<br><br />
• Locate turtle<br><br />
• Plan paths for turtle<br><br />
<br />
====Control automata====<br />
<center>[[File:control_Automata_1.png|thumb|center|600px|]]</center><br />
<center>[[File:control_Automata_2.png|thumb|center|600px|]]</center><br />
<br />
====Observer automata====<br />
<center>[[File:Observer_Automata_1.png|thumb|center|600px|]]</center><br />
<center>[[File:Observer_Automata_2.png|thumb|center|600px|]]</center><br />
====Drone movement requirements====<br />
=====TAKEOFF=====<br />
'''&'''Battery percentage above 20%<br><br />
'''&'''Drone position is known in world model<br><br />
<br />
=====LAND=====<br />
'''||'''Battery percentage not above 20%<br><br />
'''||'''Drone is outside field<br><br />
<br />
=====(ENABLE REFERENCE TO) FLY TO HEIGHT OF 2 METER=====<br />
'''&'''Drone is (approximately) at 1 meter<br><br />
<br />
=====MOVE TOWARDS CENTER OF FIELD=====<br />
'''&'''Drone is close to the outer line of the field<br><br />
<br />
====Path planning Requirements====<br />
=====PLAN PATH FOR DRONE=====<br />
'''&'''Drone is (approximately) at reference height<br />
<br />
====PLAN PATH FOR TURTLE====<br />
'''&'''Turtle is at the side-line<br />
<br />
====Whistle Requirements====<br />
'''||'''Ball (in snapshot) is located in region with label “out”<br><br />
'''||'''Two objects (in snapshot) are touching (and visible as an blob)<br />
<br />
====Enable/Disable detectors====<br />
=====LINE DETECTOR'''*'''=====<br />
'''&'''Lines are expected to be visible in a snapshot<br><br />
'''*'''Enabling the algorithm for detecting lines will also enable the algorithm to label the regions separated by the detected lines.<br><br />
<br />
=====COLLISION DETECTOR=====<br />
'''&'''Two players are expected to be visible in a snapshot<br><br />
<br />
=====LOCATE BALL=====<br />
'''&''' Ball position is known in world model<br><br />
<br />
=====LOCATE PLAYERS=====<br />
'''&''' All player positions are known in world model<br></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=System_Architecture_MSD16&diff=39676System Architecture MSD162017-05-04T19:19:25Z<p>Asinha: /* PLAN PATH FOR TURTLE */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
= Paradigm =<br />
There are many paradigms available for developing architecture for robotic systems. These paradigms are available in Chapter 8 in the [http://download.springer.com/static/pdf/864/bok%253A978-3-540-30301-5.pdf?originUrl=http%3A%2F%2Flink.springer.com%2Fbook%2F10.1007%2F978-3-540-30301-5&token2=exp=1490798126~acl=%2Fstatic%2Fpdf%2F864%2Fbok%25253A978-3-540-30301-5.pdf%3ForiginUrl%3Dhttp%253A%252F%252Flink.springer.com%252Fbook%252F10.1007%252F978-3-540-30301-5*~hmac=10768f14783408dd794f24107129219bf0bd18a79067a7f204b450620c0608b5 Springer Handbook of Robotics]. In this project the [http://cstwiki.wtb.tue.nl/images/20150429-EMC-TUe-CompositionPattern-nup.pdf paradigm] developed at KU Leuven in collaboration with TU Eindhoven and was used. This paradigm is followed at TechUnited. <br><br />
<center>[[File:pradigm.png|thumb|center|500px|Architecture paradigm]]</center><br />
This paradigm defines Tasks (blue block) using the objectives set with respect to the context of the project, i.e. what needs to be done. The Skills (yellow block) then define the implementation, i.e. how to these tasks will be completed. The hardware (grey block) or the robots, i.e. the ones who will complete the tasks by implementing the skills are the agents available at the system-architects’ disposal and the choice made to pick certain agents over the other is influenced by system requirements. The system requirements are influenced by a number of factors especially the objectives and the context of the project. Coming back to the agents, they gather information from the environment and this information is first processed (filtered, fused etc.) and then stored in the world model (green block) which allows it to be accessed afterwards. Finally the visualization or the user interface (orange block) is a tool to observe how the system sees the environment. <br><br />
In the task block, a sub-block is the Task Monitor (highlighted in red within the Task context block). This block was interpreted as a supervisor and a coordinator. It keeps an eye on the tasks and skills. It is possible to have some tasks which are a completed by a series of skills. In such a case it becomes important to schedule skills as these could either have to be executed sequentially or parallelly. It might become important for the system to track which skills have been completed and which need to be executed next. Task control feedback/ feedforward was not taken into consideration in much detail in the system architecture that was developed for this project. <br><br />
<center>[[File:architecture_1.png|thumb|center|600px|Architecture]]</center><br />
<br />
= Layered Approach =<br />
<br />
The system architecture derived from the concepts presented in the above-described paradigm can be seen below. Every block but the GUI context is highlighted in this architecture. The Coordinator block interacts with the Supervisor to determine which skills need to be executed and is assisted by the Skill Matcher in picking the right hardware agent best suited to execute the necessary skill. The concept behind the skill matcher is explained in a later section. <br><br />
<center>[[File:architecture_1.png|thumb|center|600px|Architecture]]</center><br />
While developing the architecture, a layered approach was used. Based on the objectives, and the requirements/constraints imposed by them, design choices were made. As shown in the Figure below starting from the top, with each subsequent layer the level of detail increases. <br><br />
<center>[[File:layered_Approach.png|thumb|center|450px|The three layers in the architecture]]</center><br />
<br />
= System Layer =<br />
Detecting predefined events and enforcing the corresponding rules were the two main system objectives that could be taken into account for this architecture. Two events, namely<br><br />
1. ball going out of pitch (B.O.O.P) and<br><br />
2. collision between players<br><br />
were considered to be in scope of this project. For detection and enforcement of these rules the Tasks and Skills blocks were filled as shown in the figure below.<br><br />
<center>[[File:Layer1.png|thumb|center|750px|System layer of the architecture]]</center><br />
= Approach Layer =<br />
Further, by taking into account some of the system requirements one can bring more tasks and skills into the scope. For example, if one of the requirement is ‘consistency’, it can be defined as, ‘the autonomous referee system should be able to capture the gameplay dynamically’. This requirement might be further refined and result in the decision, ‘the hardware must have multiple cameras in the field which allow effective capture the active gameplay ’. Further, this decision could be polished into having ‘multiple movable cameras in the field which allow effective capture the active gameplay ’. The current layer in the architecture represents an evolution towards implementation. This is reflected in the narrowing down of the elements in the Hardware Block from Sensors and/or devices to multiple cameras which could move. Though this statement is still vague, it is more refined than its predecessor.<br />
<center>[[File:Layered2.png|thumb|center|750px|The approach layer of the architecture]]</center><br />
<br />
= Implementation Layer =<br />
Referring back to the ''Three layers in the architecture'' diagram as the requirements were further refined, several design choices were made on the software and hardware. These choices were influenced not only by requirements but also by constraints. For example, using a drone and the TechUnited turtle were imposed on the project. These did not come into the picture in the previous layers. The current layer of the architecture is the Implementation Layer as depicted in the figure.<br><br />
<center>[[File:Layer3.png|thumb|center|700px|Implementation layer in the architecture]]</center><br />
At this level of the architecture, one can write the software. The major decisions that were made here were regarding the software such as specific: <br><br />
1. image processing algorithms (=skills), <br><br />
2. sensor fusion algorithms (in the world model) and<br><br />
3. communication protocols (communication between different hardware components).<br><br />
This layer is also the realistic representation of the actual implementation the project team ended up with i.e. only the detection based tasks (and the corresponding skills).<br> Below is the screen-shot of the Simulink implementation. What is highlighted here is the fact that the software developed in this project had a proper correspondence with the paradigm used of the system architecture. The congruity in the colors of the blocks here and the ones that have shown earlier in the description of the paradigm is evident. <br><br />
<br />
<center>[[File:Simulink_Arch.png|thumb|center|700px|Simulink implementation]]</center><br />
<br />
The Supervisor, coordinator and the skill-matcher were not in the final implementation. Bu their conceptual inception is presented here and can be further worked on and included in future implementations. <br><br />
<br />
==Skill-Matcher==<br />
For each defined skill, the skill-selector needs to determine which piece of hardware is able to perform this skill. Therefore the capabilities of each piece of hardware needs to be known and registered somewhere in a predefined standard format. These hardware capabilities can be compared to the hardware capability requirements (HCR; required in the skill framework) of each skill. This matching between hardware and skill can be done by the skill-selector during initialization of the system. To do this in a smart way, the description of the hardware configuration and the HCR need to be in a similar format. One possible way of doing this will be discussed below.<br><br />
<br />
===Hardware configuration files===<br />
On the approach level of the system architecture, it was decided to use moving robots with camera’s to perform the system tasks. Different robots might have different capabilities. This should not hinder the system from operating correctly. It should preferably even be possible to swap robots without losing system functionality. This is why the hardware is abstracted in the architecture. Each piece of hardware needs to know its own performance and share this information with the skill-selector. For the skill-selector to function properly, the performance of the hardware needs to be defined in a structured way. To do this, it is decided that every piece of hardware needs to have its own configuration file. In this file, a wide range of quantifiable performance indices are stored. Because one robot can have multiple cameras, the performance can be split into a robot-specific part and one or more camera parts.<br />
<center>[[File:HCR.png|thumb|center|500px|Hardware configuration]]</center><br />
===Skill HCR===<br />
Some skills might require some capability of the hardware (HCR) in order to perform as desired. To make it clear to the skill-matcher what these requirements are, each skill is prefixed with an HCR. This HCR should give bounds on the performance indicators in the configuration files. For instance; “to execute skill 1 a piece of hardware is required which can move in x and y-direction with at least 5 m/s”. The skill-matcher can, in turn, look at each configuration file in order to determine what hardware can drive in x and y-direction with at least this speed. As a simple first implementation, the HCR can be implemented as a structure. This structure will consist of a robot-capability structure and a camera capability structure:<br><br />
'''Struct''' ''HCR_struct'' {RobotCap,CameraCap}<br><br />
<br />
'''Struct''' ''RobotCap''{<br><br />
DOF_matrix<br><br />
Occupancy_bound_representation<br><br />
Signal_device<br><br />
Number_of_cams<br><br />
}<br><br />
'''Struct''' ''CameraCap''{<br><br />
DOF_matrix<br><br />
Resolution<br><br />
Frame_rate<br><br />
Detection_box_representation<br><br />
}<br><br />
Each entry in the HCR structure is already discussed. The DOF-matrix holds all information about the possible degrees of freedom and their bounds in one matrix. The rows of this matrix will represent the direction of the degree of freedom. The first column will say whether the robot has this DOF and the other columns will give upper and lower bounds. One example is given below:<br><br />
<center>[[File:skillHCR.png|thumb|center|500px|Skill-HCR]]</center><br />
This could reflect a robot which is able to move in the x- and y-direction and rotate around the ψ-angle. The movement in these directions is not limited in space, hence the infinite position bounds. The robot is able to drive with a maximum speed of 5 m/s and a maximum acceleration of 2 m/s^2 in both directions. It can rotate with a maximum speed of 8 rad/s and a maximum acceleration of 4 rad/s^2. For this example the bounds are symmetric, this need not necessarily be the case for each robot. A similar matrix can be made for every camera on the robot. <br><br />
===Hardware matching===<br />
Every skill that is implemented can be numbered, either predefined or during initialization of the system. Each hardware component and its cameras can be numbered as well. During initialization, the skill selector can check for every skill what the HCR are for that skill. It can then compare this to the configuration files of every hardware component. Based on this comparison, the hardware component can either perform the skill or not. A matrix can be constructed that stores which skill can be executed by which component. An example of such a matching matrix is given below:<br><br />
<center>[[File:HWMatching.png|thumb|center|500px|HW Matching matrix]]</center><br />
<br />
From this matrix it can be seen that there exist six skills in total and there are three agents available. Agent 1 has two camera’s which it can use to perform the entire set of skills. Agent 2 has only one camera and with this camera it can only perform skill 1 and 4. Agent 3 can perform all skills with camera 3 and can even perform some skills with multiple cameras. From this matrix it becomes clear that all skills can be performed with this set of agents. Some skills, like skill 6, can only be performed by two cameras while skill 1 is very well covered. This matrix can be used to determine if the set of agents is enough to cover all skills. It could also be used to determine which robot is most suited for which role, e.g. main referee and line referee. This matrix is constructed during initialization of the system. However, it could happen that an agent is added or subtracted from the system. Therefore, the skill selector should frequently check which agents are still in the system and, if necessary, update the matching matrix. <br><br />
===Role restirctions===<br />
If there are multiple agents in the system, it could be necessary to assign roles to each of them. This role assignment could be determined beforehand, for instance by specifying a preferred role in the configuration file of each agent. Another way to assign roles can be based on the matching matrix. Based on the skills each agent can perform the roles are divided. Some roles might impose restrictions on the agent. These restrictions can be interpreted in the same way as the HCR of the skill-framework. Where the HCR imposes some minima on the hardware requirements, the role restriction (RR) sets some maximal allowed values for the hardware. For example, the line referee is only allowed to move alongside one end of the field with a maximum linear velocity of 3 m/s. The DOF-matrix of the RR structure might look like:<br><br />
<center>[[File:roleRestriction.png|thumb|center|500px|Role restriction matrix]]</center><br />
Suppose the field is 12 meters long and 9 meters wide. The agent which is performing the role described by this RR is only able to move next to the field, with a margin of half a meter, over the entire length of the field. The maximal absolute velocity at which it is allowed to do this is 3 m/s.<br><br />
<br />
===Role matching===<br />
The RR also restricts the skills which an agent with a certain role is allowed to do. If it is decided which agent will fulfill a certain role, the matching matrix needs to be updated. This update can only remove entries in the matching matrix, since the restriction can never add functionality to the hardware. This process raises a problem: if the role assigning is done based on the matching matrix and the RR changes this matrix, the role assigning might change. This will probably call for an iterative approach to determine which agent is best fitted for a role in order to get a good coverage of all the skills.<br />
<br />
==Supervisor (and coordinator)==<br />
The supervisor is responsible for monitoring the system tasks and for coordinating the subsystems with respect to these tasks. This involves dynamically distributing subtasks amongst the subsystems in an efficient and effective manner. The operation of the supervisor can be divided into a default process flow and a set of alternative process flows. The latter are activated by a certain trigger occurring during the default process, and can be terminated by another trigger. <br><br />
The top half of Figure 1 shows the default process flow of the supervisor. For each subsystem, the process flow is divided into two sections: one (the colored rectangle) representing processes taking place on the subsystem hardware, and one (the black rectangle) representing the processes taking place on a separate processing unit. These processing units can be one and the same, or actually separated, in which case the world model data needs to be synchronized between these units.<br><br />
<center>[[File:supervisor_Flow.png|thumb|center|600px|Supervisor process flow]]</center><br />
Note that the central process in this default flow is “Update World Model”, indicating that the entire default process flow revolves around maintaining an accurate World Model (WM). Information, or lack of it, in this WM can then trigger an alternative process flow. The lower half of Figure 1 shows these alternative flows, five in total. Table 1 lists the names, descriptions and activating and terminating triggers for each alternative flow.<br />
<center>[[File:table_Supervisor.png|thumb|center|600px|Alternative flows, including descriptions and activation and termination triggers.]]</center><br />
===Supervisor Requirements===<br />
====Skills====<br />
<br />
=====Basic=====<br />
• Move drone<br><br />
• Move turtle<br><br />
• Take snapshot with top camera<br><br />
• Take snapshot with Ai-Ball<br><br />
• Take snapshot with Kinect<br><br />
• Take snapshot with omnivision<br><br />
• Whistle<br><br />
=====Advanced=====<br />
• Detect lines<br><br />
• Detect regions<br><br />
• Search ball<br><br />
• Locate ball<br><br />
• Determine whether ball is in/out<br><br />
• Search players<br><br />
• Locate players<br><br />
• Detect space between players<br><br />
• Locate drone<br><br />
• Plan paths for drone<br><br />
• Locate turtle<br><br />
• Plan paths for turtle<br><br />
<br />
====Control automata====<br />
<center>[[File:control_Automata_1.png|thumb|center|600px|]]</center><br />
<center>[[File:control_Automata_2.png|thumb|center|600px|]]</center><br />
<br />
====Observer automata====<br />
<center>[[File:Observer_Automata_1.png|thumb|center|600px|]]</center><br />
<center>[[File:Observer_Automata_2.png|thumb|center|600px|]]</center><br />
====Drone movement requirements====<br />
=====TAKEOFF=====<br />
'''&'''Battery percentage above 20%<br><br />
'''&'''Drone position is known in world model<br><br />
<br />
=====LAND=====<br />
'''||'''Battery percentage not above 20%<br><br />
'''||'''Drone is outside field<br><br />
<br />
=====(ENABLE REFERENCE TO) FLY TO HEIGHT OF 2 METER=====<br />
'''&'''Drone is (approximately) at 1 meter<br><br />
<br />
=====MOVE TOWARDS CENTER OF FIELD=====<br />
'''&'''Drone is close to the outer line of the field<br><br />
<br />
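In the requirement lists of this section, conditions prefixed with '''&''' are read as jointly required (logical AND), while conditions prefixed with '''||''' are alternatives (logical OR), any one of which is sufficient. A minimal MATLAB sketch, with assumed world-model field names, evaluates the TAKEOFF and LAND guards:<br />
<pre>
% Minimal sketch (world-model field names are assumptions): '&'-conditions are
% combined with logical AND, '||'-conditions with logical OR.
wm = struct('droneBattery', 55, 'dronePosKnown', true, 'droneOutsideField', false);
canTakeOff = (wm.droneBattery > 20) && wm.dronePosKnown;        % TAKEOFF: all '&' conditions hold
mustLand   = (wm.droneBattery <= 20) || wm.droneOutsideField;   % LAND: any '||' condition holds
</pre>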
====Path planning Requirements====<br />
=====PLAN PATH FOR DRONE=====<br />
'''&'''Drone is (approximately) at reference height<br><br />
=====PLAN PATH FOR TURTLE=====<br />
'''&'''Turtle is at the side-line<br><br />
<br />
====Whistle Requirements====<br />
'''||'''Ball (in snapshot) is located in region with label “out”<br><br />
'''||'''Two objects (in snapshot) are touching (and visible as one blob)<br />
<br />
====Enable/Disable detectors====<br />
=====LINE DETECTOR'''*'''=====<br />
'''&'''Lines are expected to be visible in a snapshot<br><br />
'''*'''Enabling the algorithm for detecting lines will also enable the algorithm to label the regions separated by the detected lines.<br><br />
<br />
=====COLLISION DETECTOR=====<br />
'''&'''Two players are expected to be visible in a snapshot<br><br />
<br />
=====LOCATE BALL=====<br />
'''&''' Ball position is known in world model<br><br />
<br />
=====LOCATE PLAYERS=====<br />
'''&''' All player positions are known in world model<br></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=System_Architecture_MSD16&diff=39675System Architecture MSD162017-05-04T19:19:18Z<p>Asinha: /* MOVE TOWARDS CENTER OF FIELD */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
= Paradigm =<br />
There are many paradigms available for developing an architecture for robotic systems. These paradigms are described in Chapter 8 of the [http://download.springer.com/static/pdf/864/bok%253A978-3-540-30301-5.pdf?originUrl=http%3A%2F%2Flink.springer.com%2Fbook%2F10.1007%2F978-3-540-30301-5&token2=exp=1490798126~acl=%2Fstatic%2Fpdf%2F864%2Fbok%25253A978-3-540-30301-5.pdf%3ForiginUrl%3Dhttp%253A%252F%252Flink.springer.com%252Fbook%252F10.1007%252F978-3-540-30301-5*~hmac=10768f14783408dd794f24107129219bf0bd18a79067a7f204b450620c0608b5 Springer Handbook of Robotics]. In this project the [http://cstwiki.wtb.tue.nl/images/20150429-EMC-TUe-CompositionPattern-nup.pdf paradigm] developed at KU Leuven in collaboration with TU Eindhoven was used. This paradigm is also followed at TechUnited. <br><br />
<center>[[File:pradigm.png|thumb|center|500px|Architecture paradigm]]</center><br />
This paradigm defines Tasks (blue block) using the objectives set with respect to the context of the project, i.e. what needs to be done. The Skills (yellow block) then define the implementation, i.e. how these tasks will be completed. The Hardware (grey block) consists of the robots that complete the tasks by executing the skills; these are the agents at the system architects’ disposal, and the choice to pick certain agents over others is driven by the system requirements. The system requirements are in turn influenced by a number of factors, especially the objectives and the context of the project. The agents gather information from the environment; this information is first processed (filtered, fused, etc.) and then stored in the World Model (green block), which allows it to be accessed afterwards. Finally, the visualization or user interface (orange block) is a tool to observe how the system sees the environment. <br><br />
Within the Task block there is a sub-block, the Task Monitor (highlighted in red within the Task context block). This block was interpreted as a supervisor and a coordinator. It keeps an eye on the tasks and skills. Some tasks are completed by a series of skills; in such a case it becomes important to schedule the skills, as they may have to be executed either sequentially or in parallel, and the system has to track which skills have been completed and which need to be executed next. Task-control feedback/feedforward was not considered in much detail in the system architecture developed for this project. <br><br />
<center>[[File:architecture_1.png|thumb|center|600px|Architecture]]</center><br />
<br />
= Layered Approach =<br />
<br />
The system architecture derived from the concepts presented in the above-described paradigm can be seen below. Every block except the GUI context is highlighted in this architecture. The Coordinator block interacts with the Supervisor to determine which skills need to be executed, and is assisted by the Skill Matcher in picking the hardware agent best suited to execute the necessary skill. The concept behind the skill matcher is explained in a later section. <br><br />
<center>[[File:architecture_1.png|thumb|center|600px|Architecture]]</center><br />
While developing the architecture, a layered approach was used. Based on the objectives and the requirements/constraints imposed by them, design choices were made. As shown in the figure below, starting from the top, the level of detail increases with each subsequent layer. <br><br />
<center>[[File:layered_Approach.png|thumb|center|450px|The three layers in the architecture]]</center><br />
<br />
= System Layer =<br />
Detecting predefined events and enforcing the corresponding rules were the two main system objectives that could be taken into account for this architecture. Two events, namely<br><br />
1. ball going out of pitch (B.O.O.P) and<br><br />
2. collision between players<br><br />
were considered to be within the scope of this project. For the detection and enforcement of these rules, the Tasks and Skills blocks were filled in as shown in the figure below.<br><br />
<center>[[File:Layer1.png|thumb|center|750px|System layer of the architecture]]</center><br />
= Approach Layer =<br />
Further, by taking into account some of the system requirements, one can bring more tasks and skills into scope. For example, if one of the requirements is ‘consistency’, it can be defined as ‘the autonomous referee system should be able to capture the gameplay dynamically’. This requirement might be refined into the decision ‘the hardware must have multiple cameras in the field which allow effective capture of the active gameplay’, which in turn could be sharpened to ‘multiple movable cameras in the field which allow effective capture of the active gameplay’. This layer of the architecture represents an evolution towards implementation, reflected in the narrowing down of the elements in the Hardware block from sensors and/or devices to multiple cameras which can move. Though this statement is still vague, it is more refined than its predecessor.<br />
<center>[[File:Layered2.png|thumb|center|750px|The approach layer of the architecture]]</center><br />
<br />
= Implementation Layer =<br />
Referring back to the ''Three layers in the architecture'' diagram, as the requirements were further refined, several design choices were made regarding the software and hardware. These choices were influenced not only by requirements but also by constraints: for example, the use of a drone and the TechUnited turtle was imposed on the project. These constraints did not come into the picture in the previous layers. The current layer of the architecture is the Implementation Layer, as depicted in the figure.<br><br />
<center>[[File:Layer3.png|thumb|center|700px|Implementation layer in the architecture]]</center><br />
At this level of the architecture, one can write the software. The major decisions made here concerned the software, such as the specific: <br><br />
1. image processing algorithms (=skills), <br><br />
2. sensor fusion algorithms (in the world model) and<br><br />
3. communication protocols (communication between different hardware components).<br><br />
This layer is also a realistic representation of the actual implementation the project team ended up with, i.e. only the detection-based tasks (and the corresponding skills).<br> Below is a screenshot of the Simulink implementation. It highlights the fact that the software developed in this project corresponds closely to the paradigm used for the system architecture; the congruity between the colors of the blocks here and the ones shown earlier in the description of the paradigm is evident. <br><br />
<br />
<center>[[File:Simulink_Arch.png|thumb|center|700px|Simulink implementation]]</center><br />
<br />
The supervisor, coordinator and skill-matcher were not part of the final implementation, but their conceptual design is presented here and can be further worked out and included in future implementations. <br><br />
<br />
==Skill-Matcher==<br />
For each defined skill, the skill-selector needs to determine which piece of hardware is able to perform that skill. Therefore the capabilities of each piece of hardware need to be known and registered in a predefined, standard format. These hardware capabilities can then be compared to the hardware capability requirements (HCR; part of the skill framework) of each skill. This matching between hardware and skills can be done by the skill-selector during initialization of the system. To do this in a smart way, the description of the hardware configuration and the HCR need to be in a similar format. One possible way of doing this is discussed below.<br><br />
<br />
===Hardware configuration files===<br />
On the approach level of the system architecture, it was decided to use moving robots with cameras to perform the system tasks. Different robots might have different capabilities; this should not hinder the system from operating correctly, and it should preferably even be possible to swap robots without losing system functionality. This is why the hardware is abstracted in the architecture. Each piece of hardware needs to know its own performance and share this information with the skill-selector. For the skill-selector to function properly, the performance of the hardware needs to be defined in a structured way. To this end, every piece of hardware has its own configuration file, in which a wide range of quantifiable performance indices are stored. Because one robot can have multiple cameras, the performance description is split into a robot-specific part and one or more camera parts.<br />
<center>[[File:HCR.png|thumb|center|500px|Hardware configuration]]</center><br />
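One possible form of such a configuration file is sketched below in MATLAB. All field names and values are assumptions made for illustration; the actual indices would follow the hardware configuration format shown above.<br />
<pre>
% Minimal sketch (assumed fields and values): a per-agent configuration file,
% split into a robot-specific part and one or more camera parts.
cfg.robot.name          = 'turtle_1';
cfg.robot.signal_device = true;           % e.g. able to give a whistle signal
cfg.robot.num_cams      = 2;
cfg.cameras(1).resolution = [640 480];
cfg.cameras(1).frame_rate = 30;           % frames per second
cfg.cameras(2).resolution = [1920 1080];
cfg.cameras(2).frame_rate = 15;
save('turtle_1_config.mat', 'cfg');       % stored per agent, loaded by the skill-selector at init
</pre>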
===Skill HCR===<br />
Some skills require certain hardware capabilities (HCR) in order to perform as desired. To make it clear to the skill-matcher what these requirements are, each skill is prefixed with an HCR. This HCR gives bounds on the performance indicators in the configuration files. For instance: “to execute skill 1, a piece of hardware is required which can move in the x- and y-direction at a speed of at least 5 m/s”. The skill-matcher can, in turn, look at each configuration file to determine which hardware can drive in the x- and y-direction at at least this speed. As a simple first implementation, the HCR can be implemented as a structure, consisting of a robot-capability structure and a camera-capability structure:<br><br />
'''Struct''' ''HCR_struct'' {RobotCap,CameraCap}<br><br />
<br />
'''Struct''' ''RobotCap''{<br><br />
DOF_matrix<br><br />
Occupancy_bound_representation<br><br />
Signal_device<br><br />
Number_of_cams<br><br />
}<br><br />
'''Struct''' ''CameraCap''{<br><br />
DOF_matrix<br><br />
Resolution<br><br />
Frame_rate<br><br />
Detection_box_representation<br><br />
}<br><br />
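A direct MATLAB counterpart of this structure could look as follows. This is only a sketch; the concrete bound values and string representations are assumptions, not values from the project.<br />
<pre>
% Minimal sketch: the HCR structure as nested MATLAB structs (all values assumed).
HCR.RobotCap.DOF_matrix = [1 -inf inf -5 5;    % x: hardware must reach at least 5 m/s
                           1 -inf inf -5 5];   % y: hardware must reach at least 5 m/s
HCR.RobotCap.Occupancy_bound_representation = 'cylinder';   % assumed representation
HCR.RobotCap.Signal_device  = false;           % no signalling device required
HCR.RobotCap.Number_of_cams = 1;
HCR.CameraCap.DOF_matrix    = [1 -inf inf -1 1];             % e.g. one pan axis
HCR.CameraCap.Resolution    = [640 480];       % minimum resolution
HCR.CameraCap.Frame_rate    = 20;              % minimum frames per second
HCR.CameraCap.Detection_box_representation = 'frustum';      % assumed representation
</pre>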
Each entry in the HCR structure has been discussed above. The DOF-matrix holds all information about the possible degrees of freedom and their bounds in one matrix. Each row of this matrix represents one degree of freedom; the first column states whether the robot has this DOF and the other columns give the upper and lower bounds. One example is given below:<br><br />
<center>[[File:skillHCR.png|thumb|center|500px|Skill-HCR]]</center><br />
This could reflect a robot which is able to move in the x- and y-direction and rotate around the ψ-angle. The movement in these directions is not limited in space, hence the infinite position bounds. The robot is able to drive with a maximum speed of 5 m/s and a maximum acceleration of 2 m/s^2 in both directions. It can rotate with a maximum speed of 8 rad/s and a maximum acceleration of 4 rad/s^2. For this example the bounds are symmetric; this need not be the case for every robot. A similar matrix can be made for every camera on the robot. <br><br />
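The sketch below encodes this example DOF-matrix in MATLAB and checks it against the HCR quoted earlier (“move in the x- and y-direction at at least 5 m/s”). The column layout is an assumption; the figure above defines the actual format.<br />
<pre>
% Minimal sketch (assumed columns: [hasDOF, pos_lo, pos_hi, vel_lo, vel_hi, acc_lo, acc_hi]).
DOF = [1 -inf inf -5 5 -2 2;     % x:   |v| <= 5 m/s,  |a| <= 2 m/s^2
       1 -inf inf -5 5 -2 2;     % y
       1 -inf inf -8 8 -4 4];    % psi: |w| <= 8 rad/s, |a| <= 4 rad/s^2

HCR_skill1.DOF_required = [1 2];       % rows (x and y) that must be available
HCR_skill1.min_vel      = 5;           % required top speed in m/s

% The skill-matcher accepts the hardware if every required DOF exists and its
% velocity bound is at least the required minimum.
rows    = DOF(HCR_skill1.DOF_required, :);
matches = all(rows(:,1) == 1) && all(rows(:,5) >= HCR_skill1.min_vel);
</pre>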
===Hardware matching===<br />
Every implemented skill can be numbered, either beforehand or during initialization of the system. Each hardware component and its cameras can be numbered as well. During initialization, the skill-selector can look up the HCR of every skill and compare it to the configuration file of every hardware component. Based on this comparison, the hardware component either can or cannot perform the skill. A matrix can then be constructed that stores which skill can be executed by which component. An example of such a matching matrix is given below:<br><br />
<center>[[File:HWMatching.png|thumb|center|500px|HW Matching matrix]]</center><br />
<br />
From this matrix it can be seen that there are six skills in total and three agents available. Agent 1 has two cameras with which it can perform the entire set of skills. Agent 2 has only one camera, and with this camera it can only perform skills 1 and 4. Agent 3 can perform all skills with camera 3 and can even perform some skills with multiple cameras. The matrix thus shows that all skills can be performed with this set of agents. Some skills, like skill 6, are covered by only two cameras, while skill 1 is very well covered. This matrix can be used to determine whether the set of agents is sufficient to cover all skills. It can also be used to determine which robot is best suited for which role, e.g. main referee or line referee. The matrix is constructed during initialization of the system; however, agents may be added to or removed from the system at run time, so the skill-selector should regularly check which agents are still present and, if necessary, update the matching matrix. <br><br />
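A minimal sketch of this initialization step is given below. The number of skills and columns, and the meetsHCR helper, are assumptions standing in for the HCR comparison illustrated earlier.<br />
<pre>
% Minimal sketch (assumed sizes; meetsHCR is a placeholder for the real HCR check):
% build the matching matrix at initialization and verify skill coverage.
nSkills = 6;  nCols = 5;                 % e.g. 5 camera columns spread over 3 agents
M = false(nSkills, nCols);
for s = 1:nSkills
    for c = 1:nCols
        M(s, c) = meetsHCR(s, c);        % 1 if column c satisfies the HCR of skill s
    end
end
covered = all(any(M, 2));                % true if every skill has at least one column

function ok = meetsHCR(s, c)             % placeholder check (assumption, not project code)
    ok = mod(s + c, 2) == 0;             % stand-in for comparing an HCR against a config file
end
</pre>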
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
= Paradigm =<br />
There are many paradigms available for developing architecture for robotic systems. These paradigms are available in Chapter 8 in the [http://download.springer.com/static/pdf/864/bok%253A978-3-540-30301-5.pdf?originUrl=http%3A%2F%2Flink.springer.com%2Fbook%2F10.1007%2F978-3-540-30301-5&token2=exp=1490798126~acl=%2Fstatic%2Fpdf%2F864%2Fbok%25253A978-3-540-30301-5.pdf%3ForiginUrl%3Dhttp%253A%252F%252Flink.springer.com%252Fbook%252F10.1007%252F978-3-540-30301-5*~hmac=10768f14783408dd794f24107129219bf0bd18a79067a7f204b450620c0608b5 Springer Handbook of Robotics]. In this project the [http://cstwiki.wtb.tue.nl/images/20150429-EMC-TUe-CompositionPattern-nup.pdf paradigm] developed at KU Leuven in collaboration with TU Eindhoven and was used. This paradigm is followed at TechUnited. <br><br />
<center>[[File:pradigm.png|thumb|center|500px|Architecture paradigm]]</center><br />
This paradigm defines Tasks (blue block) using the objectives set with respect to the context of the project, i.e. what needs to be done. The Skills (yellow block) then define the implementation, i.e. how to these tasks will be completed. The hardware (grey block) or the robots, i.e. the ones who will complete the tasks by implementing the skills are the agents available at the system-architects’ disposal and the choice made to pick certain agents over the other is influenced by system requirements. The system requirements are influenced by a number of factors especially the objectives and the context of the project. Coming back to the agents, they gather information from the environment and this information is first processed (filtered, fused etc.) and then stored in the world model (green block) which allows it to be accessed afterwards. Finally the visualization or the user interface (orange block) is a tool to observe how the system sees the environment. <br><br />
In the task block, a sub-block is the Task Monitor (highlighted in red within the Task context block). This block was interpreted as a supervisor and a coordinator. It keeps an eye on the tasks and skills. It is possible to have some tasks which are a completed by a series of skills. In such a case it becomes important to schedule skills as these could either have to be executed sequentially or parallelly. It might become important for the system to track which skills have been completed and which need to be executed next. Task control feedback/ feedforward was not taken into consideration in much detail in the system architecture that was developed for this project. <br><br />
<center>[[File:architecture_1.png|thumb|center|600px|Architecture]]</center><br />
<br />
= Layered Approach =<br />
<br />
The system architecture derived from the concepts presented in the above-described paradigm can be seen below. Every block but the GUI context is highlighted in this architecture. The Coordinator block interacts with the Supervisor to determine which skills need to be executed and is assisted by the Skill Matcher in picking the right hardware agent best suited to execute the necessary skill. The concept behind the skill matcher is explained in a later section. <br><br />
<center>[[File:architecture_1.png|thumb|center|600px|Architecture]]</center><br />
While developing the architecture, a layered approach was used. Based on the objectives, and the requirements/constraints imposed by them, design choices were made. As shown in the Figure below starting from the top, with each subsequent layer the level of detail increases. <br><br />
<center>[[File:layered_Approach.png|thumb|center|450px|The three layers in the architecture]]</center><br />
<br />
= System Layer =<br />
Detecting predefined events and enforcing the corresponding rules were the two main system objectives that could be taken into account for this architecture. Two events, namely<br><br />
1. ball going out of pitch (B.O.O.P) and<br><br />
2. collision between players<br><br />
were considered to be in scope of this project. For detection and enforcement of these rules the Tasks and Skills blocks were filled as shown in the figure below.<br><br />
<center>[[File:Layer1.png|thumb|center|750px|System layer of the architecture]]</center><br />
= Approach Layer =<br />
Further, by taking into account some of the system requirements one can bring more tasks and skills into the scope. For example, if one of the requirement is ‘consistency’, it can be defined as, ‘the autonomous referee system should be able to capture the gameplay dynamically’. This requirement might be further refined and result in the decision, ‘the hardware must have multiple cameras in the field which allow effective capture the active gameplay ’. Further, this decision could be polished into having ‘multiple movable cameras in the field which allow effective capture the active gameplay ’. The current layer in the architecture represents an evolution towards implementation. This is reflected in the narrowing down of the elements in the Hardware Block from Sensors and/or devices to multiple cameras which could move. Though this statement is still vague, it is more refined than its predecessor.<br />
<center>[[File:Layered2.png|thumb|center|750px|The approach layer of the architecture]]</center><br />
<br />
= Implementation Layer =<br />
Referring back to the ''Three layers in the architecture'' diagram as the requirements were further refined, several design choices were made on the software and hardware. These choices were influenced not only by requirements but also by constraints. For example, using a drone and the TechUnited turtle were imposed on the project. These did not come into the picture in the previous layers. The current layer of the architecture is the Implementation Layer as depicted in the figure.<br><br />
<center>[[File:Layer3.png|thumb|center|700px|Implementation layer in the architecture]]</center><br />
At this level of the architecture, one can write the software. The major decisions that were made here were regarding the software such as specific: <br><br />
1. image processing algorithms (=skills), <br><br />
2. sensor fusion algorithms (in the world model) and<br><br />
3. communication protocols (communication between different hardware components).<br><br />
This layer is also the realistic representation of the actual implementation the project team ended up with i.e. only the detection based tasks (and the corresponding skills).<br> Below is the screen-shot of the Simulink implementation. What is highlighted here is the fact that the software developed in this project had a proper correspondence with the paradigm used of the system architecture. The congruity in the colors of the blocks here and the ones that have shown earlier in the description of the paradigm is evident. <br><br />
<br />
<center>[[File:Simulink_Arch.png|thumb|center|700px|Simulink implementation]]</center><br />
<br />
The Supervisor, coordinator and the skill-matcher were not in the final implementation. Bu their conceptual inception is presented here and can be further worked on and included in future implementations. <br><br />
<br />
==Skill-Matcher==<br />
For each defined skill, the skill-selector needs to determine which piece of hardware is able to perform this skill. Therefore the capabilities of each piece of hardware needs to be known and registered somewhere in a predefined standard format. These hardware capabilities can be compared to the hardware capability requirements (HCR; required in the skill framework) of each skill. This matching between hardware and skill can be done by the skill-selector during initialization of the system. To do this in a smart way, the description of the hardware configuration and the HCR need to be in a similar format. One possible way of doing this will be discussed below.<br><br />
<br />
===Hardware configuration files===<br />
On the approach level of the system architecture, it was decided to use moving robots with camera’s to perform the system tasks. Different robots might have different capabilities. This should not hinder the system from operating correctly. It should preferably even be possible to swap robots without losing system functionality. This is why the hardware is abstracted in the architecture. Each piece of hardware needs to know its own performance and share this information with the skill-selector. For the skill-selector to function properly, the performance of the hardware needs to be defined in a structured way. To do this, it is decided that every piece of hardware needs to have its own configuration file. In this file, a wide range of quantifiable performance indices are stored. Because one robot can have multiple cameras, the performance can be split into a robot-specific part and one or more camera parts.<br />
<center>[[File:HCR.png|thumb|center|500px|Hardware configuration]]</center><br />
===Skill HCR===<br />
Some skills might require some capability of the hardware (HCR) in order to perform as desired. To make it clear to the skill-matcher what these requirements are, each skill is prefixed with an HCR. This HCR should give bounds on the performance indicators in the configuration files. For instance; “to execute skill 1 a piece of hardware is required which can move in x and y-direction with at least 5 m/s”. The skill-matcher can, in turn, look at each configuration file in order to determine what hardware can drive in x and y-direction with at least this speed. As a simple first implementation, the HCR can be implemented as a structure. This structure will consist of a robot-capability structure and a camera capability structure:<br><br />
'''Struct''' ''HCR_struct'' {RobotCap,CameraCap}<br><br />
<br />
'''Struct''' ''RobotCap''{<br><br />
DOF_matrix<br><br />
Occupancy_bound_representation<br><br />
Signal_device<br><br />
Number_of_cams<br><br />
}<br><br />
'''Struct''' ''CameraCap''{<br><br />
DOF_matrix<br><br />
Resolution<br><br />
Frame_rate<br><br />
Detection_box_representation<br><br />
}<br><br />
Each entry in the HCR structure is already discussed. The DOF-matrix holds all information about the possible degrees of freedom and their bounds in one matrix. The rows of this matrix will represent the direction of the degree of freedom. The first column will say whether the robot has this DOF and the other columns will give upper and lower bounds. One example is given below:<br><br />
<center>[[File:skillHCR.png|thumb|center|500px|Skill-HCR]]</center><br />
This could reflect a robot which is able to move in the x- and y-direction and rotate around the ψ-angle. The movement in these directions is not limited in space, hence the infinite position bounds. The robot is able to drive with a maximum speed of 5 m/s and a maximum acceleration of 2 m/s^2 in both directions. It can rotate with a maximum speed of 8 rad/s and a maximum acceleration of 4 rad/s^2. For this example the bounds are symmetric, this need not necessarily be the case for each robot. A similar matrix can be made for every camera on the robot. <br><br />
===Hardware matching===<br />
Every skill that is implemented can be numbered, either predefined or during initialization of the system. Each hardware component and its cameras can be numbered as well. During initialization, the skill selector can check for every skill what the HCR are for that skill. It can then compare this to the configuration files of every hardware component. Based on this comparison, the hardware component can either perform the skill or not. A matrix can be constructed that stores which skill can be executed by which component. An example of such a matching matrix is given below:<br><br />
<center>[[File:HWMatching.png|thumb|center|500px|HW Matching matrix]]</center><br />
<br />
From this matrix it can be seen that there exist six skills in total and there are three agents available. Agent 1 has two camera’s which it can use to perform the entire set of skills. Agent 2 has only one camera and with this camera it can only perform skill 1 and 4. Agent 3 can perform all skills with camera 3 and can even perform some skills with multiple cameras. From this matrix it becomes clear that all skills can be performed with this set of agents. Some skills, like skill 6, can only be performed by two cameras while skill 1 is very well covered. This matrix can be used to determine if the set of agents is enough to cover all skills. It could also be used to determine which robot is most suited for which role, e.g. main referee and line referee. This matrix is constructed during initialization of the system. However, it could happen that an agent is added or subtracted from the system. Therefore, the skill selector should frequently check which agents are still in the system and, if necessary, update the matching matrix. <br><br />
===Role restirctions===<br />
If there are multiple agents in the system, it could be necessary to assign roles to each of them. This role assignment could be determined beforehand, for instance by specifying a preferred role in the configuration file of each agent. Another way to assign roles can be based on the matching matrix. Based on the skills each agent can perform the roles are divided. Some roles might impose restrictions on the agent. These restrictions can be interpreted in the same way as the HCR of the skill-framework. Where the HCR imposes some minima on the hardware requirements, the role restriction (RR) sets some maximal allowed values for the hardware. For example, the line referee is only allowed to move alongside one end of the field with a maximum linear velocity of 3 m/s. The DOF-matrix of the RR structure might look like:<br><br />
<center>[[File:roleRestriction.png|thumb|center|500px|Role restriction matrix]]</center><br />
Suppose the field is 12 meters long and 9 meters wide. The agent which is performing the role described by this RR is only able to move next to the field, with a margin of half a meter, over the entire length of the field. The maximal absolute velocity at which it is allowed to do this is 3 m/s.<br><br />
<br />
===Role matching===<br />
The RR also restricts the skills which an agent with a certain role is allowed to do. If it is decided which agent will fulfill a certain role, the matching matrix needs to be updated. This update can only remove entries in the matching matrix, since the restriction can never add functionality to the hardware. This process raises a problem: if the role assigning is done based on the matching matrix and the RR changes this matrix, the role assigning might change. This will probably call for an iterative approach to determine which agent is best fitted for a role in order to get a good coverage of all the skills.<br />
<br />
==Supervisor (and coordinator)==<br />
The supervisor is responsible for monitoring the system tasks and for coordinating the subsystems with respect to these tasks. This involves dynamically distributing subtasks amongst the subsystems in an efficient and effective manner. The operation of the supervisor can be divided into a default process flow and a set of alternative process flows. The latter are activated by a certain trigger occurring during the default process, and can be terminated by another trigger. <br><br />
The top half of Figure 1 shows the default process flow of the supervisor. For each subsystem, the process flow is divided into two sections: one (the colored rectangle) representing processes taking place on the subsystem hardware, and one (the black rectangle) representing the processes taking place on a separate processing unit. These processing units can be one and the same, or actually separated, in which case the world model data needs to be synchronized between these units.<br><br />
<center>[[File:supervisor_Flow.png|thumb|center|600px|Supervisor process flow]]</center><br />
Note that the central process in this default flow is “Update World Model”, indicating that the entire default process flow revolves around maintaining an accurate World Model (WM). Information, or lack of it, in this WM can then trigger an alternative process flow. The lower half of Figure 1 shows these alternative flows, five in total. Table 1 lists the names, descriptions and activating and terminating triggers for each alternative flow.<br />
<center>[[File:table_Supervisor.png|thumb|center|600px|Alternative flows, including descriptions and activation and termination triggers.]]</center><br />
===Supervisor Requirements===<br />
====Skills====<br />
<br />
=====Basic=====<br />
• Move drone<br><br />
• Move turtle<br><br />
• Take snapshot with top camera<br><br />
• Take snapshot with Ai-Ball<br><br />
• Take snapshot with Kinect<br><br />
• Take snapshot with omnivision<br><br />
• Whistle<br><br />
=====Advanced=====<br />
• Detect lines<br><br />
• Detect regions<br><br />
• Search ball<br><br />
• Locate ball<br><br />
• Determine whether ball is in/out<br><br />
• Search players<br><br />
• Locate players<br><br />
• Detect space between players<br><br />
• Locate drone<br><br />
• Plan paths for drone<br><br />
• Locate turtle<br><br />
• Plan paths for turtle<br><br />
<br />
====Control automata====<br />
<center>[[File:control_Automata_1.png|thumb|center|600px|]]</center><br />
<center>[[File:control_Automata_2.png|thumb|center|600px|]]</center><br />
<br />
====Observer automata====<br />
<center>[[File:Observer_Automata_1.png|thumb|center|600px|]]</center><br />
<center>[[File:Observer_Automata_2.png|thumb|center|600px|]]</center><br />
====Drone movement requirements====<br />
=====TAKEOFF=====<br />
'''&'''Battery percentage above 20%<br><br />
'''&'''Drone position is known in world model<br><br />
<br />
=====LAND=====<br />
'''||'''Battery percentage not above 20%<br><br />
'''||'''Drone is outside field<br><br />
<br />
=====(ENABLE REFERENCE TO) FLY TO HEIGHT OF 2 METER=====<br />
'''&'''Drone is (approximately) at 1 meter<br><br />
<br />
=====MOVE TOWARDS CENTER OF FIELD=====<br />
Drone is close to the outer line of the field<br><br />
<br />
====Path planning Requirements====<br />
=====PLAN PATH FOR DRONE=====<br />
Drone is (approximately) at reference height<br />
====PLAN PATH FOR TURTLE====<br />
Turtle is at the side-line<br />
<br />
====Whistle Requirements====<br />
'''||'''Ball (in snapshot) is located in region with label “out”<br><br />
'''||'''Two objects (in snapshot) are touching (and visible as an blob)<br />
<br />
====Enable/Disable detectors====<br />
=====LINE DETECTOR'''*'''=====<br />
'''&'''Lines are expected to be visible in a snapshot<br><br />
'''*'''Enabling the algorithm for detecting lines will also enable the algorithm to label the regions separated by the detected lines.<br><br />
<br />
=====COLLISION DETECTOR=====<br />
'''&'''Two players are expected to be visible in a snapshot<br><br />
<br />
=====LOCATE BALL=====<br />
'''&''' Ball position is known in world model<br><br />
<br />
=====LOCATE PLAYERS=====<br />
'''&''' All player positions are known in world model<br></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=System_Architecture_MSD16&diff=39673System Architecture MSD162017-05-04T19:18:58Z<p>Asinha: /* TAKEOFF */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
= Paradigm =<br />
There are many paradigms available for developing architecture for robotic systems. These paradigms are available in Chapter 8 in the [http://download.springer.com/static/pdf/864/bok%253A978-3-540-30301-5.pdf?originUrl=http%3A%2F%2Flink.springer.com%2Fbook%2F10.1007%2F978-3-540-30301-5&token2=exp=1490798126~acl=%2Fstatic%2Fpdf%2F864%2Fbok%25253A978-3-540-30301-5.pdf%3ForiginUrl%3Dhttp%253A%252F%252Flink.springer.com%252Fbook%252F10.1007%252F978-3-540-30301-5*~hmac=10768f14783408dd794f24107129219bf0bd18a79067a7f204b450620c0608b5 Springer Handbook of Robotics]. In this project the [http://cstwiki.wtb.tue.nl/images/20150429-EMC-TUe-CompositionPattern-nup.pdf paradigm] developed at KU Leuven in collaboration with TU Eindhoven and was used. This paradigm is followed at TechUnited. <br><br />
<center>[[File:pradigm.png|thumb|center|500px|Architecture paradigm]]</center><br />
This paradigm defines Tasks (blue block) using the objectives set with respect to the context of the project, i.e. what needs to be done. The Skills (yellow block) then define the implementation, i.e. how to these tasks will be completed. The hardware (grey block) or the robots, i.e. the ones who will complete the tasks by implementing the skills are the agents available at the system-architects’ disposal and the choice made to pick certain agents over the other is influenced by system requirements. The system requirements are influenced by a number of factors especially the objectives and the context of the project. Coming back to the agents, they gather information from the environment and this information is first processed (filtered, fused etc.) and then stored in the world model (green block) which allows it to be accessed afterwards. Finally the visualization or the user interface (orange block) is a tool to observe how the system sees the environment. <br><br />
In the task block, a sub-block is the Task Monitor (highlighted in red within the Task context block). This block was interpreted as a supervisor and a coordinator. It keeps an eye on the tasks and skills. It is possible to have some tasks which are a completed by a series of skills. In such a case it becomes important to schedule skills as these could either have to be executed sequentially or parallelly. It might become important for the system to track which skills have been completed and which need to be executed next. Task control feedback/ feedforward was not taken into consideration in much detail in the system architecture that was developed for this project. <br><br />
<center>[[File:architecture_1.png|thumb|center|600px|Architecture]]</center><br />
<br />
= Layered Approach =<br />
<br />
The system architecture derived from the concepts presented in the above-described paradigm can be seen below. Every block but the GUI context is highlighted in this architecture. The Coordinator block interacts with the Supervisor to determine which skills need to be executed and is assisted by the Skill Matcher in picking the right hardware agent best suited to execute the necessary skill. The concept behind the skill matcher is explained in a later section. <br><br />
<center>[[File:architecture_1.png|thumb|center|600px|Architecture]]</center><br />
While developing the architecture, a layered approach was used. Based on the objectives, and the requirements/constraints imposed by them, design choices were made. As shown in the Figure below starting from the top, with each subsequent layer the level of detail increases. <br><br />
<center>[[File:layered_Approach.png|thumb|center|450px|The three layers in the architecture]]</center><br />
<br />
= System Layer =<br />
Detecting predefined events and enforcing the corresponding rules were the two main system objectives that could be taken into account for this architecture. Two events, namely<br><br />
1. ball going out of pitch (B.O.O.P) and<br><br />
2. collision between players<br><br />
were considered to be in scope of this project. For detection and enforcement of these rules the Tasks and Skills blocks were filled as shown in the figure below.<br><br />
<center>[[File:Layer1.png|thumb|center|750px|System layer of the architecture]]</center><br />
= Approach Layer =<br />
Further, by taking into account some of the system requirements one can bring more tasks and skills into the scope. For example, if one of the requirement is ‘consistency’, it can be defined as, ‘the autonomous referee system should be able to capture the gameplay dynamically’. This requirement might be further refined and result in the decision, ‘the hardware must have multiple cameras in the field which allow effective capture the active gameplay ’. Further, this decision could be polished into having ‘multiple movable cameras in the field which allow effective capture the active gameplay ’. The current layer in the architecture represents an evolution towards implementation. This is reflected in the narrowing down of the elements in the Hardware Block from Sensors and/or devices to multiple cameras which could move. Though this statement is still vague, it is more refined than its predecessor.<br />
<center>[[File:Layered2.png|thumb|center|750px|The approach layer of the architecture]]</center><br />
<br />
= Implementation Layer =<br />
Referring back to the ''Three layers in the architecture'' diagram as the requirements were further refined, several design choices were made on the software and hardware. These choices were influenced not only by requirements but also by constraints. For example, using a drone and the TechUnited turtle were imposed on the project. These did not come into the picture in the previous layers. The current layer of the architecture is the Implementation Layer as depicted in the figure.<br><br />
<center>[[File:Layer3.png|thumb|center|700px|Implementation layer in the architecture]]</center><br />
At this level of the architecture, one can write the software. The major decisions that were made here were regarding the software such as specific: <br><br />
1. image processing algorithms (=skills), <br><br />
2. sensor fusion algorithms (in the world model) and<br><br />
3. communication protocols (communication between different hardware components).<br><br />
This layer is also the realistic representation of the actual implementation the project team ended up with i.e. only the detection based tasks (and the corresponding skills).<br> Below is the screen-shot of the Simulink implementation. What is highlighted here is the fact that the software developed in this project had a proper correspondence with the paradigm used of the system architecture. The congruity in the colors of the blocks here and the ones that have shown earlier in the description of the paradigm is evident. <br><br />
<br />
<center>[[File:Simulink_Arch.png|thumb|center|700px|Simulink implementation]]</center><br />
<br />
The Supervisor, coordinator and the skill-matcher were not in the final implementation. Bu their conceptual inception is presented here and can be further worked on and included in future implementations. <br><br />
<br />
==Skill-Matcher==<br />
For each defined skill, the skill-selector needs to determine which piece of hardware is able to perform this skill. Therefore the capabilities of each piece of hardware needs to be known and registered somewhere in a predefined standard format. These hardware capabilities can be compared to the hardware capability requirements (HCR; required in the skill framework) of each skill. This matching between hardware and skill can be done by the skill-selector during initialization of the system. To do this in a smart way, the description of the hardware configuration and the HCR need to be in a similar format. One possible way of doing this will be discussed below.<br><br />
<br />
===Hardware configuration files===<br />
On the approach level of the system architecture, it was decided to use moving robots with camera’s to perform the system tasks. Different robots might have different capabilities. This should not hinder the system from operating correctly. It should preferably even be possible to swap robots without losing system functionality. This is why the hardware is abstracted in the architecture. Each piece of hardware needs to know its own performance and share this information with the skill-selector. For the skill-selector to function properly, the performance of the hardware needs to be defined in a structured way. To do this, it is decided that every piece of hardware needs to have its own configuration file. In this file, a wide range of quantifiable performance indices are stored. Because one robot can have multiple cameras, the performance can be split into a robot-specific part and one or more camera parts.<br />
<center>[[File:HCR.png|thumb|center|500px|Hardware configuration]]</center><br />
===Skill HCR===<br />
Some skills might require some capability of the hardware (HCR) in order to perform as desired. To make it clear to the skill-matcher what these requirements are, each skill is prefixed with an HCR. This HCR should give bounds on the performance indicators in the configuration files. For instance; “to execute skill 1 a piece of hardware is required which can move in x and y-direction with at least 5 m/s”. The skill-matcher can, in turn, look at each configuration file in order to determine what hardware can drive in x and y-direction with at least this speed. As a simple first implementation, the HCR can be implemented as a structure. This structure will consist of a robot-capability structure and a camera capability structure:<br><br />
'''Struct''' ''HCR_struct'' {RobotCap,CameraCap}<br><br />
<br />
'''Struct''' ''RobotCap''{<br><br />
DOF_matrix<br><br />
Occupancy_bound_representation<br><br />
Signal_device<br><br />
Number_of_cams<br><br />
}<br><br />
'''Struct''' ''CameraCap''{<br><br />
DOF_matrix<br><br />
Resolution<br><br />
Frame_rate<br><br />
Detection_box_representation<br><br />
}<br><br />
Each entry in the HCR structure is already discussed. The DOF-matrix holds all information about the possible degrees of freedom and their bounds in one matrix. The rows of this matrix will represent the direction of the degree of freedom. The first column will say whether the robot has this DOF and the other columns will give upper and lower bounds. One example is given below:<br><br />
<center>[[File:skillHCR.png|thumb|center|500px|Skill-HCR]]</center><br />
This could reflect a robot which is able to move in the x- and y-direction and rotate around the ψ-angle. The movement in these directions is not limited in space, hence the infinite position bounds. The robot is able to drive with a maximum speed of 5 m/s and a maximum acceleration of 2 m/s^2 in both directions. It can rotate with a maximum speed of 8 rad/s and a maximum acceleration of 4 rad/s^2. For this example the bounds are symmetric, this need not necessarily be the case for each robot. A similar matrix can be made for every camera on the robot. <br><br />
===Hardware matching===<br />
Every skill that is implemented can be numbered, either predefined or during initialization of the system. Each hardware component and its cameras can be numbered as well. During initialization, the skill selector can check for every skill what the HCR are for that skill. It can then compare this to the configuration files of every hardware component. Based on this comparison, the hardware component can either perform the skill or not. A matrix can be constructed that stores which skill can be executed by which component. An example of such a matching matrix is given below:<br><br />
<center>[[File:HWMatching.png|thumb|center|500px|HW Matching matrix]]</center><br />
<br />
From this matrix it can be seen that there exist six skills in total and there are three agents available. Agent 1 has two camera’s which it can use to perform the entire set of skills. Agent 2 has only one camera and with this camera it can only perform skill 1 and 4. Agent 3 can perform all skills with camera 3 and can even perform some skills with multiple cameras. From this matrix it becomes clear that all skills can be performed with this set of agents. Some skills, like skill 6, can only be performed by two cameras while skill 1 is very well covered. This matrix can be used to determine if the set of agents is enough to cover all skills. It could also be used to determine which robot is most suited for which role, e.g. main referee and line referee. This matrix is constructed during initialization of the system. However, it could happen that an agent is added or subtracted from the system. Therefore, the skill selector should frequently check which agents are still in the system and, if necessary, update the matching matrix. <br><br />
===Role restirctions===<br />
If there are multiple agents in the system, it could be necessary to assign roles to each of them. This role assignment could be determined beforehand, for instance by specifying a preferred role in the configuration file of each agent. Another way to assign roles can be based on the matching matrix. Based on the skills each agent can perform the roles are divided. Some roles might impose restrictions on the agent. These restrictions can be interpreted in the same way as the HCR of the skill-framework. Where the HCR imposes some minima on the hardware requirements, the role restriction (RR) sets some maximal allowed values for the hardware. For example, the line referee is only allowed to move alongside one end of the field with a maximum linear velocity of 3 m/s. The DOF-matrix of the RR structure might look like:<br><br />
<center>[[File:roleRestriction.png|thumb|center|500px|Role restriction matrix]]</center><br />
Suppose the field is 12 meters long and 9 meters wide. The agent which is performing the role described by this RR is only able to move next to the field, with a margin of half a meter, over the entire length of the field. The maximal absolute velocity at which it is allowed to do this is 3 m/s.<br><br />
<br />
===Role matching===<br />
The RR also restricts the skills which an agent with a certain role is allowed to do. If it is decided which agent will fulfill a certain role, the matching matrix needs to be updated. This update can only remove entries in the matching matrix, since the restriction can never add functionality to the hardware. This process raises a problem: if the role assigning is done based on the matching matrix and the RR changes this matrix, the role assigning might change. This will probably call for an iterative approach to determine which agent is best fitted for a role in order to get a good coverage of all the skills.<br />
<br />
==Supervisor (and coordinator)==<br />
The supervisor is responsible for monitoring the system tasks and for coordinating the subsystems with respect to these tasks. This involves dynamically distributing subtasks amongst the subsystems in an efficient and effective manner. The operation of the supervisor can be divided into a default process flow and a set of alternative process flows. The latter are activated by a certain trigger occurring during the default process, and can be terminated by another trigger. <br><br />
The top half of Figure 1 shows the default process flow of the supervisor. For each subsystem, the process flow is divided into two sections: one (the colored rectangle) representing processes taking place on the subsystem hardware, and one (the black rectangle) representing the processes taking place on a separate processing unit. These processing units can be one and the same, or actually separated, in which case the world model data needs to be synchronized between these units.<br><br />
<center>[[File:supervisor_Flow.png|thumb|center|600px|Supervisor process flow]]</center><br />
Note that the central process in this default flow is “Update World Model”, indicating that the entire default process flow revolves around maintaining an accurate World Model (WM). Information, or lack of it, in this WM can then trigger an alternative process flow. The lower half of Figure 1 shows these alternative flows, five in total. Table 1 lists the names, descriptions and activating and terminating triggers for each alternative flow.<br />
<center>[[File:table_Supervisor.png|thumb|center|600px|Alternative flows, including descriptions and activation and termination triggers.]]</center><br />
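The general switching mechanism can be sketched as follows. The flow and trigger names below are placeholders (the actual five alternative flows and their activation and termination triggers are those of Table 1); the sketch only illustrates a default flow that is pre-empted when an activating trigger fires and resumed when the terminating trigger fires.<br><br />
<pre>
% Rough sketch of the supervisor switching between the default flow and one
% alternative flow; flow and trigger names are placeholders for Table 1.
wm.ballKnown = false;                 % toy world-model flag
activeFlow   = 'default';

for k = 1:10                          % supervisor cycles
    switch activeFlow
        case 'default'
            % Default flow: update the world model from the latest snapshots.
            if ~wm.ballKnown          % activating trigger: ball position unknown
                activeFlow = 'searchBall';
            end
        case 'searchBall'
            % Alternative flow: command agents to search for the ball.
            if k > 5                  % stand-in terminating trigger: ball found
                wm.ballKnown = true;
                activeFlow  = 'default';
            end
    end
    fprintf('cycle %2d: active flow = %s\n', k, activeFlow);
end
</pre>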
===Supervisor Requirements===<br />
====Skills====<br />
<br />
=====Basic=====<br />
• Move drone<br><br />
• Move turtle<br><br />
• Take snapshot with top camera<br><br />
• Take snapshot with Ai-Ball<br><br />
• Take snapshot with Kinect<br><br />
• Take snapshot with omnivision<br><br />
• Whistle<br><br />
=====Advanced=====<br />
• Detect lines<br><br />
• Detect regions<br><br />
• Search ball<br><br />
• Locate ball<br><br />
• Determine whether ball is in/out<br><br />
• Search players<br><br />
• Locate players<br><br />
• Detect space between players<br><br />
• Locate drone<br><br />
• Plan paths for drone<br><br />
• Locate turtle<br><br />
• Plan paths for turtle<br><br />
<br />
====Control automata====<br />
<center>[[File:control_Automata_1.png|thumb|center|600px|]]</center><br />
<center>[[File:control_Automata_2.png|thumb|center|600px|]]</center><br />
<br />
====Observer automata====<br />
<center>[[File:Observer_Automata_1.png|thumb|center|600px|]]</center><br />
<center>[[File:Observer_Automata_2.png|thumb|center|600px|]]</center><br />
====Drone movement requirements====<br />
=====TAKEOFF=====<br />
'''&'''Battery percentage above 20%<br><br />
'''&'''Drone position is known in world model<br><br />
<br />
=====LAND=====<br />
'''||'''Battery percentage not above 20%<br><br />
'''||'''Drone is outside field<br><br />
<br />
=====(ENABLE REFERENCE TO) FLY TO HEIGHT OF 2 METERS=====<br />
Drone is (approximately) at 1 meter<br><br />
=====MOVE TOWARDS CENTER OF FIELD=====<br />
Drone is close to the outer line of the field<br><br />
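The '''&''' and '''||''' prefixes in these requirement lists denote conjunction and disjunction of the listed conditions. A minimal sketch of how such guards could be evaluated from the world model is given below; the field names are assumptions made for this example, and the same pattern applies to the path-planning, whistle and detector conditions that follow.<br><br />
<pre>
% Minimal sketch: evaluate the drone movement guards from world-model data.
% Field names (batteryPct, dronePosKnown, droneInField, droneHeight) are
% assumptions chosen for this example.
wm = struct('batteryPct', 45, 'dronePosKnown', true, ...
            'droneInField', true, 'droneHeight', 1.02);

canTakeOff = (wm.batteryPct > 20) && wm.dronePosKnown;    % TAKEOFF: all conditions (&)
mustLand   = (wm.batteryPct <= 20) || ~wm.droneInField;   % LAND: any condition (||)

% Enable the 2 m height reference once the drone is approximately at 1 m
enableHeightRef = abs(wm.droneHeight - 1.0) < 0.1;

fprintf('takeoff: %d, land: %d, enable 2 m reference: %d\n', ...
        canTakeOff, mustLand, enableHeightRef);
</pre>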
<br />
====Path planning Requirements====<br />
=====PLAN PATH FOR DRONE=====<br />
Drone is (approximately) at reference height<br />
=====PLAN PATH FOR TURTLE=====<br />
Turtle is at the side-line<br />
<br />
====Whistle Requirements====<br />
'''||'''Ball (in snapshot) is located in region with label “out”<br><br />
'''||'''Two objects (in snapshot) are touching (and visible as one blob)<br />
<br />
====Enable/Disable detectors====<br />
=====LINE DETECTOR'''*'''=====<br />
'''&'''Lines are expected to be visible in a snapshot<br><br />
'''*'''Enabling the algorithm for detecting lines will also enable the algorithm to label the regions separated by the detected lines.<br><br />
<br />
=====COLLISION DETECTOR=====<br />
'''&'''Two players are expected to be visible in a snapshot<br><br />
<br />
=====LOCATE BALL=====<br />
'''&''' Ball position is known in world model<br><br />
<br />
=====LOCATE PLAYERS=====<br />
'''&''' All player positions are known in world model<br></div>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
= Paradigm =<br />
There are many paradigms available for developing architecture for robotic systems. These paradigms are available in Chapter 8 in the [http://download.springer.com/static/pdf/864/bok%253A978-3-540-30301-5.pdf?originUrl=http%3A%2F%2Flink.springer.com%2Fbook%2F10.1007%2F978-3-540-30301-5&token2=exp=1490798126~acl=%2Fstatic%2Fpdf%2F864%2Fbok%25253A978-3-540-30301-5.pdf%3ForiginUrl%3Dhttp%253A%252F%252Flink.springer.com%252Fbook%252F10.1007%252F978-3-540-30301-5*~hmac=10768f14783408dd794f24107129219bf0bd18a79067a7f204b450620c0608b5 Springer Handbook of Robotics]. In this project the [http://cstwiki.wtb.tue.nl/images/20150429-EMC-TUe-CompositionPattern-nup.pdf paradigm] developed at KU Leuven in collaboration with TU Eindhoven and was used. This paradigm is followed at TechUnited. <br><br />
<center>[[File:pradigm.png|thumb|center|500px|Architecture paradigm]]</center><br />
This paradigm defines Tasks (blue block) using the objectives set with respect to the context of the project, i.e. what needs to be done. The Skills (yellow block) then define the implementation, i.e. how to these tasks will be completed. The hardware (grey block) or the robots, i.e. the ones who will complete the tasks by implementing the skills are the agents available at the system-architects’ disposal and the choice made to pick certain agents over the other is influenced by system requirements. The system requirements are influenced by a number of factors especially the objectives and the context of the project. Coming back to the agents, they gather information from the environment and this information is first processed (filtered, fused etc.) and then stored in the world model (green block) which allows it to be accessed afterwards. Finally the visualization or the user interface (orange block) is a tool to observe how the system sees the environment. <br><br />
In the task block, a sub-block is the Task Monitor (highlighted in red within the Task context block). This block was interpreted as a supervisor and a coordinator. It keeps an eye on the tasks and skills. It is possible to have some tasks which are a completed by a series of skills. In such a case it becomes important to schedule skills as these could either have to be executed sequentially or parallelly. It might become important for the system to track which skills have been completed and which need to be executed next. Task control feedback/ feedforward was not taken into consideration in much detail in the system architecture that was developed for this project. <br><br />
<center>[[File:architecture_1.png|thumb|center|600px|Architecture]]</center><br />
<br />
= Layered Approach =<br />
<br />
The system architecture derived from the concepts presented in the above-described paradigm can be seen below. Every block but the GUI context is highlighted in this architecture. The Coordinator block interacts with the Supervisor to determine which skills need to be executed and is assisted by the Skill Matcher in picking the right hardware agent best suited to execute the necessary skill. The concept behind the skill matcher is explained in a later section. <br><br />
<center>[[File:architecture_1.png|thumb|center|600px|Architecture]]</center><br />
While developing the architecture, a layered approach was used. Based on the objectives, and the requirements/constraints imposed by them, design choices were made. As shown in the Figure below starting from the top, with each subsequent layer the level of detail increases. <br><br />
<center>[[File:layered_Approach.png|thumb|center|450px|The three layers in the architecture]]</center><br />
<br />
= System Layer =<br />
Detecting predefined events and enforcing the corresponding rules were the two main system objectives that could be taken into account for this architecture. Two events, namely<br><br />
1. ball going out of pitch (B.O.O.P) and<br><br />
2. collision between players<br><br />
were considered to be in scope of this project. For detection and enforcement of these rules the Tasks and Skills blocks were filled as shown in the figure below.<br><br />
<center>[[File:Layer1.png|thumb|center|750px|System layer of the architecture]]</center><br />
= Approach Layer =<br />
Further, by taking into account some of the system requirements one can bring more tasks and skills into the scope. For example, if one of the requirement is ‘consistency’, it can be defined as, ‘the autonomous referee system should be able to capture the gameplay dynamically’. This requirement might be further refined and result in the decision, ‘the hardware must have multiple cameras in the field which allow effective capture the active gameplay ’. Further, this decision could be polished into having ‘multiple movable cameras in the field which allow effective capture the active gameplay ’. The current layer in the architecture represents an evolution towards implementation. This is reflected in the narrowing down of the elements in the Hardware Block from Sensors and/or devices to multiple cameras which could move. Though this statement is still vague, it is more refined than its predecessor.<br />
<center>[[File:Layered2.png|thumb|center|750px|The approach layer of the architecture]]</center><br />
<br />
= Implementation Layer =<br />
Referring back to the ''Three layers in the architecture'' diagram as the requirements were further refined, several design choices were made on the software and hardware. These choices were influenced not only by requirements but also by constraints. For example, using a drone and the TechUnited turtle were imposed on the project. These did not come into the picture in the previous layers. The current layer of the architecture is the Implementation Layer as depicted in the figure.<br><br />
<center>[[File:Layer3.png|thumb|center|700px|Implementation layer in the architecture]]</center><br />
At this level of the architecture, one can write the software. The major decisions that were made here were regarding the software such as specific: <br><br />
1. image processing algorithms (=skills), <br><br />
2. sensor fusion algorithms (in the world model) and<br><br />
3. communication protocols (communication between different hardware components).<br><br />
This layer is also the realistic representation of the actual implementation the project team ended up with i.e. only the detection based tasks (and the corresponding skills).<br> Below is the screen-shot of the Simulink implementation. What is highlighted here is the fact that the software developed in this project had a proper correspondence with the paradigm used of the system architecture. The congruity in the colors of the blocks here and the ones that have shown earlier in the description of the paradigm is evident. <br><br />
<br />
<center>[[File:Simulink_Arch.png|thumb|center|700px|Simulink implementation]]</center><br />
<br />
The Supervisor, coordinator and the skill-matcher were not in the final implementation. Bu their conceptual inception is presented here and can be further worked on and included in future implementations. <br><br />
<br />
==Skill-Matcher==<br />
For each defined skill, the skill-selector needs to determine which piece of hardware is able to perform this skill. Therefore the capabilities of each piece of hardware needs to be known and registered somewhere in a predefined standard format. These hardware capabilities can be compared to the hardware capability requirements (HCR; required in the skill framework) of each skill. This matching between hardware and skill can be done by the skill-selector during initialization of the system. To do this in a smart way, the description of the hardware configuration and the HCR need to be in a similar format. One possible way of doing this will be discussed below.<br><br />
<br />
===Hardware configuration files===<br />
On the approach level of the system architecture, it was decided to use moving robots with camera’s to perform the system tasks. Different robots might have different capabilities. This should not hinder the system from operating correctly. It should preferably even be possible to swap robots without losing system functionality. This is why the hardware is abstracted in the architecture. Each piece of hardware needs to know its own performance and share this information with the skill-selector. For the skill-selector to function properly, the performance of the hardware needs to be defined in a structured way. To do this, it is decided that every piece of hardware needs to have its own configuration file. In this file, a wide range of quantifiable performance indices are stored. Because one robot can have multiple cameras, the performance can be split into a robot-specific part and one or more camera parts.<br />
<center>[[File:HCR.png|thumb|center|500px|Hardware configuration]]</center><br />
===Skill HCR===<br />
Some skills might require some capability of the hardware (HCR) in order to perform as desired. To make it clear to the skill-matcher what these requirements are, each skill is prefixed with an HCR. This HCR should give bounds on the performance indicators in the configuration files. For instance; “to execute skill 1 a piece of hardware is required which can move in x and y-direction with at least 5 m/s”. The skill-matcher can, in turn, look at each configuration file in order to determine what hardware can drive in x and y-direction with at least this speed. As a simple first implementation, the HCR can be implemented as a structure. This structure will consist of a robot-capability structure and a camera capability structure:<br><br />
'''Struct''' ''HCR_struct'' {RobotCap,CameraCap}<br><br />
<br />
'''Struct''' ''RobotCap''{<br><br />
DOF_matrix<br><br />
Occupancy_bound_representation<br><br />
Signal_device<br><br />
Number_of_cams<br><br />
}<br><br />
'''Struct''' ''CameraCap''{<br><br />
DOF_matrix<br><br />
Resolution<br><br />
Frame_rate<br><br />
Detection_box_representation<br><br />
}<br><br />
Each entry in the HCR structure is already discussed. The DOF-matrix holds all information about the possible degrees of freedom and their bounds in one matrix. The rows of this matrix will represent the direction of the degree of freedom. The first column will say whether the robot has this DOF and the other columns will give upper and lower bounds. One example is given below:<br><br />
<center>[[File:skillHCR.png|thumb|center|500px|Skill-HCR]]</center><br />
This could reflect a robot which is able to move in the x- and y-direction and rotate around the ψ-angle. The movement in these directions is not limited in space, hence the infinite position bounds. The robot is able to drive with a maximum speed of 5 m/s and a maximum acceleration of 2 m/s^2 in both directions. It can rotate with a maximum speed of 8 rad/s and a maximum acceleration of 4 rad/s^2. For this example the bounds are symmetric, this need not necessarily be the case for each robot. A similar matrix can be made for every camera on the robot. <br><br />
===Hardware matching===<br />
Every skill that is implemented can be numbered, either predefined or during initialization of the system. Each hardware component and its cameras can be numbered as well. During initialization, the skill selector can check for every skill what the HCR are for that skill. It can then compare this to the configuration files of every hardware component. Based on this comparison, the hardware component can either perform the skill or not. A matrix can be constructed that stores which skill can be executed by which component. An example of such a matching matrix is given below:<br><br />
<center>[[File:HWMatching.png|thumb|center|500px|HW Matching matrix]]</center><br />
<br />
From this matrix it can be seen that there exist six skills in total and there are three agents available. Agent 1 has two camera’s which it can use to perform the entire set of skills. Agent 2 has only one camera and with this camera it can only perform skill 1 and 4. Agent 3 can perform all skills with camera 3 and can even perform some skills with multiple cameras. From this matrix it becomes clear that all skills can be performed with this set of agents. Some skills, like skill 6, can only be performed by two cameras while skill 1 is very well covered. This matrix can be used to determine if the set of agents is enough to cover all skills. It could also be used to determine which robot is most suited for which role, e.g. main referee and line referee. This matrix is constructed during initialization of the system. However, it could happen that an agent is added or subtracted from the system. Therefore, the skill selector should frequently check which agents are still in the system and, if necessary, update the matching matrix. <br><br />
===Role restirctions===<br />
If there are multiple agents in the system, it could be necessary to assign roles to each of them. This role assignment could be determined beforehand, for instance by specifying a preferred role in the configuration file of each agent. Another way to assign roles can be based on the matching matrix. Based on the skills each agent can perform the roles are divided. Some roles might impose restrictions on the agent. These restrictions can be interpreted in the same way as the HCR of the skill-framework. Where the HCR imposes some minima on the hardware requirements, the role restriction (RR) sets some maximal allowed values for the hardware. For example, the line referee is only allowed to move alongside one end of the field with a maximum linear velocity of 3 m/s. The DOF-matrix of the RR structure might look like:<br><br />
<center>[[File:roleRestriction.png|thumb|center|500px|Role restriction matrix]]</center><br />
Suppose the field is 12 meters long and 9 meters wide. The agent which is performing the role described by this RR is only able to move next to the field, with a margin of half a meter, over the entire length of the field. The maximal absolute velocity at which it is allowed to do this is 3 m/s.<br><br />
<br />
===Role matching===<br />
The RR also restricts the skills which an agent with a certain role is allowed to do. If it is decided which agent will fulfill a certain role, the matching matrix needs to be updated. This update can only remove entries in the matching matrix, since the restriction can never add functionality to the hardware. This process raises a problem: if the role assigning is done based on the matching matrix and the RR changes this matrix, the role assigning might change. This will probably call for an iterative approach to determine which agent is best fitted for a role in order to get a good coverage of all the skills.<br />
<br />
==Supervisor (and coordinator)==<br />
The supervisor is responsible for monitoring the system tasks and for coordinating the subsystems with respect to these tasks. This involves dynamically distributing subtasks amongst the subsystems in an efficient and effective manner. The operation of the supervisor can be divided into a default process flow and a set of alternative process flows. The latter are activated by a certain trigger occurring during the default process, and can be terminated by another trigger. <br><br />
The top half of Figure 1 shows the default process flow of the supervisor. For each subsystem, the process flow is divided into two sections: one (the colored rectangle) representing processes taking place on the subsystem hardware, and one (the black rectangle) representing the processes taking place on a separate processing unit. These processing units can be one and the same, or actually separated, in which case the world model data needs to be synchronized between these units.<br><br />
<center>[[File:supervisor_Flow.png|thumb|center|600px|Supervisor process flow]]</center><br />
Note that the central process in this default flow is “Update World Model”, indicating that the entire default process flow revolves around maintaining an accurate World Model (WM). Information, or lack of it, in this WM can then trigger an alternative process flow. The lower half of Figure 1 shows these alternative flows, five in total. Table 1 lists the names, descriptions and activating and terminating triggers for each alternative flow.<br />
<center>[[File:table_Supervisor.png|thumb|center|600px|Alternative flows, including descriptions and activation and termination triggers.]]</center><br />
===Supervisor Requirements===<br />
====Skills====<br />
<br />
=====Basic=====<br />
• Move drone<br><br />
• Move turtle<br><br />
• Take snapshot with top camera<br><br />
• Take snapshot with Ai-Ball<br><br />
• Take snapshot with Kinect<br><br />
• Take snapshot with omnivision<br><br />
• Whistle<br><br />
=====Advanced=====<br />
• Detect lines<br><br />
• Detect regions<br><br />
• Search ball<br><br />
• Locate ball<br><br />
• Determine whether ball is in/out<br><br />
• Search players<br><br />
• Locate players<br><br />
• Detect space between players<br><br />
• Locate drone<br><br />
• Plan paths for drone<br><br />
• Locate turtle<br><br />
• Plan paths for turtle<br><br />
<br />
====Control automata====<br />
<center>[[File:control_Automata_1.png|thumb|center|600px|]]</center><br />
<center>[[File:control_Automata_2.png|thumb|center|600px|]]</center><br />
<br />
====Observer automata====<br />
<center>[[File:Observer_Automata_1.png|thumb|center|600px|]]</center><br />
<center>[[File:Observer_Automata_2.png|thumb|center|600px|]]</center><br />
====Drone movement requirements====<br />
=====TAKEOFF=====<br />
Battery percentage above 20%<br><br />
Drone position is known in world model<br><br />
=====LAND=====<br />
'''||'''Battery percentage not above 20%<br><br />
'''||'''Drone is outside field<br><br />
<br />
=====(ENABLE REFERENCE TO) FLY TO HEIGHT OF 2 METER=====<br />
Drone is (approximately) at 1 meter<br><br />
=====MOVE TOWARDS CENTER OF FIELD=====<br />
Drone is close to the outer line of the field<br><br />
<br />
====Path planning Requirements====<br />
=====PLAN PATH FOR DRONE=====<br />
Drone is (approximately) at reference height<br />
====PLAN PATH FOR TURTLE====<br />
Turtle is at the side-line<br />
<br />
====Whistle Requirements====<br />
'''||'''Ball (in snapshot) is located in region with label “out”<br><br />
'''||'''Two objects (in snapshot) are touching (and visible as an blob)<br />
<br />
====Enable/Disable detectors====<br />
=====LINE DETECTOR'''*'''=====<br />
'''&'''Lines are expected to be visible in a snapshot<br><br />
'''*'''Enabling the algorithm for detecting lines will also enable the algorithm to label the regions separated by the detected lines.<br><br />
<br />
=====COLLISION DETECTOR=====<br />
'''&'''Two players are expected to be visible in a snapshot<br><br />
<br />
=====LOCATE BALL=====<br />
'''&''' Ball position is known in world model<br><br />
<br />
=====LOCATE PLAYERS=====<br />
'''&''' All player positions are known in world model<br></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=System_Architecture_MSD16&diff=39671System Architecture MSD162017-05-04T19:16:56Z<p>Asinha: /* Whistle Requirements */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
= Paradigm =<br />
There are many paradigms available for developing architecture for robotic systems. These paradigms are available in Chapter 8 in the [http://download.springer.com/static/pdf/864/bok%253A978-3-540-30301-5.pdf?originUrl=http%3A%2F%2Flink.springer.com%2Fbook%2F10.1007%2F978-3-540-30301-5&token2=exp=1490798126~acl=%2Fstatic%2Fpdf%2F864%2Fbok%25253A978-3-540-30301-5.pdf%3ForiginUrl%3Dhttp%253A%252F%252Flink.springer.com%252Fbook%252F10.1007%252F978-3-540-30301-5*~hmac=10768f14783408dd794f24107129219bf0bd18a79067a7f204b450620c0608b5 Springer Handbook of Robotics]. In this project the [http://cstwiki.wtb.tue.nl/images/20150429-EMC-TUe-CompositionPattern-nup.pdf paradigm] developed at KU Leuven in collaboration with TU Eindhoven and was used. This paradigm is followed at TechUnited. <br><br />
<center>[[File:pradigm.png|thumb|center|500px|Architecture paradigm]]</center><br />
This paradigm defines Tasks (blue block) using the objectives set with respect to the context of the project, i.e. what needs to be done. The Skills (yellow block) then define the implementation, i.e. how to these tasks will be completed. The hardware (grey block) or the robots, i.e. the ones who will complete the tasks by implementing the skills are the agents available at the system-architects’ disposal and the choice made to pick certain agents over the other is influenced by system requirements. The system requirements are influenced by a number of factors especially the objectives and the context of the project. Coming back to the agents, they gather information from the environment and this information is first processed (filtered, fused etc.) and then stored in the world model (green block) which allows it to be accessed afterwards. Finally the visualization or the user interface (orange block) is a tool to observe how the system sees the environment. <br><br />
In the task block, a sub-block is the Task Monitor (highlighted in red within the Task context block). This block was interpreted as a supervisor and a coordinator. It keeps an eye on the tasks and skills. It is possible to have some tasks which are a completed by a series of skills. In such a case it becomes important to schedule skills as these could either have to be executed sequentially or parallelly. It might become important for the system to track which skills have been completed and which need to be executed next. Task control feedback/ feedforward was not taken into consideration in much detail in the system architecture that was developed for this project. <br><br />
<center>[[File:architecture_1.png|thumb|center|600px|Architecture]]</center><br />
<br />
= Layered Approach =<br />
<br />
The system architecture derived from the concepts presented in the above-described paradigm can be seen below. Every block but the GUI context is highlighted in this architecture. The Coordinator block interacts with the Supervisor to determine which skills need to be executed and is assisted by the Skill Matcher in picking the right hardware agent best suited to execute the necessary skill. The concept behind the skill matcher is explained in a later section. <br><br />
<center>[[File:architecture_1.png|thumb|center|600px|Architecture]]</center><br />
While developing the architecture, a layered approach was used. Based on the objectives, and the requirements/constraints imposed by them, design choices were made. As shown in the Figure below starting from the top, with each subsequent layer the level of detail increases. <br><br />
<center>[[File:layered_Approach.png|thumb|center|450px|The three layers in the architecture]]</center><br />
<br />
= System Layer =<br />
Detecting predefined events and enforcing the corresponding rules were the two main system objectives that could be taken into account for this architecture. Two events, namely<br><br />
1. ball going out of pitch (B.O.O.P) and<br><br />
2. collision between players<br><br />
were considered to be in scope of this project. For detection and enforcement of these rules the Tasks and Skills blocks were filled as shown in the figure below.<br><br />
<center>[[File:Layer1.png|thumb|center|750px|System layer of the architecture]]</center><br />
= Approach Layer =<br />
Further, by taking into account some of the system requirements one can bring more tasks and skills into the scope. For example, if one of the requirement is ‘consistency’, it can be defined as, ‘the autonomous referee system should be able to capture the gameplay dynamically’. This requirement might be further refined and result in the decision, ‘the hardware must have multiple cameras in the field which allow effective capture the active gameplay ’. Further, this decision could be polished into having ‘multiple movable cameras in the field which allow effective capture the active gameplay ’. The current layer in the architecture represents an evolution towards implementation. This is reflected in the narrowing down of the elements in the Hardware Block from Sensors and/or devices to multiple cameras which could move. Though this statement is still vague, it is more refined than its predecessor.<br />
<center>[[File:Layered2.png|thumb|center|750px|The approach layer of the architecture]]</center><br />
<br />
= Implementation Layer =<br />
Referring back to the ''Three layers in the architecture'' diagram as the requirements were further refined, several design choices were made on the software and hardware. These choices were influenced not only by requirements but also by constraints. For example, using a drone and the TechUnited turtle were imposed on the project. These did not come into the picture in the previous layers. The current layer of the architecture is the Implementation Layer as depicted in the figure.<br><br />
<center>[[File:Layer3.png|thumb|center|700px|Implementation layer in the architecture]]</center><br />
At this level of the architecture, one can write the software. The major decisions that were made here were regarding the software such as specific: <br><br />
1. image processing algorithms (=skills), <br><br />
2. sensor fusion algorithms (in the world model) and<br><br />
3. communication protocols (communication between different hardware components).<br><br />
This layer is also the realistic representation of the actual implementation the project team ended up with i.e. only the detection based tasks (and the corresponding skills).<br> Below is the screen-shot of the Simulink implementation. What is highlighted here is the fact that the software developed in this project had a proper correspondence with the paradigm used of the system architecture. The congruity in the colors of the blocks here and the ones that have shown earlier in the description of the paradigm is evident. <br><br />
<br />
<center>[[File:Simulink_Arch.png|thumb|center|700px|Simulink implementation]]</center><br />
<br />
The Supervisor, coordinator and the skill-matcher were not in the final implementation. Bu their conceptual inception is presented here and can be further worked on and included in future implementations. <br><br />
<br />
==Skill-Matcher==<br />
For each defined skill, the skill-selector needs to determine which piece of hardware is able to perform this skill. Therefore the capabilities of each piece of hardware needs to be known and registered somewhere in a predefined standard format. These hardware capabilities can be compared to the hardware capability requirements (HCR; required in the skill framework) of each skill. This matching between hardware and skill can be done by the skill-selector during initialization of the system. To do this in a smart way, the description of the hardware configuration and the HCR need to be in a similar format. One possible way of doing this will be discussed below.<br><br />
<br />
===Hardware configuration files===<br />
On the approach level of the system architecture, it was decided to use moving robots with camera’s to perform the system tasks. Different robots might have different capabilities. This should not hinder the system from operating correctly. It should preferably even be possible to swap robots without losing system functionality. This is why the hardware is abstracted in the architecture. Each piece of hardware needs to know its own performance and share this information with the skill-selector. For the skill-selector to function properly, the performance of the hardware needs to be defined in a structured way. To do this, it is decided that every piece of hardware needs to have its own configuration file. In this file, a wide range of quantifiable performance indices are stored. Because one robot can have multiple cameras, the performance can be split into a robot-specific part and one or more camera parts.<br />
<center>[[File:HCR.png|thumb|center|500px|Hardware configuration]]</center><br />
===Skill HCR===<br />
Some skills might require some capability of the hardware (HCR) in order to perform as desired. To make it clear to the skill-matcher what these requirements are, each skill is prefixed with an HCR. This HCR should give bounds on the performance indicators in the configuration files. For instance; “to execute skill 1 a piece of hardware is required which can move in x and y-direction with at least 5 m/s”. The skill-matcher can, in turn, look at each configuration file in order to determine what hardware can drive in x and y-direction with at least this speed. As a simple first implementation, the HCR can be implemented as a structure. This structure will consist of a robot-capability structure and a camera capability structure:<br><br />
'''Struct''' ''HCR_struct'' {RobotCap,CameraCap}<br><br />
<br />
'''Struct''' ''RobotCap''{<br><br />
DOF_matrix<br><br />
Occupancy_bound_representation<br><br />
Signal_device<br><br />
Number_of_cams<br><br />
}<br><br />
'''Struct''' ''CameraCap''{<br><br />
DOF_matrix<br><br />
Resolution<br><br />
Frame_rate<br><br />
Detection_box_representation<br><br />
}<br><br />
Each entry in the HCR structure is already discussed. The DOF-matrix holds all information about the possible degrees of freedom and their bounds in one matrix. The rows of this matrix will represent the direction of the degree of freedom. The first column will say whether the robot has this DOF and the other columns will give upper and lower bounds. One example is given below:<br><br />
<center>[[File:skillHCR.png|thumb|center|500px|Skill-HCR]]</center><br />
This could reflect a robot which is able to move in the x- and y-direction and rotate around the ψ-angle. The movement in these directions is not limited in space, hence the infinite position bounds. The robot is able to drive with a maximum speed of 5 m/s and a maximum acceleration of 2 m/s^2 in both directions. It can rotate with a maximum speed of 8 rad/s and a maximum acceleration of 4 rad/s^2. For this example the bounds are symmetric, this need not necessarily be the case for each robot. A similar matrix can be made for every camera on the robot. <br><br />
===Hardware matching===<br />
Every skill that is implemented can be numbered, either predefined or during initialization of the system. Each hardware component and its cameras can be numbered as well. During initialization, the skill selector can check for every skill what the HCR are for that skill. It can then compare this to the configuration files of every hardware component. Based on this comparison, the hardware component can either perform the skill or not. A matrix can be constructed that stores which skill can be executed by which component. An example of such a matching matrix is given below:<br><br />
<center>[[File:HWMatching.png|thumb|center|500px|HW Matching matrix]]</center><br />
<br />
From this matrix it can be seen that there exist six skills in total and there are three agents available. Agent 1 has two camera’s which it can use to perform the entire set of skills. Agent 2 has only one camera and with this camera it can only perform skill 1 and 4. Agent 3 can perform all skills with camera 3 and can even perform some skills with multiple cameras. From this matrix it becomes clear that all skills can be performed with this set of agents. Some skills, like skill 6, can only be performed by two cameras while skill 1 is very well covered. This matrix can be used to determine if the set of agents is enough to cover all skills. It could also be used to determine which robot is most suited for which role, e.g. main referee and line referee. This matrix is constructed during initialization of the system. However, it could happen that an agent is added or subtracted from the system. Therefore, the skill selector should frequently check which agents are still in the system and, if necessary, update the matching matrix. <br><br />
===Role restirctions===<br />
If there are multiple agents in the system, it could be necessary to assign roles to each of them. This role assignment could be determined beforehand, for instance by specifying a preferred role in the configuration file of each agent. Another way to assign roles can be based on the matching matrix. Based on the skills each agent can perform the roles are divided. Some roles might impose restrictions on the agent. These restrictions can be interpreted in the same way as the HCR of the skill-framework. Where the HCR imposes some minima on the hardware requirements, the role restriction (RR) sets some maximal allowed values for the hardware. For example, the line referee is only allowed to move alongside one end of the field with a maximum linear velocity of 3 m/s. The DOF-matrix of the RR structure might look like:<br><br />
<center>[[File:roleRestriction.png|thumb|center|500px|Role restriction matrix]]</center><br />
Suppose the field is 12 meters long and 9 meters wide. The agent which is performing the role described by this RR is only able to move next to the field, with a margin of half a meter, over the entire length of the field. The maximal absolute velocity at which it is allowed to do this is 3 m/s.<br><br />
<br />
===Role matching===<br />
The RR also restricts the skills which an agent with a certain role is allowed to do. If it is decided which agent will fulfill a certain role, the matching matrix needs to be updated. This update can only remove entries in the matching matrix, since the restriction can never add functionality to the hardware. This process raises a problem: if the role assigning is done based on the matching matrix and the RR changes this matrix, the role assigning might change. This will probably call for an iterative approach to determine which agent is best fitted for a role in order to get a good coverage of all the skills.<br />
<br />
==Supervisor (and coordinator)==<br />
The supervisor is responsible for monitoring the system tasks and for coordinating the subsystems with respect to these tasks. This involves dynamically distributing subtasks amongst the subsystems in an efficient and effective manner. The operation of the supervisor can be divided into a default process flow and a set of alternative process flows. The latter are activated by a certain trigger occurring during the default process, and can be terminated by another trigger. <br><br />
The top half of Figure 1 shows the default process flow of the supervisor. For each subsystem, the process flow is divided into two sections: one (the colored rectangle) representing processes taking place on the subsystem hardware, and one (the black rectangle) representing the processes taking place on a separate processing unit. These processing units can be one and the same, or actually separated, in which case the world model data needs to be synchronized between these units.<br><br />
<center>[[File:supervisor_Flow.png|thumb|center|600px|Supervisor process flow]]</center><br />
Note that the central process in this default flow is “Update World Model”, indicating that the entire default process flow revolves around maintaining an accurate World Model (WM). Information, or lack of it, in this WM can then trigger an alternative process flow. The lower half of Figure 1 shows these alternative flows, five in total. Table 1 lists the names, descriptions and activating and terminating triggers for each alternative flow.<br />
<center>[[File:table_Supervisor.png|thumb|center|600px|Alternative flows, including descriptions and activation and termination triggers.]]</center><br />
===Supervisor Requirements===<br />
====Skills====<br />
<br />
=====Basic=====<br />
• Move drone<br><br />
• Move turtle<br><br />
• Take snapshot with top camera<br><br />
• Take snapshot with Ai-Ball<br><br />
• Take snapshot with Kinect<br><br />
• Take snapshot with omnivision<br><br />
• Whistle<br><br />
=====Advanced=====<br />
• Detect lines<br><br />
• Detect regions<br><br />
• Search ball<br><br />
• Locate ball<br><br />
• Determine whether ball is in/out<br><br />
• Search players<br><br />
• Locate players<br><br />
• Detect space between players<br><br />
• Locate drone<br><br />
• Plan paths for drone<br><br />
• Locate turtle<br><br />
• Plan paths for turtle<br><br />
<br />
====Control automata====<br />
<center>[[File:control_Automata_1.png|thumb|center|600px|]]</center><br />
<center>[[File:control_Automata_2.png|thumb|center|600px|]]</center><br />
<br />
====Observer automata====<br />
<center>[[File:Observer_Automata_1.png|thumb|center|600px|]]</center><br />
<center>[[File:Observer_Automata_2.png|thumb|center|600px|]]</center><br />
====Drone movement requirements====<br />
=====TAKEOFF=====<br />
Battery percentage above 20%<br><br />
Drone position is known in world model<br><br />
=====LAND=====<br />
Battery percentage not above 20%<br><br />
Drone is outside field<br><br />
=====(ENABLE REFERENCE TO) FLY TO HEIGHT OF 2 METER=====<br />
Drone is (approximately) at 1 meter<br><br />
=====MOVE TOWARDS CENTER OF FIELD=====<br />
Drone is close to the outer line of the field<br><br />
<br />
====Path planning Requirements====<br />
=====PLAN PATH FOR DRONE=====<br />
Drone is (approximately) at reference height<br />
====PLAN PATH FOR TURTLE====<br />
Turtle is at the side-line<br />
<br />
====Whistle Requirements====<br />
'''||'''Ball (in snapshot) is located in region with label “out”<br><br />
'''||'''Two objects (in snapshot) are touching (and visible as an blob)<br />
<br />
====Enable/Disable detectors====<br />
=====LINE DETECTOR'''*'''=====<br />
'''&'''Lines are expected to be visible in a snapshot<br><br />
'''*'''Enabling the algorithm for detecting lines will also enable the algorithm to label the regions separated by the detected lines.<br><br />
<br />
=====COLLISION DETECTOR=====<br />
'''&'''Two players are expected to be visible in a snapshot<br><br />
<br />
=====LOCATE BALL=====<br />
'''&''' Ball position is known in world model<br><br />
<br />
=====LOCATE PLAYERS=====<br />
'''&''' All player positions are known in world model<br></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=System_Architecture_MSD16&diff=39670System Architecture MSD162017-05-04T19:16:29Z<p>Asinha: /* LINE DETECTOR* */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
= Paradigm =<br />
There are many paradigms available for developing architecture for robotic systems. These paradigms are available in Chapter 8 in the [http://download.springer.com/static/pdf/864/bok%253A978-3-540-30301-5.pdf?originUrl=http%3A%2F%2Flink.springer.com%2Fbook%2F10.1007%2F978-3-540-30301-5&token2=exp=1490798126~acl=%2Fstatic%2Fpdf%2F864%2Fbok%25253A978-3-540-30301-5.pdf%3ForiginUrl%3Dhttp%253A%252F%252Flink.springer.com%252Fbook%252F10.1007%252F978-3-540-30301-5*~hmac=10768f14783408dd794f24107129219bf0bd18a79067a7f204b450620c0608b5 Springer Handbook of Robotics]. In this project the [http://cstwiki.wtb.tue.nl/images/20150429-EMC-TUe-CompositionPattern-nup.pdf paradigm] developed at KU Leuven in collaboration with TU Eindhoven and was used. This paradigm is followed at TechUnited. <br><br />
<center>[[File:pradigm.png|thumb|center|500px|Architecture paradigm]]</center><br />
This paradigm defines Tasks (blue block) using the objectives set with respect to the context of the project, i.e. what needs to be done. The Skills (yellow block) then define the implementation, i.e. how to these tasks will be completed. The hardware (grey block) or the robots, i.e. the ones who will complete the tasks by implementing the skills are the agents available at the system-architects’ disposal and the choice made to pick certain agents over the other is influenced by system requirements. The system requirements are influenced by a number of factors especially the objectives and the context of the project. Coming back to the agents, they gather information from the environment and this information is first processed (filtered, fused etc.) and then stored in the world model (green block) which allows it to be accessed afterwards. Finally the visualization or the user interface (orange block) is a tool to observe how the system sees the environment. <br><br />
In the task block, a sub-block is the Task Monitor (highlighted in red within the Task context block). This block was interpreted as a supervisor and a coordinator. It keeps an eye on the tasks and skills. It is possible to have some tasks which are a completed by a series of skills. In such a case it becomes important to schedule skills as these could either have to be executed sequentially or parallelly. It might become important for the system to track which skills have been completed and which need to be executed next. Task control feedback/ feedforward was not taken into consideration in much detail in the system architecture that was developed for this project. <br><br />
<center>[[File:architecture_1.png|thumb|center|600px|Architecture]]</center><br />
<br />
= Layered Approach =<br />
<br />
The system architecture derived from the concepts presented in the above-described paradigm can be seen below. Every block but the GUI context is highlighted in this architecture. The Coordinator block interacts with the Supervisor to determine which skills need to be executed and is assisted by the Skill Matcher in picking the right hardware agent best suited to execute the necessary skill. The concept behind the skill matcher is explained in a later section. <br><br />
<center>[[File:architecture_1.png|thumb|center|600px|Architecture]]</center><br />
While developing the architecture, a layered approach was used. Based on the objectives, and the requirements/constraints imposed by them, design choices were made. As shown in the Figure below starting from the top, with each subsequent layer the level of detail increases. <br><br />
<center>[[File:layered_Approach.png|thumb|center|450px|The three layers in the architecture]]</center><br />
<br />
= System Layer =<br />
Detecting predefined events and enforcing the corresponding rules were the two main system objectives that could be taken into account for this architecture. Two events, namely<br><br />
1. ball going out of pitch (B.O.O.P) and<br><br />
2. collision between players<br><br />
were considered to be in scope of this project. For detection and enforcement of these rules the Tasks and Skills blocks were filled as shown in the figure below.<br><br />
<center>[[File:Layer1.png|thumb|center|750px|System layer of the architecture]]</center><br />
= Approach Layer =<br />
Further, by taking into account some of the system requirements one can bring more tasks and skills into the scope. For example, if one of the requirement is ‘consistency’, it can be defined as, ‘the autonomous referee system should be able to capture the gameplay dynamically’. This requirement might be further refined and result in the decision, ‘the hardware must have multiple cameras in the field which allow effective capture the active gameplay ’. Further, this decision could be polished into having ‘multiple movable cameras in the field which allow effective capture the active gameplay ’. The current layer in the architecture represents an evolution towards implementation. This is reflected in the narrowing down of the elements in the Hardware Block from Sensors and/or devices to multiple cameras which could move. Though this statement is still vague, it is more refined than its predecessor.<br />
<center>[[File:Layered2.png|thumb|center|750px|The approach layer of the architecture]]</center><br />
<br />
= Implementation Layer =<br />
Referring back to the ''Three layers in the architecture'' diagram as the requirements were further refined, several design choices were made on the software and hardware. These choices were influenced not only by requirements but also by constraints. For example, using a drone and the TechUnited turtle were imposed on the project. These did not come into the picture in the previous layers. The current layer of the architecture is the Implementation Layer as depicted in the figure.<br><br />
<center>[[File:Layer3.png|thumb|center|700px|Implementation layer in the architecture]]</center><br />
At this level of the architecture, one can write the software. The major decisions that were made here were regarding the software such as specific: <br><br />
1. image processing algorithms (=skills), <br><br />
2. sensor fusion algorithms (in the world model) and<br><br />
3. communication protocols (communication between different hardware components).<br><br />
This layer is also the realistic representation of the actual implementation the project team ended up with i.e. only the detection based tasks (and the corresponding skills).<br> Below is the screen-shot of the Simulink implementation. What is highlighted here is the fact that the software developed in this project had a proper correspondence with the paradigm used of the system architecture. The congruity in the colors of the blocks here and the ones that have shown earlier in the description of the paradigm is evident. <br><br />
<br />
<center>[[File:Simulink_Arch.png|thumb|center|700px|Simulink implementation]]</center><br />
<br />
The Supervisor, coordinator and the skill-matcher were not in the final implementation. Bu their conceptual inception is presented here and can be further worked on and included in future implementations. <br><br />
<br />
==Skill-Matcher==<br />
For each defined skill, the skill-selector needs to determine which piece of hardware is able to perform this skill. Therefore the capabilities of each piece of hardware needs to be known and registered somewhere in a predefined standard format. These hardware capabilities can be compared to the hardware capability requirements (HCR; required in the skill framework) of each skill. This matching between hardware and skill can be done by the skill-selector during initialization of the system. To do this in a smart way, the description of the hardware configuration and the HCR need to be in a similar format. One possible way of doing this will be discussed below.<br><br />
<br />
===Hardware configuration files===<br />
On the approach level of the system architecture, it was decided to use moving robots with camera’s to perform the system tasks. Different robots might have different capabilities. This should not hinder the system from operating correctly. It should preferably even be possible to swap robots without losing system functionality. This is why the hardware is abstracted in the architecture. Each piece of hardware needs to know its own performance and share this information with the skill-selector. For the skill-selector to function properly, the performance of the hardware needs to be defined in a structured way. To do this, it is decided that every piece of hardware needs to have its own configuration file. In this file, a wide range of quantifiable performance indices are stored. Because one robot can have multiple cameras, the performance can be split into a robot-specific part and one or more camera parts.<br />
<center>[[File:HCR.png|thumb|center|500px|Hardware configuration]]</center><br />
===Skill HCR===<br />
Some skills might require some capability of the hardware (HCR) in order to perform as desired. To make it clear to the skill-matcher what these requirements are, each skill is prefixed with an HCR. This HCR should give bounds on the performance indicators in the configuration files. For instance; “to execute skill 1 a piece of hardware is required which can move in x and y-direction with at least 5 m/s”. The skill-matcher can, in turn, look at each configuration file in order to determine what hardware can drive in x and y-direction with at least this speed. As a simple first implementation, the HCR can be implemented as a structure. This structure will consist of a robot-capability structure and a camera capability structure:<br><br />
'''Struct''' ''HCR_struct'' {RobotCap,CameraCap}<br><br />
<br />
'''Struct''' ''RobotCap''{<br><br />
DOF_matrix<br><br />
Occupancy_bound_representation<br><br />
Signal_device<br><br />
Number_of_cams<br><br />
}<br><br />
'''Struct''' ''CameraCap''{<br><br />
DOF_matrix<br><br />
Resolution<br><br />
Frame_rate<br><br />
Detection_box_representation<br><br />
}<br><br />
Each entry in the HCR structure is already discussed. The DOF-matrix holds all information about the possible degrees of freedom and their bounds in one matrix. The rows of this matrix will represent the direction of the degree of freedom. The first column will say whether the robot has this DOF and the other columns will give upper and lower bounds. One example is given below:<br><br />
<center>[[File:skillHCR.png|thumb|center|500px|Skill-HCR]]</center><br />
This could reflect a robot which is able to move in the x- and y-direction and rotate around the ψ-angle. The movement in these directions is not limited in space, hence the infinite position bounds. The robot is able to drive with a maximum speed of 5 m/s and a maximum acceleration of 2 m/s^2 in both directions. It can rotate with a maximum speed of 8 rad/s and a maximum acceleration of 4 rad/s^2. For this example the bounds are symmetric, this need not necessarily be the case for each robot. A similar matrix can be made for every camera on the robot. <br><br />
===Hardware matching===<br />
Every skill that is implemented can be numbered, either predefined or during initialization of the system. Each hardware component and its cameras can be numbered as well. During initialization, the skill selector can check for every skill what the HCR are for that skill. It can then compare this to the configuration files of every hardware component. Based on this comparison, the hardware component can either perform the skill or not. A matrix can be constructed that stores which skill can be executed by which component. An example of such a matching matrix is given below:<br><br />
<center>[[File:HWMatching.png|thumb|center|500px|HW Matching matrix]]</center><br />
<br />
From this matrix it can be seen that there exist six skills in total and there are three agents available. Agent 1 has two camera’s which it can use to perform the entire set of skills. Agent 2 has only one camera and with this camera it can only perform skill 1 and 4. Agent 3 can perform all skills with camera 3 and can even perform some skills with multiple cameras. From this matrix it becomes clear that all skills can be performed with this set of agents. Some skills, like skill 6, can only be performed by two cameras while skill 1 is very well covered. This matrix can be used to determine if the set of agents is enough to cover all skills. It could also be used to determine which robot is most suited for which role, e.g. main referee and line referee. This matrix is constructed during initialization of the system. However, it could happen that an agent is added or subtracted from the system. Therefore, the skill selector should frequently check which agents are still in the system and, if necessary, update the matching matrix. <br><br />
===Role restirctions===<br />
If there are multiple agents in the system, it could be necessary to assign roles to each of them. This role assignment could be determined beforehand, for instance by specifying a preferred role in the configuration file of each agent. Another way to assign roles can be based on the matching matrix. Based on the skills each agent can perform the roles are divided. Some roles might impose restrictions on the agent. These restrictions can be interpreted in the same way as the HCR of the skill-framework. Where the HCR imposes some minima on the hardware requirements, the role restriction (RR) sets some maximal allowed values for the hardware. For example, the line referee is only allowed to move alongside one end of the field with a maximum linear velocity of 3 m/s. The DOF-matrix of the RR structure might look like:<br><br />
<center>[[File:roleRestriction.png|thumb|center|500px|Role restriction matrix]]</center><br />
Suppose the field is 12 meters long and 9 meters wide. The agent which is performing the role described by this RR is only able to move next to the field, with a margin of half a meter, over the entire length of the field. The maximal absolute velocity at which it is allowed to do this is 3 m/s.<br><br />
<br />
===Role matching===<br />
The RR also restricts the skills which an agent with a certain role is allowed to do. If it is decided which agent will fulfill a certain role, the matching matrix needs to be updated. This update can only remove entries in the matching matrix, since the restriction can never add functionality to the hardware. This process raises a problem: if the role assigning is done based on the matching matrix and the RR changes this matrix, the role assigning might change. This will probably call for an iterative approach to determine which agent is best fitted for a role in order to get a good coverage of all the skills.<br />
<br />
==Supervisor (and coordinator)==<br />
The supervisor is responsible for monitoring the system tasks and for coordinating the subsystems with respect to these tasks. This involves dynamically distributing subtasks amongst the subsystems in an efficient and effective manner. The operation of the supervisor can be divided into a default process flow and a set of alternative process flows. The latter are activated by a certain trigger occurring during the default process, and can be terminated by another trigger. <br><br />
The top half of Figure 1 shows the default process flow of the supervisor. For each subsystem, the process flow is divided into two sections: one (the colored rectangle) representing processes taking place on the subsystem hardware, and one (the black rectangle) representing the processes taking place on a separate processing unit. These processing units can be one and the same, or actually separated, in which case the world model data needs to be synchronized between these units.<br><br />
<center>[[File:supervisor_Flow.png|thumb|center|600px|Supervisor process flow]]</center><br />
Note that the central process in this default flow is “Update World Model”, indicating that the entire default process flow revolves around maintaining an accurate World Model (WM). Information, or lack of it, in this WM can then trigger an alternative process flow. The lower half of Figure 1 shows these alternative flows, five in total. Table 1 lists the names, descriptions and activating and terminating triggers for each alternative flow.<br />
<center>[[File:table_Supervisor.png|thumb|center|600px|Alternative flows, including descriptions and activation and termination triggers.]]</center><br />
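To make the trigger mechanism concrete, a minimal sketch of the supervisor loop is given below; the flow name and trigger predicates are placeholders and do not correspond one-to-one to the flows of Table 1:<br><br />
<pre>
% Minimal supervisor sketch: the default flow keeps the world model up to date;
% missing information activates an alternative flow, which ends again once its
% termination trigger fires.
wm   = struct('ballKnown', true);
flow = 'default';
for tick = 1:100
    wm.ballKnown = rand() > 0.2;            % placeholder for "Update World Model"
    switch flow
        case 'default'
            if ~wm.ballKnown                % activation trigger: ball position unknown
                flow = 'search_ball';
            end
        case 'search_ball'
            if wm.ballKnown                 % termination trigger: ball found again
                flow = 'default';
            end
    end
end
</pre>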
===Supervisor Requirements===<br />
====Skills====<br />
<br />
=====Basic=====<br />
• Move drone<br><br />
• Move turtle<br><br />
• Take snapshot with top camera<br><br />
• Take snapshot with Ai-Ball<br><br />
• Take snapshot with Kinect<br><br />
• Take snapshot with omnivision<br><br />
• Whistle<br><br />
=====Advanced=====<br />
• Detect lines<br><br />
• Detect regions<br><br />
• Search ball<br><br />
• Locate ball<br><br />
• Determine whether ball is in/out<br><br />
• Search players<br><br />
• Locate players<br><br />
• Detect space between players<br><br />
• Locate drone<br><br />
• Plan paths for drone<br><br />
• Locate turtle<br><br />
• Plan paths for turtle<br><br />
<br />
====Control automata====<br />
<center>[[File:control_Automata_1.png|thumb|center|600px|]]</center><br />
<center>[[File:control_Automata_2.png|thumb|center|600px|]]</center><br />
<br />
====Observer automata====<br />
<center>[[File:Observer_Automata_1.png|thumb|center|600px|]]</center><br />
<center>[[File:Observer_Automata_2.png|thumb|center|600px|]]</center><br />
====Drone movement requirements====<br />
=====TAKEOFF=====<br />
Battery percentage above 20%<br><br />
Drone position is known in world model<br><br />
=====LAND=====<br />
Battery percentage not above 20%<br><br />
Drone is outside field<br><br />
=====(ENABLE REFERENCE TO) FLY TO HEIGHT OF 2 METER=====<br />
Drone is (approximately) at 1 meter<br><br />
=====MOVE TOWARDS CENTER OF FIELD=====<br />
Drone is close to the outer line of the field<br><br />
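These conditions can be read as guards on the drone commands; a small illustrative check (the world-model field names and the tolerance on the 1 m height are assumptions) might look like:<br><br />
<pre>
% Hypothetical world-model snapshot used to evaluate the drone movement guards.
wm = struct('batteryPct', 55, 'dronePosKnown', true, 'droneOutsideField', false, ...
            'droneHeight', 1.02, 'droneNearOuterLine', true);

canTakeOff   = wm.batteryPct > 20 && wm.dronePosKnown;       % TAKEOFF
mustLand     = wm.batteryPct <= 20 && wm.droneOutsideField;  % LAND
canFlyToTwoM = abs(wm.droneHeight - 1.0) < 0.1;              % enable 2 m height reference
moveToCenter = wm.droneNearOuterLine;                        % MOVE TOWARDS CENTER OF FIELD
</pre>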
<br />
====Path planning Requirements====<br />
=====PLAN PATH FOR DRONE=====<br />
Drone is (approximately) at reference height<br />
=====PLAN PATH FOR TURTLE=====<br />
Turtle is at the side-line<br />
<br />
====Whistle Requirements====<br />
Ball (in snapshot) is located in region with label “out”<br><br />
Two objects (in snapshot) are touching (and visible as one blob)<br />
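A sketch of how these two whistle conditions could be evaluated from the latest snapshot (the field names are assumptions):<br><br />
<pre>
% Hypothetical result of analysing the latest snapshot.
snapshot = struct('ballRegionLabel', 'out', 'touchingObjectPairs', 0);

ballOut     = strcmp(snapshot.ballRegionLabel, 'out');  % B.O.O.P.: ball in a region labelled "out"
collision   = snapshot.touchingObjectPairs > 0;         % two objects merged into one blob
blowWhistle = ballOut || collision;
</pre>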
<br />
====Enable/Disable detectors====<br />
=====LINE DETECTOR'''*'''=====<br />
'''&'''Lines are expected to be visible in a snapshot<br><br />
'''*'''Enabling the algorithm for detecting lines will also enable the algorithm to label the regions separated by the detected lines.<br><br />
<br />
=====COLLISION DETECTOR=====<br />
'''&'''Two players are expected to be visible in a snapshot<br><br />
<br />
=====LOCATE BALL=====<br />
'''&''' Ball position is known in world model<br><br />
<br />
=====LOCATE PLAYERS=====<br />
'''&''' All player positions are known in world model<br></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=System_Architecture_MSD16&diff=39669System Architecture MSD162017-05-04T19:16:19Z<p>Asinha: /* LINE DETECTOR* */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
= Paradigm =<br />
There are many paradigms available for developing architectures for robotic systems; an overview is given in Chapter 8 of the [http://download.springer.com/static/pdf/864/bok%253A978-3-540-30301-5.pdf?originUrl=http%3A%2F%2Flink.springer.com%2Fbook%2F10.1007%2F978-3-540-30301-5&token2=exp=1490798126~acl=%2Fstatic%2Fpdf%2F864%2Fbok%25253A978-3-540-30301-5.pdf%3ForiginUrl%3Dhttp%253A%252F%252Flink.springer.com%252Fbook%252F10.1007%252F978-3-540-30301-5*~hmac=10768f14783408dd794f24107129219bf0bd18a79067a7f204b450620c0608b5 Springer Handbook of Robotics]. In this project, the [http://cstwiki.wtb.tue.nl/images/20150429-EMC-TUe-CompositionPattern-nup.pdf paradigm] developed at KU Leuven in collaboration with TU Eindhoven was used. This paradigm is also followed at TechUnited. <br><br />
<center>[[File:pradigm.png|thumb|center|500px|Architecture paradigm]]</center><br />
This paradigm defines Tasks (blue block) from the objectives set with respect to the context of the project, i.e. what needs to be done. The Skills (yellow block) then define the implementation, i.e. how these tasks will be completed. The hardware (grey block), or the robots that will complete the tasks by implementing the skills, are the agents available at the system architects’ disposal; the choice to pick certain agents over others is influenced by the system requirements. The system requirements are in turn influenced by a number of factors, especially the objectives and the context of the project. The agents gather information from the environment; this information is first processed (filtered, fused, etc.) and then stored in the world model (green block), which allows it to be accessed afterwards. Finally, the visualization or user interface (orange block) is a tool to observe how the system sees the environment. <br><br />
In the Task block, a sub-block is the Task Monitor (highlighted in red within the Task context block). This block was interpreted as a supervisor and a coordinator. It keeps an eye on the tasks and skills. Some tasks are completed by a series of skills; in such a case it becomes important to schedule the skills, as they may have to be executed either sequentially or in parallel. It can also be important for the system to track which skills have been completed and which need to be executed next. Task control feedback/feedforward was not taken into consideration in much detail in the system architecture that was developed for this project. <br><br />
<center>[[File:architecture_1.png|thumb|center|600px|Architecture]]</center><br />
<br />
= Layered Approach =<br />
<br />
The system architecture derived from the concepts presented in the above-described paradigm can be seen below. Every block but the GUI context is highlighted in this architecture. The Coordinator block interacts with the Supervisor to determine which skills need to be executed and is assisted by the Skill Matcher in picking the right hardware agent best suited to execute the necessary skill. The concept behind the skill matcher is explained in a later section. <br><br />
<center>[[File:architecture_1.png|thumb|center|600px|Architecture]]</center><br />
While developing the architecture, a layered approach was used. Based on the objectives, and the requirements/constraints imposed by them, design choices were made. As shown in the Figure below starting from the top, with each subsequent layer the level of detail increases. <br><br />
<center>[[File:layered_Approach.png|thumb|center|450px|The three layers in the architecture]]</center><br />
<br />
= System Layer =<br />
Detecting predefined events and enforcing the corresponding rules were the two main system objectives that could be taken into account for this architecture. Two events, namely<br><br />
1. ball going out of pitch (B.O.O.P) and<br><br />
2. collision between players<br><br />
were considered to be in scope of this project. For detection and enforcement of these rules the Tasks and Skills blocks were filled as shown in the figure below.<br><br />
<center>[[File:Layer1.png|thumb|center|750px|System layer of the architecture]]</center><br />
= Approach Layer =<br />
Further, by taking into account some of the system requirements, one can bring more tasks and skills into scope. For example, if one of the requirements is ‘consistency’, it can be defined as ‘the autonomous referee system should be able to capture the gameplay dynamically’. This requirement might be further refined into the decision ‘the hardware must have multiple cameras in the field which allow effective capture of the active gameplay’. This decision could then be polished into having ‘multiple movable cameras in the field which allow effective capture of the active gameplay’. The current layer in the architecture represents an evolution towards implementation. This is reflected in the narrowing down of the elements in the Hardware block from sensors and/or devices to multiple cameras which can move. Though this statement is still vague, it is more refined than its predecessor.<br />
<center>[[File:Layered2.png|thumb|center|750px|The approach layer of the architecture]]</center><br />
<br />
= Implementation Layer =<br />
Referring back to the ''Three layers in the architecture'' diagram, as the requirements were further refined, several design choices were made on the software and hardware. These choices were influenced not only by requirements but also by constraints. For example, the use of a drone and the TechUnited TURTLE was imposed on the project; these did not come into the picture in the previous layers. The current layer of the architecture is the Implementation Layer, as depicted in the figure.<br><br />
<center>[[File:Layer3.png|thumb|center|700px|Implementation layer in the architecture]]</center><br />
At this level of the architecture, one can write the software. The major decisions that were made here were regarding the software such as specific: <br><br />
1. image processing algorithms (=skills), <br><br />
2. sensor fusion algorithms (in the world model) and<br><br />
3. communication protocols (communication between different hardware components).<br><br />
This layer is also a realistic representation of the actual implementation the project team ended up with, i.e. only the detection-based tasks (and the corresponding skills).<br> Below is a screenshot of the Simulink implementation. What is highlighted here is the fact that the software developed in this project corresponds closely with the paradigm used for the system architecture. The congruity between the colors of the blocks here and the ones shown earlier in the description of the paradigm is evident. <br><br />
<br />
<center>[[File:Simulink_Arch.png|thumb|center|700px|Simulink implementation]]</center><br />
<br />
The Supervisor, the Coordinator and the Skill-Matcher were not part of the final implementation, but their conceptual design is presented here and can be further worked on and included in future implementations. <br><br />
<br />
==Skill-Matcher==<br />
For each defined skill, the skill-selector needs to determine which piece of hardware is able to perform this skill. Therefore, the capabilities of each piece of hardware need to be known and registered in a predefined standard format. These hardware capabilities can be compared to the hardware capability requirements (HCR, required in the skill framework) of each skill. This matching between hardware and skill can be done by the skill-selector during initialization of the system. To do this in a smart way, the description of the hardware configuration and the HCR need to be in a similar format. One possible way of doing this is discussed below.<br><br />
<br />
===Hardware configuration files===<br />
On the approach level of the system architecture, it was decided to use moving robots with cameras to perform the system tasks. Different robots might have different capabilities, which should not hinder the system from operating correctly; it should preferably even be possible to swap robots without losing system functionality. This is why the hardware is abstracted in the architecture. Each piece of hardware needs to know its own performance and share this information with the skill-selector. For the skill-selector to function properly, the performance of the hardware needs to be defined in a structured way. To this end, every piece of hardware has its own configuration file, in which a wide range of quantifiable performance indices is stored. Because one robot can have multiple cameras, the performance is split into a robot-specific part and one or more camera parts.<br />
<center>[[File:HCR.png|thumb|center|500px|Hardware configuration]]</center><br />
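An example of what such a configuration file could contain is sketched below as a Matlab struct; the field names and values are illustrative assumptions, not the format actually used by the TURTLE or the drone:<br><br />
<pre>
% Hypothetical configuration file of one agent, split into a robot-specific part
% and one entry per on-board camera.
config.robot.name         = 'drone_1';
config.robot.maxLinVel    = 2.0;           % m/s
config.robot.maxLinAcc    = 1.0;           % m/s^2
config.robot.signalDevice = false;         % no whistle or signalling device on board
config.robot.numberOfCams = 1;

config.camera(1).resolution  = [640 480];  % pixels
config.camera(1).frameRate   = 30;         % frames per second
config.camera(1).fieldOfView = 60;         % degrees, horizontal
</pre>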
===Skill HCR===<br />
Some skills might require certain capabilities of the hardware (HCR) in order to perform as desired. To make it clear to the skill-matcher what these requirements are, each skill is prefixed with an HCR. This HCR should give bounds on the performance indicators in the configuration files, for instance: “to execute skill 1, a piece of hardware is required which can move in the x- and y-direction with at least 5 m/s”. The skill-matcher can, in turn, look at each configuration file to determine which hardware can drive in the x- and y-direction with at least this speed. As a simple first implementation, the HCR can be implemented as a structure, consisting of a robot-capability structure and a camera-capability structure:<br><br />
'''Struct''' ''HCR_struct'' {RobotCap,CameraCap}<br><br />
<br />
'''Struct''' ''RobotCap''{<br><br />
DOF_matrix<br><br />
Occupancy_bound_representation<br><br />
Signal_device<br><br />
Number_of_cams<br><br />
}<br><br />
'''Struct''' ''CameraCap''{<br><br />
DOF_matrix<br><br />
Resolution<br><br />
Frame_rate<br><br />
Detection_box_representation<br><br />
}<br><br />
Each entry in the HCR structure is already discussed. The DOF-matrix holds all information about the possible degrees of freedom and their bounds in one matrix. The rows of this matrix will represent the direction of the degree of freedom. The first column will say whether the robot has this DOF and the other columns will give upper and lower bounds. One example is given below:<br><br />
<center>[[File:skillHCR.png|thumb|center|500px|Skill-HCR]]</center><br />
This could reflect a robot which is able to move in the x- and y-direction and rotate around the ψ-angle. The movement in these directions is not limited in space, hence the infinite position bounds. The robot is able to drive with a maximum speed of 5 m/s and a maximum acceleration of 2 m/s^2 in both directions. It can rotate with a maximum speed of 8 rad/s and a maximum acceleration of 4 rad/s^2. For this example the bounds are symmetric; this need not be the case for every robot. A similar matrix can be made for every camera on the robot. <br><br />
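A minimal Matlab sketch of the HCR structure with this example DOF-matrix, and of how the skill-matcher could compare it against a hardware configuration, is given below; the column layout [hasDOF, posMin, posMax, velMax, accMax] and the comparison rule are assumptions:<br><br />
<pre>
% HCR for one skill: robot-capability part with the example DOF-matrix.
% Rows: x, y, psi. Columns: [hasDOF, posMin, posMax, velMax, accMax].
HCR.RobotCap.DOF_matrix = [ ...
    1, -Inf, Inf, 5, 2; ...   % x: unlimited position, 5 m/s, 2 m/s^2
    1, -Inf, Inf, 5, 2; ...   % y: unlimited position, 5 m/s, 2 m/s^2
    1, -pi,  pi,  8, 4];      % psi: 8 rad/s, 4 rad/s^2
HCR.RobotCap.Signal_device  = false;
HCR.RobotCap.Number_of_cams = 1;

% Hypothetical hardware DOF-matrix read from a configuration file.
hwDOF = [1, -Inf, Inf, 6, 3; 1, -Inf, Inf, 6, 3; 1, -pi, pi, 10, 5];

% The hardware matches if it offers every required DOF and meets the velocity
% and acceleration minima demanded by the HCR.
req     = HCR.RobotCap.DOF_matrix;
matches = all(hwDOF(:,1) >= req(:,1)) && all(all(hwDOF(:,4:5) >= req(:,4:5)));
</pre>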
===Hardware matching===<br />
Every skill that is implemented can be numbered, either predefined or during initialization of the system. Each hardware component and its cameras can be numbered as well. During initialization, the skill selector can check for every skill what the HCR are for that skill. It can then compare this to the configuration files of every hardware component. Based on this comparison, the hardware component can either perform the skill or not. A matrix can be constructed that stores which skill can be executed by which component. An example of such a matching matrix is given below:<br><br />
<center>[[File:HWMatching.png|thumb|center|500px|HW Matching matrix]]</center><br />
<br />
From this matrix, it can be seen that there are six skills in total and three agents available. Agent 1 has two cameras which it can use to perform the entire set of skills. Agent 2 has only one camera, with which it can only perform skills 1 and 4. Agent 3 can perform all skills with camera 3 and can even perform some skills with multiple cameras. It is thus clear that all skills can be performed with this set of agents. Some skills, like skill 6, can only be performed by two cameras, while skill 1 is very well covered. This matrix can be used to determine whether the set of agents is sufficient to cover all skills. It could also be used to determine which robot is most suited for which role, e.g. main referee or line referee. The matrix is constructed during initialization of the system. However, an agent could be added to or removed from the system afterwards. Therefore, the skill selector should periodically check which agents are still in the system and, if necessary, update the matching matrix. <br><br />
===Role restrictions===<br />
If there are multiple agents in the system, it could be necessary to assign a role to each of them. This role assignment could be determined beforehand, for instance by specifying a preferred role in the configuration file of each agent. Another way to assign roles is based on the matching matrix: the roles are then divided according to the skills each agent can perform. Some roles might impose restrictions on the agent. These restrictions can be interpreted in the same way as the HCR of the skill framework: where the HCR imposes minima on the hardware capabilities, the role restriction (RR) sets maximal allowed values for the hardware. For example, the line referee is only allowed to move along one side of the field with a maximum linear velocity of 3 m/s. The DOF-matrix of the RR structure might look like:<br><br />
<center>[[File:roleRestriction.png|thumb|center|500px|Role restriction matrix]]</center><br />
Suppose the field is 12 meters long and 9 meters wide. The agent which is performing the role described by this RR is only able to move next to the field, with a margin of half a meter, over the entire length of the field. The maximal absolute velocity at which it is allowed to do this is 3 m/s.<br><br />
<br />
===Role matching===<br />
The RR also restricts the skills which an agent with a certain role is allowed to perform. Once it is decided which agent will fulfill a certain role, the matching matrix needs to be updated. This update can only remove entries from the matching matrix, since a restriction can never add functionality to the hardware. This raises a problem: if the role assignment is based on the matching matrix and the RR in turn changes this matrix, the role assignment might change as well. This will probably call for an iterative approach to determine which agent is best suited for each role while keeping a good coverage of all the skills.<br />
<br />
==Supervisor (and coordinator)==<br />
The supervisor is responsible for monitoring the system tasks and for coordinating the subsystems with respect to these tasks. This involves dynamically distributing subtasks amongst the subsystems in an efficient and effective manner. The operation of the supervisor can be divided into a default process flow and a set of alternative process flows. The latter are activated by a certain trigger occurring during the default process, and can be terminated by another trigger. <br><br />
The top half of Figure 1 shows the default process flow of the supervisor. For each subsystem, the process flow is divided into two sections: one (the colored rectangle) representing processes taking place on the subsystem hardware, and one (the black rectangle) representing the processes taking place on a separate processing unit. These processing units can be one and the same, or actually separated, in which case the world model data needs to be synchronized between these units.<br><br />
<center>[[File:supervisor_Flow.png|thumb|center|600px|Supervisor process flow]]</center><br />
Note that the central process in this default flow is “Update World Model”, indicating that the entire default process flow revolves around maintaining an accurate World Model (WM). Information, or lack of it, in this WM can then trigger an alternative process flow. The lower half of Figure 1 shows these alternative flows, five in total. Table 1 lists the names, descriptions and activating and terminating triggers for each alternative flow.<br />
<center>[[File:table_Supervisor.png|thumb|center|600px|Alternative flows, including descriptions and activation and termination triggers.]]</center><br />
===Supervisor Requirements===<br />
====Skills====<br />
<br />
=====Basic=====<br />
• Move drone<br><br />
• Move turtle<br><br />
• Take snapshot with top camera<br><br />
• Take snapshot with Ai-Ball<br><br />
• Take snapshot with Kinect<br><br />
• Take snapshot with omnivision<br><br />
• Whistle<br><br />
=====Advanced=====<br />
• Detect lines<br><br />
• Detect regions<br><br />
• Search ball<br><br />
• Locate ball<br><br />
• Determine whether ball is in/out<br><br />
• Search players<br><br />
• Locate players<br><br />
• Detect space between players<br><br />
• Locate drone<br><br />
• Plan paths for drone<br><br />
• Locate turtle<br><br />
• Plan paths for turtle<br><br />
<br />
====Control automata====<br />
<center>[[File:control_Automata_1.png|thumb|center|600px|]]</center><br />
<center>[[File:control_Automata_2.png|thumb|center|600px|]]</center><br />
<br />
====Observer automata====<br />
<center>[[File:Observer_Automata_1.png|thumb|center|600px|]]</center><br />
<center>[[File:Observer_Automata_2.png|thumb|center|600px|]]</center><br />
====Drone movement requirements====<br />
=====TAKEOFF=====<br />
Battery percentage above 20%<br><br />
Drone position is known in world model<br><br />
=====LAND=====<br />
Battery percentage not above 20%<br><br />
Drone is outside field<br><br />
=====(ENABLE REFERENCE TO) FLY TO HEIGHT OF 2 METER=====<br />
Drone is (approximately) at 1 meter<br><br />
=====MOVE TOWARDS CENTER OF FIELD=====<br />
Drone is close to the outer line of the field<br><br />
<br />
====Path planning Requirements====<br />
=====PLAN PATH FOR DRONE=====<br />
Drone is (approximately) at reference height<br />
=====PLAN PATH FOR TURTLE=====<br />
Turtle is at the side-line<br />
<br />
====Whistle Requirements====<br />
Ball (in snapshot) is located in region with label “out”<br><br />
Two objects (in snapshot) are touching (and visible as one blob)<br />
<br />
====Enable/Disable detectors====<br />
=====LINE DETECTOR*=====<br />
'''&'''Lines are expected to be visible in a snapshot<br><br />
*Enabling the algorithm for detecting lines will also enable the algorithm to label the regions separated by the detected lines.<br><br />
<br />
=====COLLISION DETECTOR=====<br />
'''&'''Two players are expected to be visible in a snapshot<br><br />
<br />
=====LOCATE BALL=====<br />
'''&''' Ball position is known in world model<br><br />
<br />
=====LOCATE PLAYERS=====<br />
'''&''' All player positions are known in world model<br></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=System_Architecture_MSD16&diff=39667System Architecture MSD162017-05-04T19:15:56Z<p>Asinha: /* COLLISION DETECTOR */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
= Paradigm =<br />
There are many paradigms available for developing architectures for robotic systems; an overview is given in Chapter 8 of the [http://download.springer.com/static/pdf/864/bok%253A978-3-540-30301-5.pdf?originUrl=http%3A%2F%2Flink.springer.com%2Fbook%2F10.1007%2F978-3-540-30301-5&token2=exp=1490798126~acl=%2Fstatic%2Fpdf%2F864%2Fbok%25253A978-3-540-30301-5.pdf%3ForiginUrl%3Dhttp%253A%252F%252Flink.springer.com%252Fbook%252F10.1007%252F978-3-540-30301-5*~hmac=10768f14783408dd794f24107129219bf0bd18a79067a7f204b450620c0608b5 Springer Handbook of Robotics]. In this project, the [http://cstwiki.wtb.tue.nl/images/20150429-EMC-TUe-CompositionPattern-nup.pdf paradigm] developed at KU Leuven in collaboration with TU Eindhoven was used. This paradigm is also followed at TechUnited. <br><br />
<center>[[File:pradigm.png|thumb|center|500px|Architecture paradigm]]</center><br />
This paradigm defines Tasks (blue block) from the objectives set with respect to the context of the project, i.e. what needs to be done. The Skills (yellow block) then define the implementation, i.e. how these tasks will be completed. The hardware (grey block), or the robots that will complete the tasks by implementing the skills, are the agents available at the system architects’ disposal; the choice to pick certain agents over others is influenced by the system requirements. The system requirements are in turn influenced by a number of factors, especially the objectives and the context of the project. The agents gather information from the environment; this information is first processed (filtered, fused, etc.) and then stored in the world model (green block), which allows it to be accessed afterwards. Finally, the visualization or user interface (orange block) is a tool to observe how the system sees the environment. <br><br />
In the Task block, a sub-block is the Task Monitor (highlighted in red within the Task context block). This block was interpreted as a supervisor and a coordinator. It keeps an eye on the tasks and skills. Some tasks are completed by a series of skills; in such a case it becomes important to schedule the skills, as they may have to be executed either sequentially or in parallel. It can also be important for the system to track which skills have been completed and which need to be executed next. Task control feedback/feedforward was not taken into consideration in much detail in the system architecture that was developed for this project. <br><br />
<center>[[File:architecture_1.png|thumb|center|600px|Architecture]]</center><br />
<br />
= Layered Approach =<br />
<br />
The system architecture derived from the concepts presented in the above-described paradigm can be seen below. Every block but the GUI context is highlighted in this architecture. The Coordinator block interacts with the Supervisor to determine which skills need to be executed and is assisted by the Skill Matcher in picking the right hardware agent best suited to execute the necessary skill. The concept behind the skill matcher is explained in a later section. <br><br />
<center>[[File:architecture_1.png|thumb|center|600px|Architecture]]</center><br />
While developing the architecture, a layered approach was used. Based on the objectives, and the requirements/constraints imposed by them, design choices were made. As shown in the Figure below starting from the top, with each subsequent layer the level of detail increases. <br><br />
<center>[[File:layered_Approach.png|thumb|center|450px|The three layers in the architecture]]</center><br />
<br />
= System Layer =<br />
Detecting predefined events and enforcing the corresponding rules were the two main system objectives that could be taken into account for this architecture. Two events, namely<br><br />
1. ball going out of pitch (B.O.O.P) and<br><br />
2. collision between players<br><br />
were considered to be in scope of this project. For detection and enforcement of these rules the Tasks and Skills blocks were filled as shown in the figure below.<br><br />
<center>[[File:Layer1.png|thumb|center|750px|System layer of the architecture]]</center><br />
= Approach Layer =<br />
Further, by taking into account some of the system requirements, one can bring more tasks and skills into scope. For example, if one of the requirements is ‘consistency’, it can be defined as ‘the autonomous referee system should be able to capture the gameplay dynamically’. This requirement might be further refined into the decision ‘the hardware must have multiple cameras in the field which allow effective capture of the active gameplay’. This decision could then be polished into having ‘multiple movable cameras in the field which allow effective capture of the active gameplay’. The current layer in the architecture represents an evolution towards implementation. This is reflected in the narrowing down of the elements in the Hardware block from sensors and/or devices to multiple cameras which can move. Though this statement is still vague, it is more refined than its predecessor.<br />
<center>[[File:Layered2.png|thumb|center|750px|The approach layer of the architecture]]</center><br />
<br />
= Implementation Layer =<br />
Referring back to the ''Three layers in the architecture'' diagram, as the requirements were further refined, several design choices were made on the software and hardware. These choices were influenced not only by requirements but also by constraints. For example, the use of a drone and the TechUnited TURTLE was imposed on the project; these did not come into the picture in the previous layers. The current layer of the architecture is the Implementation Layer, as depicted in the figure.<br><br />
<center>[[File:Layer3.png|thumb|center|700px|Implementation layer in the architecture]]</center><br />
At this level of the architecture, one can write the software. The major decisions that were made here were regarding the software such as specific: <br><br />
1. image processing algorithms (=skills), <br><br />
2. sensor fusion algorithms (in the world model) and<br><br />
3. communication protocols (communication between different hardware components).<br><br />
This layer is also a realistic representation of the actual implementation the project team ended up with, i.e. only the detection-based tasks (and the corresponding skills).<br> Below is a screenshot of the Simulink implementation. What is highlighted here is the fact that the software developed in this project corresponds closely with the paradigm used for the system architecture. The congruity between the colors of the blocks here and the ones shown earlier in the description of the paradigm is evident. <br><br />
<br />
<center>[[File:Simulink_Arch.png|thumb|center|700px|Simulink implementation]]</center><br />
<br />
The Supervisor, the Coordinator and the Skill-Matcher were not part of the final implementation, but their conceptual design is presented here and can be further worked on and included in future implementations. <br><br />
<br />
==Skill-Matcher==<br />
For each defined skill, the skill-selector needs to determine which piece of hardware is able to perform this skill. Therefore, the capabilities of each piece of hardware need to be known and registered in a predefined standard format. These hardware capabilities can be compared to the hardware capability requirements (HCR, required in the skill framework) of each skill. This matching between hardware and skill can be done by the skill-selector during initialization of the system. To do this in a smart way, the description of the hardware configuration and the HCR need to be in a similar format. One possible way of doing this is discussed below.<br><br />
<br />
===Hardware configuration files===<br />
On the approach level of the system architecture, it was decided to use moving robots with cameras to perform the system tasks. Different robots might have different capabilities, which should not hinder the system from operating correctly; it should preferably even be possible to swap robots without losing system functionality. This is why the hardware is abstracted in the architecture. Each piece of hardware needs to know its own performance and share this information with the skill-selector. For the skill-selector to function properly, the performance of the hardware needs to be defined in a structured way. To this end, every piece of hardware has its own configuration file, in which a wide range of quantifiable performance indices is stored. Because one robot can have multiple cameras, the performance is split into a robot-specific part and one or more camera parts.<br />
<center>[[File:HCR.png|thumb|center|500px|Hardware configuration]]</center><br />
===Skill HCR===<br />
Some skills might require certain capabilities of the hardware (HCR) in order to perform as desired. To make it clear to the skill-matcher what these requirements are, each skill is prefixed with an HCR. This HCR should give bounds on the performance indicators in the configuration files, for instance: “to execute skill 1, a piece of hardware is required which can move in the x- and y-direction with at least 5 m/s”. The skill-matcher can, in turn, look at each configuration file to determine which hardware can drive in the x- and y-direction with at least this speed. As a simple first implementation, the HCR can be implemented as a structure, consisting of a robot-capability structure and a camera-capability structure:<br><br />
'''Struct''' ''HCR_struct'' {RobotCap,CameraCap}<br><br />
<br />
'''Struct''' ''RobotCap''{<br><br />
DOF_matrix<br><br />
Occupancy_bound_representation<br><br />
Signal_device<br><br />
Number_of_cams<br><br />
}<br><br />
'''Struct''' ''CameraCap''{<br><br />
DOF_matrix<br><br />
Resolution<br><br />
Frame_rate<br><br />
Detection_box_representation<br><br />
}<br><br />
Each entry in the HCR structure is already discussed. The DOF-matrix holds all information about the possible degrees of freedom and their bounds in one matrix. The rows of this matrix will represent the direction of the degree of freedom. The first column will say whether the robot has this DOF and the other columns will give upper and lower bounds. One example is given below:<br><br />
<center>[[File:skillHCR.png|thumb|center|500px|Skill-HCR]]</center><br />
This could reflect a robot which is able to move in the x- and y-direction and rotate around the ψ-angle. The movement in these directions is not limited in space, hence the infinite position bounds. The robot is able to drive with a maximum speed of 5 m/s and a maximum acceleration of 2 m/s^2 in both directions. It can rotate with a maximum speed of 8 rad/s and a maximum acceleration of 4 rad/s^2. For this example the bounds are symmetric; this need not be the case for every robot. A similar matrix can be made for every camera on the robot. <br><br />
===Hardware matching===<br />
Every skill that is implemented can be numbered, either predefined or during initialization of the system. Each hardware component and its cameras can be numbered as well. During initialization, the skill selector can check for every skill what the HCR are for that skill. It can then compare this to the configuration files of every hardware component. Based on this comparison, the hardware component can either perform the skill or not. A matrix can be constructed that stores which skill can be executed by which component. An example of such a matching matrix is given below:<br><br />
<center>[[File:HWMatching.png|thumb|center|500px|HW Matching matrix]]</center><br />
<br />
From this matrix, it can be seen that there are six skills in total and three agents available. Agent 1 has two cameras which it can use to perform the entire set of skills. Agent 2 has only one camera, with which it can only perform skills 1 and 4. Agent 3 can perform all skills with camera 3 and can even perform some skills with multiple cameras. It is thus clear that all skills can be performed with this set of agents. Some skills, like skill 6, can only be performed by two cameras, while skill 1 is very well covered. This matrix can be used to determine whether the set of agents is sufficient to cover all skills. It could also be used to determine which robot is most suited for which role, e.g. main referee or line referee. The matrix is constructed during initialization of the system. However, an agent could be added to or removed from the system afterwards. Therefore, the skill selector should periodically check which agents are still in the system and, if necessary, update the matching matrix. <br><br />
===Role restrictions===<br />
If there are multiple agents in the system, it could be necessary to assign a role to each of them. This role assignment could be determined beforehand, for instance by specifying a preferred role in the configuration file of each agent. Another way to assign roles is based on the matching matrix: the roles are then divided according to the skills each agent can perform. Some roles might impose restrictions on the agent. These restrictions can be interpreted in the same way as the HCR of the skill framework: where the HCR imposes minima on the hardware capabilities, the role restriction (RR) sets maximal allowed values for the hardware. For example, the line referee is only allowed to move along one side of the field with a maximum linear velocity of 3 m/s. The DOF-matrix of the RR structure might look like:<br><br />
<center>[[File:roleRestriction.png|thumb|center|500px|Role restriction matrix]]</center><br />
Suppose the field is 12 meters long and 9 meters wide. The agent performing the role described by this RR is only allowed to move alongside the field, within a margin of half a meter, over its entire length. The maximum absolute velocity at which it is allowed to do this is 3 m/s.<br><br />
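As a sketch, the RR can be stored in the same DOF-matrix format and checked against the reference that the path planner generates for the agent. The MATLAB fragment below uses the 12 m × 9 m example, assumes the coordinate origin lies at the centre of the field, and reuses the hypothetical column layout from the earlier DOF sketch.<br />
<pre>
% Hypothetical RR DOF-matrix for a line referee on a 12 m x 9 m field
% (columns: hasDOF, pos_min, pos_max, vel_max, acc_max; origin at field centre).
RR = [ 1   -6.0    6.0   3   Inf;     % x: along the full length of the field
       1   -5.0   -4.5   3   Inf;     % y: 0.5 m strip next to the field
       0    0      0     0   0   ];   % psi: no restriction stated -> ignored

% Check whether a planned reference position/velocity respects the role.
refPos = [2.0; -4.7; 0];              % hypothetical reference from the path planner
refVel = [2.5;  0.0; 0];
ok = all( RR(:,1) == 0 | ...
          (refPos >= RR(:,2) & refPos <= RR(:,3) & abs(refVel) <= RR(:,4)) );
</pre>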
<br />
===Role matching===<br />
The RR also restricts the skills which an agent with a certain role is allowed to perform. Once it is decided which agent will fulfill a certain role, the matching matrix needs to be updated. This update can only remove entries from the matching matrix, since a restriction can never add functionality to the hardware. This process raises a problem: if the role assignment is based on the matching matrix and the RR in turn changes this matrix, the role assignment might change. This will probably call for an iterative approach to determine which agent is best suited to a role while maintaining good coverage of all skills.<br />
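One way to realize this update is to re-evaluate, for the agent that received the role, which skills remain feasible under its RR and to clear the corresponding entries of the matching matrix. The sketch below assumes a hypothetical helper skillFeasibleUnderRR that compares a skill HCR with an RR; the iteration over candidate role assignments would be wrapped around this function.<br />
<pre>
% Sketch: prune the matching matrix after a role has been assigned.
%   M          - matching matrix (skills x camera/agent columns)
%   agentCols  - columns of M that belong to the agent that received the role
%   skillHCR   - cell array with the HCR structure of each skill
%   RR         - role restriction structure of the assigned role
% skillFeasibleUnderRR is a hypothetical check of an HCR against an RR.
function M = pruneMatchingMatrix(M, agentCols, skillHCR, RR)
    for s = 1:size(M, 1)
        if ~skillFeasibleUnderRR(skillHCR{s}, RR)
            M(s, agentCols) = 0;   % restrictions only remove capabilities
        end
    end
end
</pre>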
<br />
==Supervisor (and coordinator)==<br />
The supervisor is responsible for monitoring the system tasks and for coordinating the subsystems with respect to these tasks. This involves dynamically distributing subtasks amongst the subsystems in an efficient and effective manner. The operation of the supervisor can be divided into a default process flow and a set of alternative process flows. The latter are activated by a certain trigger occurring during the default process, and can be terminated by another trigger. <br><br />
The top half of Figure 1 shows the default process flow of the supervisor. For each subsystem, the process flow is divided into two sections: one (the colored rectangle) representing processes taking place on the subsystem hardware, and one (the black rectangle) representing the processes taking place on a separate processing unit. These processing units can be one and the same, or actually separated, in which case the world model data needs to be synchronized between these units.<br><br />
<center>[[File:supervisor_Flow.png|thumb|center|600px|Supervisor process flow]]</center><br />
Note that the central process in this default flow is “Update World Model”, indicating that the entire default process flow revolves around maintaining an accurate World Model (WM). Information, or lack of it, in this WM can then trigger an alternative process flow. The lower half of Figure 1 shows these alternative flows, five in total. Table 1 lists the names, descriptions and activating and terminating triggers for each alternative flow.<br />
<center>[[File:table_Supervisor.png|thumb|center|600px|Alternative flows, including descriptions and activation and termination triggers.]]</center><br />
===Supervisor Requirements===<br />
====Skills====<br />
<br />
=====Basic=====<br />
• Move drone<br><br />
• Move turtle<br><br />
• Take snapshot with top camera<br><br />
• Take snapshot with Ai-Ball<br><br />
• Take snapshot with Kinect<br><br />
• Take snapshot with omnivision<br><br />
• Whistle<br><br />
=====Advanced=====<br />
• Detect lines<br><br />
• Detect regions<br><br />
• Search ball<br><br />
• Locate ball<br><br />
• Determine whether ball is in/out<br><br />
• Search players<br><br />
• Locate players<br><br />
• Detect space between players<br><br />
• Locate drone<br><br />
• Plan paths for drone<br><br />
• Locate turtle<br><br />
• Plan paths for turtle<br><br />
<br />
====Control automata====<br />
<center>[[File:control_Automata_1.png|thumb|center|600px|]]</center><br />
<center>[[File:control_Automata_2.png|thumb|center|600px|]]</center><br />
<br />
====Observer automata====<br />
<center>[[File:Observer_Automata_1.png|thumb|center|600px|]]</center><br />
<center>[[File:Observer_Automata_2.png|thumb|center|600px|]]</center><br />
====Drone movement requirements====<br />
=====TAKEOFF=====<br />
Battery percentage above 20%<br><br />
Drone position is known in world model<br><br />
=====LAND=====<br />
Battery percentage not above 20%<br><br />
Drone is outside field<br><br />
=====(ENABLE REFERENCE TO) FLY TO HEIGHT OF 2 METER=====<br />
Drone is (approximately) at 1 meter<br><br />
=====MOVE TOWARDS CENTER OF FIELD=====<br />
Drone is close to the outer line of the field<br><br />
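These conditions can be read as guards on the drone's discrete transitions. The fragment below is only a sketch of how the supervisor might evaluate them from world-model data; the struct and field names are assumptions made for illustration.<br />
<pre>
% Sketch of the TAKEOFF / LAND guards listed above (hypothetical world-model fields).
wm.drone.batteryPct    = 45;     % example values
wm.drone.positionKnown = true;
wm.drone.insideField   = true;

canTakeOff = wm.drone.batteryPct > 20 && wm.drone.positionKnown;   % TAKEOFF
lowBattery = wm.drone.batteryPct <= 20;                            % LAND condition 1
outOfField = ~wm.drone.insideField;                                % LAND condition 2
% How the two LAND conditions combine (AND/OR) is not specified above.
</pre>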
<br />
====Path planning Requirements====<br />
=====PLAN PATH FOR DRONE=====<br />
Drone is (approximately) at reference height<br />
=====PLAN PATH FOR TURTLE=====<br />
Turtle is at the side-line<br />
<br />
====Whistle Requirements====<br />
Ball (in snapshot) is located in region with label “out”<br><br />
Two objects (in snapshot) are touching (and visible as one blob)<br />
<br />
====Enable/Disable detectors====<br />
=====LINE DETECTOR*=====<br />
Lines are expected to be visible in a snapshot<br><br />
* Enabling the algorithm for detecting lines will also enable the algorithm to label the regions separated by the detected lines.<br><br />
<br />
=====COLLISION DETECTOR=====<br />
'''&'''Two players are expected to be visible in a snapshot<br><br />
<br />
=====LOCATE BALL=====<br />
'''&''' Ball position is known in world model<br><br />
<br />
=====LOCATE PLAYERS=====<br />
'''&''' All player positions are known in world model<br></div>Asinhahttps://cstwiki.wtb.tue.nl/index.php?title=System_Architecture_MSD16&diff=39666System Architecture MSD162017-05-04T19:15:39Z<p>Asinha: /* LOCATE BALL */</p>
<hr />
<div><div STYLE="float: left; width:80%"><br />
</div><div style="width: 35%; float: right;"><center>{{:Content_MSD16_small}}</center></div><br />
__TOC__<br />
<br />
= Paradigm =<br />
There are many paradigms available for developing architectures for robotic systems. These paradigms are described in Chapter 8 of the [http://download.springer.com/static/pdf/864/bok%253A978-3-540-30301-5.pdf?originUrl=http%3A%2F%2Flink.springer.com%2Fbook%2F10.1007%2F978-3-540-30301-5&token2=exp=1490798126~acl=%2Fstatic%2Fpdf%2F864%2Fbok%25253A978-3-540-30301-5.pdf%3ForiginUrl%3Dhttp%253A%252F%252Flink.springer.com%252Fbook%252F10.1007%252F978-3-540-30301-5*~hmac=10768f14783408dd794f24107129219bf0bd18a79067a7f204b450620c0608b5 Springer Handbook of Robotics]. In this project the [http://cstwiki.wtb.tue.nl/images/20150429-EMC-TUe-CompositionPattern-nup.pdf paradigm] developed at KU Leuven in collaboration with TU Eindhoven was used. This paradigm is also followed at TechUnited. <br><br />
<center>[[File:pradigm.png|thumb|center|500px|Architecture paradigm]]</center><br />
This paradigm defines Tasks (blue block) using the objectives set with respect to the context of the project, i.e. what needs to be done. The Skills (yellow block) then define the implementation, i.e. how these tasks will be completed. The Hardware (grey block), i.e. the robots that will complete the tasks by implementing the skills, consists of the agents available at the system architects’ disposal; the choice of certain agents over others is influenced by the system requirements. The system requirements are in turn influenced by a number of factors, especially the objectives and the context of the project. The agents gather information from the environment; this information is first processed (filtered, fused, etc.) and then stored in the world model (green block), which allows it to be accessed afterwards. Finally, the visualization or user interface (orange block) is a tool to observe how the system sees the environment. <br><br />
Within the task block there is a sub-block, the Task Monitor (highlighted in red within the Task context block). This block was interpreted as a supervisor and a coordinator. It keeps an eye on the tasks and skills. Some tasks are completed by a series of skills. In such a case it becomes important to schedule skills, as these may have to be executed sequentially or in parallel. It might also become important for the system to track which skills have been completed and which need to be executed next. Task control feedback/feedforward was not considered in much detail in the system architecture developed for this project. <br><br />
<center>[[File:architecture_1.png|thumb|center|600px|Architecture]]</center><br />
<br />
= Layered Approach =<br />
<br />
The system architecture derived from the concepts presented in the above-described paradigm can be seen below. Every block but the GUI context is highlighted in this architecture. The Coordinator block interacts with the Supervisor to determine which skills need to be executed and is assisted by the Skill Matcher in picking the hardware agent best suited to execute the necessary skill. The concept behind the skill matcher is explained in a later section. <br><br />
<center>[[File:architecture_1.png|thumb|center|600px|Architecture]]</center><br />
While developing the architecture, a layered approach was used. Based on the objectives, and the requirements/constraints imposed by them, design choices were made. As shown in the Figure below starting from the top, with each subsequent layer the level of detail increases. <br><br />
<center>[[File:layered_Approach.png|thumb|center|450px|The three layers in the architecture]]</center><br />
<br />
= System Layer =<br />
Detecting predefined events and enforcing the corresponding rules were the two main system objectives that could be taken into account for this architecture. Two events, namely<br><br />
1. ball going out of pitch (B.O.O.P) and<br><br />
2. collision between players<br><br />
were considered to be in scope of this project. For detection and enforcement of these rules the Tasks and Skills blocks were filled as shown in the figure below.<br><br />
<center>[[File:Layer1.png|thumb|center|750px|System layer of the architecture]]</center><br />
= Approach Layer =<br />
Further, by taking into account some of the system requirements, one can bring more tasks and skills into scope. For example, if one of the requirements is ‘consistency’, it can be defined as ‘the autonomous referee system should be able to capture the gameplay dynamically’. This requirement might be further refined and result in the decision ‘the hardware must have multiple cameras in the field which allow effective capture of the active gameplay’. Further, this decision could be refined into having ‘multiple movable cameras in the field which allow effective capture of the active gameplay’. The current layer in the architecture represents an evolution towards implementation. This is reflected in the narrowing down of the elements in the Hardware block from sensors and/or devices to multiple cameras which can move. Though this statement is still vague, it is more refined than its predecessor.<br />
<center>[[File:Layered2.png|thumb|center|750px|The approach layer of the architecture]]</center><br />
<br />
= Implementation Layer =<br />
Referring back to the ''Three layers in the architecture'' diagram, as the requirements were further refined, several design choices were made regarding the software and hardware. These choices were influenced not only by requirements but also by constraints. For example, the use of a drone and the TechUnited TURTLE was imposed on the project. These did not come into the picture in the previous layers. The current layer of the architecture is the Implementation Layer, as depicted in the figure.<br><br />
<center>[[File:Layer3.png|thumb|center|700px|Implementation layer in the architecture]]</center><br />
At this level of the architecture, one can write the software. The major decisions made here concerned the software, such as the specific: <br><br />
1. image processing algorithms (=skills), <br><br />
2. sensor fusion algorithms (in the world model) and<br><br />
3. communication protocols (communication between different hardware components).<br><br />
This layer is also a realistic representation of the actual implementation the project team ended up with, i.e. only the detection-based tasks (and the corresponding skills).<br> Below is a screenshot of the Simulink implementation. It highlights the fact that the software developed in this project corresponds closely with the paradigm used for the system architecture. The congruity between the colors of the blocks here and the ones shown earlier in the description of the paradigm is evident. <br><br />
<br />
<center>[[File:Simulink_Arch.png|thumb|center|700px|Simulink implementation]]</center><br />
<br />
The Supervisor, Coordinator and Skill-Matcher were not part of the final implementation. But their conceptual design is presented here and can be further worked on and included in future implementations. <br><br />
<br />
==Skill-Matcher==<br />
For each defined skill, the skill-selector needs to determine which piece of hardware is able to perform this skill. Therefore the capabilities of each piece of hardware need to be known and registered somewhere in a predefined standard format. These hardware capabilities can be compared to the hardware capability requirements (HCR; specified in the skill framework) of each skill. This matching between hardware and skill can be done by the skill-selector during initialization of the system. To do this in a smart way, the description of the hardware configuration and the HCR need to be in a similar format. One possible way of doing this is discussed below.<br><br />
<br />
===Hardware configuration files===<br />
On the approach level of the system architecture, it was decided to use moving robots with cameras to perform the system tasks. Different robots might have different capabilities. This should not hinder the system from operating correctly. It should preferably even be possible to swap robots without losing system functionality. This is why the hardware is abstracted in the architecture. Each piece of hardware needs to know its own performance and share this information with the skill-selector. For the skill-selector to function properly, the performance of the hardware needs to be defined in a structured way. To do this, it is decided that every piece of hardware needs to have its own configuration file. In this file, a wide range of quantifiable performance indices is stored. Because one robot can have multiple cameras, the performance can be split into a robot-specific part and one or more camera parts.<br />
<center>[[File:HCR.png|thumb|center|500px|Hardware configuration]]</center><br />
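Such a configuration file could, for instance, be a small MATLAB script or MAT-file containing one robot-level struct plus an array of camera structs. The sketch below uses hypothetical values and field names chosen to mirror the capability structure discussed in this section.<br />
<pre>
% Sketch of a hardware configuration file with hypothetical values.
% Robot-specific part (DOF-matrix columns: hasDOF, pos_min, pos_max, vel_max, acc_max).
config.name                 = 'drone_1';
config.robot.DOF_matrix     = [ 1 -Inf Inf 5 2;    % x
                                1 -Inf Inf 5 2;    % y
                                1 -Inf Inf 8 4 ];  % psi
config.robot.signal_device  = true;                % e.g. able to signal/whistle
config.robot.number_of_cams = 1;

% One camera part per on-board camera.
config.camera(1).resolution = [640 480];           % pixels
config.camera(1).frame_rate = 30;                  % frames per second
config.camera(1).DOF_matrix = zeros(3, 5);         % rigidly mounted: no own DOF

save('drone_1_config.mat', 'config');              % made available to the skill-matcher
</pre>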
===Skill HCR===<br />
Some skills might require a certain capability of the hardware (HCR) in order to perform as desired. To make it clear to the skill-matcher what these requirements are, each skill is prefixed with an HCR. This HCR should give bounds on the performance indicators in the configuration files. For instance: “to execute skill 1, a piece of hardware is required which can move in the x- and y-directions at at least 5 m/s”. The skill-matcher can, in turn, look at each configuration file in order to determine which hardware can drive in the x- and y-directions at at least this speed. As a simple first implementation, the HCR can be implemented as a structure. This structure consists of a robot-capability structure and a camera-capability structure:<br><br />
'''Struct''' ''HCR_struct'' {RobotCap,CameraCap}<br><br />
<br />
'''Struct''' ''RobotCap''{<br><br />
DOF_matrix<br><br />
Occupancy_bound_representation<br><br />
Signal_device<br><br />
Number_of_cams<br><br />
}<br><br />
'''Struct''' ''CameraCap''{<br><br />
DOF_matrix<br><br />
Resolution<br><br />
Frame_rate<br><br />
Detection_box_representation<br><br />
}<br><br />
Each entry in the HCR structure has already been discussed. The DOF-matrix holds all information about the possible degrees of freedom and their bounds in one matrix. Each row of this matrix represents one degree of freedom; the first column states whether the robot has this DOF, and the remaining columns give the upper and lower bounds. One example is given below:<br><br />
<center>[[File:skillHCR.png|thumb|center|500px|Skill-HCR]]</center><br />
This could reflect a robot which is able to move in the x- and y-direction and rotate around the ψ-angle. The movement in these directions is not limited in space, hence the infinite position bounds. The robot is able to drive with a maximum speed of 5 m/s and a maximum acceleration of 2 m/s^2 in both directions. It can rotate with a maximum speed of 8 rad/s and a maximum acceleration of 4 rad/s^2. For this example the bounds are symmetric; this need not be the case for every robot. A similar matrix can be made for every camera on the robot. <br><br />
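In MATLAB the HCR structure above can be instantiated directly as nested structs. The sketch below fills it with illustrative values for a skill that needs an agent moving in x and y at 5 m/s or more and a camera of at least 640×480 at 25 fps; the concrete representations chosen for the occupancy bound and detection box are assumptions.<br />
<pre>
% Sketch: HCR of a hypothetical skill (all values illustrative).
HCR_struct.RobotCap.DOF_matrix = [ 1 -Inf Inf 5 2;     % x required, >= 5 m/s
                                   1 -Inf Inf 5 2;     % y required, >= 5 m/s
                                   0  0   0   0 0 ];   % psi not required
HCR_struct.RobotCap.Occupancy_bound_representation = 'cylinder';   % assumed format
HCR_struct.RobotCap.Signal_device  = false;
HCR_struct.RobotCap.Number_of_cams = 1;

HCR_struct.CameraCap.DOF_matrix  = zeros(3, 5);        % no camera motion required
HCR_struct.CameraCap.Resolution  = [640 480];          % minimum resolution
HCR_struct.CameraCap.Frame_rate  = 25;                 % minimum frame rate
HCR_struct.CameraCap.Detection_box_representation = 'axis-aligned box';  % assumed
</pre>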
===Hardware matching===<br />
Every skill that is implemented can be numbered, either predefined or during initialization of the system. Each hardware component and its cameras can be numbered as well. During initialization, the skill selector can look up, for every skill, the HCR of that skill. It can then compare this to the configuration files of every hardware component. Based on this comparison, it is determined whether or not the hardware component can perform the skill. A matrix can be constructed that stores which skill can be executed by which component. An example of such a matching matrix is given below:<br><br />
<center>[[File:HWMatching.png|thumb|center|500px|HW Matching matrix]]</center><br />
<br />
From this matrix it can be seen that there exist six skills in total and three agents are available. Agent 1 has two cameras with which it can perform the entire set of skills. Agent 2 has only one camera, with which it can only perform skills 1 and 4. Agent 3 can perform all skills with camera 3 and can even perform some skills with multiple cameras. From this matrix it becomes clear that all skills can be performed with this set of agents. Some skills, like skill 6, can only be performed by two cameras, while skill 1 is very well covered. This matrix can be used to determine whether the set of agents is sufficient to cover all skills. It could also be used to determine which robot is most suited for which role, e.g. main referee and line referee. This matrix is constructed during initialization of the system. However, it could happen that an agent is added to or removed from the system. Therefore, the skill selector should frequently check which agents are still in the system and, if necessary, update the matching matrix. <br><br />
===Role restrictions===<br />
If there are multiple agents in the system, it could be necessary to assign roles to each of them. This role assignment could be determined beforehand, for instance by specifying a preferred role in the configuration file of each agent. Another way to assign roles is based on the matching matrix: the roles are divided based on the skills each agent can perform. Some roles might impose restrictions on the agent. These restrictions can be interpreted in the same way as the HCR of the skill framework. Where the HCR imposes some minima on the hardware requirements, the role restriction (RR) sets some maximal allowed values for the hardware. For example, the line referee is only allowed to move alongside one side of the field with a maximum linear velocity of 3 m/s. The DOF-matrix of the RR structure might look like:<br><br />
<center>[[File:roleRestriction.png|thumb|center|500px|Role restriction matrix]]</center><br />
Suppose the field is 12 meters long and 9 meters wide. The agent performing the role described by this RR is only allowed to move alongside the field, within a margin of half a meter, over its entire length. The maximum absolute velocity at which it is allowed to do this is 3 m/s.<br><br />
<br />
===Role matching===<br />
The RR also restricts the skills which an agent with a certain role is allowed to perform. Once it is decided which agent will fulfill a certain role, the matching matrix needs to be updated. This update can only remove entries from the matching matrix, since a restriction can never add functionality to the hardware. This process raises a problem: if the role assignment is based on the matching matrix and the RR in turn changes this matrix, the role assignment might change. This will probably call for an iterative approach to determine which agent is best suited to a role while maintaining good coverage of all skills.<br />
<br />
==Supervisor (and coordinator)==<br />
The supervisor is responsible for monitoring the system tasks and for coordinating the subsystems with respect to these tasks. This involves dynamically distributing subtasks amongst the subsystems in an efficient and effective manner. The operation of the supervisor can be divided into a default process flow and a set of alternative process flows. The latter are activated by a certain trigger occurring during the default process, and can be terminated by another trigger. <br><br />
The top half of Figure 1 shows the default process flow of the supervisor. For each subsystem, the process flow is divided into two sections: one (the colored rectangle) representing processes taking place on the subsystem hardware, and one (the black rectangle) representing the processes taking place on a separate processing unit. These processing units can be one and the same, or actually separated, in which case the world model data needs to be synchronized between these units.<br><br />
<center>[[File:supervisor_Flow.png|thumb|center|600px|Supervisor process flow]]</center><br />
Note that the central process in this default flow is “Update World Model”, indicating that the entire default process flow revolves around maintaining an accurate World Model (WM). Information, or lack of it, in this WM can then trigger an alternative process flow. The lower half of Figure 1 shows these alternative flows, five in total. Table 1 lists the names, descriptions and activating and terminating triggers for each alternative flow.<br />
<center>[[File:table_Supervisor.png|thumb|center|600px|Alternative flows, including descriptions and activation and termination triggers.]]</center><br />
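The default and alternative flows can be thought of as a small state machine in which world-model triggers activate an alternative flow and terminate it again. The fragment below sketches only that dispatch logic; the trigger names and flow identifiers are hypothetical and do not correspond one-to-one with Table 1.<br />
<pre>
% Sketch of the supervisor's flow dispatch (hypothetical triggers and flow names).
% Given the current flow and the latest world model, return the next flow.
function flow = dispatchFlow(flow, wm)
    switch flow
        case 'DEFAULT'                       % default flow: keep updating the WM
            if ~wm.ball.positionKnown
                flow = 'SEARCH_BALL';        % activating trigger
            elseif wm.ball.inRegionOut
                flow = 'HANDLE_BOOP';
            end
        case 'SEARCH_BALL'
            if wm.ball.positionKnown         % terminating trigger
                flow = 'DEFAULT';
            end
        case 'HANDLE_BOOP'
            if wm.whistleBlown               % terminating trigger
                flow = 'DEFAULT';
            end
    end
end
</pre>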
===Supervisor Requirements===<br />
====Skills====<br />
<br />
=====Basic=====<br />
• Move drone<br><br />
• Move turtle<br><br />
• Take snapshot with top camera<br><br />
• Take snapshot with Ai-Ball<br><br />
• Take snapshot with Kinect<br><br />
• Take snapshot with omnivision<br><br />
• Whistle<br><br />
=====Advanced=====<br />
• Detect lines<br><br />
• Detect regions<br><br />
• Search ball<br><br />
• Locate ball<br><br />
• Determine whether ball is in/out<br><br />
• Search players<br><br />
• Locate players<br><br />
• Detect space between players<br><br />
• Locate drone<br><br />
• Plan paths for drone<br><br />
• Locate turtle<br><br />
• Plan paths for turtle<br><br />
<br />
====Control automata====<br />
<center>[[File:control_Automata_1.png|thumb|center|600px|]]</center><br />
<center>[[File:control_Automata_2.png|thumb|center|600px|]]</center><br />
<br />
====Observer automata====<br />
<center>[[File:Observer_Automata_1.png|thumb|center|600px|]]</center><br />
<center>[[File:Observer_Automata_2.png|thumb|center|600px|]]</center><br />
====Drone movement requirements====<br />
=====TAKEOFF=====<br />
Battery percentage above 20%<br><br />
Drone position is known in world model<br><br />
=====LAND=====<br />
Battery percentage not above 20%<br><br />
Drone is outside field<br><br />
=====(ENABLE REFERENCE TO) FLY TO HEIGHT OF 2 METER=====<br />
Drone is (approximately) at 1 meter<br><br />
=====MOVE TOWARDS CENTER OF FIELD=====<br />
Drone is close to the outer line of the field<br><br />
<br />
====Path planning Requirements====<br />
=====PLAN PATH FOR DRONE=====<br />
Drone is (approximately) at reference height<br />
=====PLAN PATH FOR TURTLE=====<br />
Turtle is at the side-line<br />
<br />
====Whistle Requirements====<br />
Ball (in snapshot) is located in region with label “out”<br><br />
Two objects (in snapshot) are touching (and visible as one blob)<br />
<br />
====Enable/Disable detectors====<br />
=====LINE DETECTOR*=====<br />
Lines are expected to be visible in a snapshot<br><br />
* Enabling the algorithm for detecting lines will also enable the algorithm to label the regions separated by the detected lines.<br><br />
<br />
=====COLLISION DETECTOR=====<br />
Two players are expected to be visible in a snapshot<br><br />
<br />
=====LOCATE BALL=====<br />
'''&''' Ball position is known in world model<br><br />
<br />
=====LOCATE PLAYERS=====<br />
'''&''' All player positions are known in world model<br></div>Asinha