AutoRef implementation

From Control Systems Technology Group
The implementation for the [[AutoRef - Autonomous Referee System|AutoRef]] autonomous referee for [https://msl.robocup.org/ RoboCup Middle Size League (MSL) robot soccer] is the execution of the AutoRef system design.  


Implemented design elements are based on [[AutoRef_system_architecture#Functional_specification|AutoRef's functional specification]] as part of [[AutoRef system architecture|its system architecture]]; this specification is in turn based on the law-task-skill breakdown of the [https://msl.robocup.org/rules MSL rulebook]. The designs of the implemented tasks in the AutoRef project are provided in the main articles linked in the sections below.
 
[[#Team contributions|Team contributions]] to AutoRef's implementation from 2021 onwards are integrated within the implementation pages. Contributions from 2016–2020 are detailed by their respective teams in their [[AutoRef - Autonomous Referee System#Team contributions|own pages]].


__TOC__


==Distance violation checking==
 
:''Main article: [[AutoRef distance violation checking]]''
===Objective statement===
 
The main objective of the implementation was to detect violations of the rules governing the distance between the ball and the players during the following game states:
*Free kick
*Kick-off
*Corner kick
*Goal kick
*Throw-in
*Dropped-ball
*Penalty kick
 
These rules are described in Laws 8, 10, 13, 14, and 15 of the MSL rulebook.
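To make the check concrete, the following Python sketch shows the core of such a distance-violation test (the project itself was implemented in MATLAB; Python is used here only for illustration). The function name, the per-state thresholds, and all numeric values are placeholder assumptions, not project data; the actual minimum distances are specified per game state in the MSL rulebook.

```python
import math

# Illustrative sketch: given 2-D positions (in meters) of the ball and
# the opposing players, flag any player closer than the minimum distance
# prescribed for the current restart. The thresholds below are
# PLACEHOLDERS, not the official rulebook values.
MIN_DISTANCE_M = {
    "free_kick": 3.0,
    "kick_off": 3.0,
    "corner_kick": 3.0,
    "goal_kick": 3.0,
    "throw_in": 3.0,
    "dropped_ball": 1.0,
    "penalty_kick": 3.0,
}

def distance_violations(game_state, ball_xy, opponent_positions):
    """Return the indices of opponents violating the minimum distance."""
    min_d = MIN_DISTANCE_M[game_state]
    bx, by = ball_xy
    return [
        i for i, (px, py) in enumerate(opponent_positions)
        if math.hypot(px - bx, py - by) < min_d
    ]

# Example: one opponent too close to the ball during a free kick.
print(distance_violations("free_kick", (0.0, 0.0), [(1.0, 1.0), (4.0, 0.0)]))
# -> [0]
```

In the actual system the positions would come from the vision pipeline described below, and the check would run at every processed frame.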
 
===Motivation===
This objective was chosen for several reasons:

* Analysis of past projects showed that this functionality had never been designed before
* Interviews with stakeholders (the MSL referees) led to the conclusion that rules of this kind are hard for a human being to monitor
* A proof of concept for the developed functional specification was desirable
* The learning goals of the team members correspond to the technical solutions needed to develop this functionality
 
===Scope of work===
 
The following topics were included in the implementation scope:

* Requirements formulation
* Development of the architectural decomposition
* Development of the individual code blocks
* Integration of the individual code
* Software testing on images and videos
 
These topics are explained in detail in the following sections.
 
==Process model==
 
===Introduction===
The design team's development activities need the support of a process model. In this project, the V-model was chosen to guide the development procedure from requirements engineering to system validation. Due to the particularities of the project, some details of the model were changed. At the same time, an agile approach was combined with the traditional V-model during system development, which makes project progress more flexible and efficient.
 
===Use of V-model===
 
The V-model has the following advantages for the development of the project:

* The project is based on machine vision and software algorithms; the V-model was first proposed in the software development environment and has matured in that field.
* The project team has five members, all of whom can participate in the development of the subsystems, which can be developed in parallel. Once the system architecture is determined, the V-model can greatly improve development efficiency.
* System development starts in the fourth week, meaning the team must complete it in five weeks; the mature, ready-made V-model process saves a lot of time on project management.
<center>[[File:V-model.jpg|800px|frame|none|alt=Alt text|Left:V-model used for guiding the project process for design team; Right: Test plan derived from the V-model]]</center>
 
Based on the general V-model, a detailed test plan was made to verify the system from both functional and performance perspectives.

Based on the derived requirements, functional and performance requirements were set up, and the related tests were planned as shown in the figure above. This plan supplements the details of the V-model and adds more detailed test steps and iterations in the test and verification phases. The technical blocks were integrated in the first phase; then several images of typical use cases were created in the simulation environment in order to verify the functional requirements. Videos were created to test the performance of the system in particular scenarios. The code was updated iteratively after several rounds of testing. After the code was verified, a simulated game video was created in the simulation environment to illustrate how the system works in the 'real' world.
 
===Agile approach===
Due to the limited project time and the various uncertainties in the development process, an agile approach was applied, which is mainly reflected in the system architecture and design choices.
 
There were two main difficulties in the development of this project:

* How to implement an efficient and fast detection algorithm?
* How to achieve accurate image capture in reality?

Usually, the algorithm is executed after the system obtains the image, but the design of the vision system involves the following two considerations, which greatly increase the complexity of the system design:

* A fixed camera or a moving camera
* One camera or multiple cameras
 
After careful evaluation, the team concluded that designing the vision system first would greatly reduce the time available for algorithm development, which was not the desired result. Therefore, the development scheme was based on the agile approach: a single fixed camera is used initially, and a simulated game situation was created in the software simulation environment as a reference sample for algorithm development. The vision system can then be optimized after the algorithm is developed.
 
The main idea is to quickly design the algorithm and check its performance, which also reflects the 'decide fast' principle of the agile approach.
 
==Major design choices==
 
===Programming language===
 
MATLAB was chosen as the programming language due to the availability of built-in functions and documentation, which is useful for quickly testing a proof of concept of the implemented algorithm.
 
===Selection of test environment===
 
It was decided to use a simulation environment to quickly test and verify the functionality of the ball-player distance violation check algorithm. It was desirable to verify the implementation in a simulation environment before committing to a specific choice of hardware. Other factors leading to this choice included time limitations, limited access to the Tech United playing field (COVID restrictions), and the limited availability of match video footage with the desired qualities (RGB top view).
 
A custom simulation environment was built using MATLAB, Simulink 3D Animation, and the Virtual Reality Modeling Language (VRML). Another option was GreenField, the visualization software used by Tech United to replay recorded match data. This option was not pursued due to the team's limited familiarity with Linux systems and a potential dependency on Tech United developers, which could have led to a time-consuming learning curve.
 
===Vision system parameters===
 
The main design choices to be made were regarding the selection of the vision system, involving the following aspects:
# Mobility of the camera(s) (static vs moving)
# Location(s) of the camera(s)
# Camera resolution
 
 
Motivations for choosing a static camera were as follows:

*Simplified localization requirements
*Simplified implementation architecture
*Less risk of invasiveness (spatial and audio)
 
 
Motivations for selecting a top camera view were as follows:

*With a top-view camera, projection errors can be minimized, which makes the implementation easier
*A movable top-view camera that stays directly above the ball would avoid perspective distortions to a greater extent, but:
:* Localization uncertainty would need to be accounted for in the case of drones
:* Multiple cameras might still be needed to detect events that are not in the vicinity of the ball
*The entire field can be viewed using a camera of sufficient height or field of view
 
 
Parameters to consider when using a top-view camera:

*Mounting height
*Field of view
*Resolution or image size
 
 
Parameters to consider for the ball-player distance violation decision-making algorithm:

*Pixel-to-meter ratio, or resolution
*Frequency rate: the rate required for the decision-making algorithm is defined based on
:*the pixel-to-meter ratio,
:*worst-case scenarios defined considering robot dimensions, speed, and acceleration limits
*Maximum allowed perspective distortion
:*When using a single top camera, perspective distortions are unavoidable
:*High distortions can affect the visibility of the ball and the separation of team players, and can also lead to incorrect position estimates of the players
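To illustrate how these parameters interact, a back-of-the-envelope calculation can relate the pixel-to-meter ratio and a worst-case robot speed to a required frame rate. All numbers below are assumed placeholders, not values from the project, and Python is used here purely for illustration:

```python
# Back-of-the-envelope link between ground resolution, worst-case robot
# speed, and the frame rate the decision-making algorithm would need.
# All three inputs are ASSUMED placeholder values, not project data.
v_max_mps = 5.0          # assumed worst-case robot speed, m/s
meters_per_pixel = 0.01  # assumed ground resolution of the top camera
max_error_px = 10        # tolerated per-frame displacement, in pixels

max_error_m = max_error_px * meters_per_pixel  # 0.1 m between frames
required_fps = v_max_mps / max_error_m         # minimum frame rate

print(required_fps)  # -> 50.0
```

The same reasoning works in reverse: fixing an achievable frame rate bounds the position error that the worst-case robot motion can introduce between consecutive decisions.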
 
 
Possible issues of using a single static top camera:
 
*Perspective distortions
 
:*Objects that are not directly below the camera would be seen at an oblique angle
 
*Occlusions
 
:*The ball may be occluded by players and thus not completely visible
:*Players that are too close to each other may not be detected separately using a single camera
 
*Limitations on accuracy
 
:*Placing the camera high enough above the field that the entire field is visible could make objects very small in the images and affect detection accuracy
:*Using a higher-resolution camera would improve detection accuracy
 
 
Final decision:
 
Considering the time limitations and the implementation complexity of using multiple or moving cameras, it was decided to test the ball-player distance violation algorithm with a single-top-camera concept. The camera was taken to be at a height of 12 m above the center of the field, facing downwards. The field of view was kept at a manageable value of 1.2 radians, such that perspective distortions are minimized. Meanwhile, to keep the resolution manageable, the images were taken to be Full HD (1920 × 1080) RGB images.
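The geometry implied by these choices can be checked with a short calculation. The sketch below (in Python, for illustration only; the project used MATLAB) assumes that the 1.2 rad field of view applies to the horizontal image axis, which the text does not specify:

```python
import math

# Ground footprint and pixel-to-meter ratio implied by the chosen setup:
# camera 12 m above the field center, 1.2 rad field of view, 1920x1080
# images. ASSUMPTION: the 1.2 rad FOV is along the horizontal image axis.
height_m = 12.0
fov_rad = 1.2
image_width_px = 1920

ground_width_m = 2 * height_m * math.tan(fov_rad / 2)
px_per_meter = image_width_px / ground_width_m

print(round(ground_width_m, 2), round(px_per_meter, 1))  # -> 16.42 116.9
```

Under this assumption, the camera sees a ground strip roughly 16.4 m wide, at roughly 117 pixels per meter along the image width, which can be compared against the field dimensions given in the MSL rulebook.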
 
==Requirements==
 
===Functional requirements===
 
===Performance requirements===
 
====Frequency====
 
====Accuracy====
 
===Other context information===
 
====Colour detection requirement====
 
====Minimal distortion requirement====
 
 
==Architecture decomposition==
 
==Explanation of individual code blocks==
 
===Zone of field detection===
 
===Ball detection===
 
===Player detection===
 
===Area of interest===
 
===Player classification===
 
===Decision making function===
 
 
==Verification==
 
===Image use case testing===
 
===Video use case testing===
 
===Long video simulation===
 
==Conclusion and recommendations for future work==
 
===Conclusion===
 
===Recommendations===

Latest revision as of 09:54, 1 April 2021