PRE2017 3 Groep5

From Control Systems Technology Group

Revision as of 13:25, 18 February 2018

Group members

Bogdans Afonins, 0969985

Andrei Pintilie, 0980402

Stijn Slot, 0964882

Andrei Agaronian, 1017525

Veselin Manev, 0939171

Project definition

Subject

The world population is rising rapidly, increasing the demand for commodities such as eggs, meat, milk, leather and wool. The supply can be increased by either increasing the number of livestock or their productivity. Farmers are forced to keep hundreds of animals, which makes manual tracking of individual animals impractical. Domestic animals can get sick, get lost or be stolen, and this is often difficult to detect when the animal lives among hundreds of others. Current advances in technology can be adopted in the farming sector to address these problems. This will ensure the animals' welfare, increase productivity and ease the work of farmers around the world.

Objectives

The objective is to design and possibly implement a solution for automation in the farming sector. More precisely, to realize an animal tracking model that first distinguishes individual animals and then determines their health status based on their behavior.

Users

User requirements

Background

Animal sickness

Animal detection

RFID

Radio Frequency Identification (RFID) tags can store and transmit data through electromagnetic transmission. RFID readers can detect RFID tags within a certain range. A combination of RFID tags and readers can therefore be used to detect moving objects such as animals.

In a paper by Seol et al, RFID tags are used for tracking a large number of moving objects.[1] The idea is as follows: each entity that is to be tracked is equipped with a basic RFID tag that can receive queries from and respond to so-called readers. The readers are static and are positioned all around the area. Every reader has a certain range it can operate in, so it can detect and communicate with tags only within that range, for example 5 or 10 meters. The readers pass the presence information of a certain tag to a central server, which stores this information appropriately. The central server is responsible for gathering the data and operating on it, for example by approximating the path a tag took, or by keeping track of the number of times a certain tag appeared in a certain location.
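The reader-to-server flow described above can be sketched in a few lines of Java. The class and method names below are our own illustration, not taken from the paper: readers report (tag, reader) sightings, and the server keeps a per-tag sighting count and an approximate path.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the reader/central-server scheme: readers report which tag
// they saw; the server counts appearances per location and reconstructs
// an approximate path per tag from the order of sightings.
public class RfidTracker {
    // tagId -> chronological list of reader locations where it was seen
    private final Map<String, List<String>> paths = new HashMap<>();
    // "tagId@readerId" -> number of sightings
    private final Map<String, Integer> counts = new HashMap<>();

    public void report(String tagId, String readerId) {
        paths.computeIfAbsent(tagId, k -> new ArrayList<>()).add(readerId);
        counts.merge(tagId + "@" + readerId, 1, Integer::sum);
    }

    public List<String> approximatePath(String tagId) {
        return paths.getOrDefault(tagId, List.of());
    }

    public int sightings(String tagId, String readerId) {
        return counts.getOrDefault(tagId + "@" + readerId, 0);
    }
}
```

A real deployment would add timestamps and filter out duplicate reads from overlapping reader ranges; this sketch only shows the bookkeeping.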

In a US patent by Huisma, RFID tags are used for detecting animal visits to feeding stations.[2] Animals are equipped with an RFID tag that can be read in close proximity to a feeding station. These detected visits are used together with weighing devices in the feeding troughs to measure the difference in weight before and after a consumption event, using a mathematically weighted filter technique. The reduction in food is divided between the RFID tag last seen and the next one. By detecting animals at the food stations with RFID tags, the food intake of each animal can be recorded. This information can be used to find animals with abnormal feeding behaviour. The patent mentions the obstacles of this method, namely inaccurate RFID readings (a delay of a few seconds, readings by other stations) and inaccurate food reduction measurements (wind, rodents, inaccurate division).
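The attribution idea can be illustrated with a small Java sketch. This is a simplification under our own assumptions: we split each weight drop evenly between the previously seen tag and the current one, and we omit the patent's weighted filtering of noisy scale readings.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of per-animal feed intake recording: the drop in trough weight
// between consecutive readings is divided between the tag last seen at
// the station and the tag seen now. (The patent additionally filters
// noisy scale readings, which this sketch omits.)
public class FeedMonitor {
    private final Map<String, Double> intake = new HashMap<>(); // tagId -> grams eaten
    private double lastWeight;
    private String lastTag = null;

    public FeedMonitor(double initialTroughWeight) {
        this.lastWeight = initialTroughWeight;
    }

    // Called when a tag is read at the station together with a new scale reading.
    public void onReading(String tagId, double troughWeight) {
        double eaten = lastWeight - troughWeight;
        if (eaten > 0) {
            if (lastTag == null || lastTag.equals(tagId)) {
                intake.merge(tagId, eaten, Double::sum);
            } else {
                // Split evenly between the last-seen tag and the current one.
                intake.merge(lastTag, eaten / 2, Double::sum);
                intake.merge(tagId, eaten / 2, Double::sum);
            }
        }
        lastWeight = troughWeight;
        lastTag = tagId;
    }

    public double intakeOf(String tagId) {
        return intake.getOrDefault(tagId, 0.0);
    }
}
```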

Camera detection

Cameras and computer vision can be used to detect and monitor animals. Different camera technologies, such as RGB, infrared (IR) and time-of-flight (TOF), can all be used for animal identification, location tracking and status monitoring.

In a paper by Zhu et al, a 3D machine vision system for livestock is described.[3] In previous papers, Internet Protocol (IP) cameras have been used to track the weight of animals and to detect when they become unhealthy. IP cameras capture RGB images, which makes them sensitive to room lighting, shadows and contact between animals. To tackle these problems, the method in the paper adds depth information from an IR camera to the RGB image. This gives the depth of every pixel, yielding true 3D data for more accurate detection of the animals. The authors use a Microsoft Kinect, which has an RGB camera, an IR projector and an IR camera. By setting different thresholds in software, it is possible to estimate the weight of a pig very accurately and identify those that are over- or underweight.
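The depth-thresholding step can be illustrated with a toy example. Pixels whose depth falls between two thresholds (i.e. at the expected back height of a pig above the floor) are counted as animal, and the resulting pixel area is mapped to a weight estimate. The thresholds and the area-to-weight factor below are made-up illustration values, not numbers from the paper.

```java
// Toy depth-image thresholding: count pixels whose depth lies within a
// band as belonging to the animal, then map that pixel area to a weight
// estimate with a hypothetical linear factor.
public class DepthSegmenter {
    public static int animalArea(double[][] depth, double minDepth, double maxDepth) {
        int area = 0;
        for (double[] row : depth) {
            for (double d : row) {
                if (d >= minDepth && d <= maxDepth) {
                    area++;
                }
            }
        }
        return area;
    }

    // Hypothetical linear area-to-weight mapping (kg per pixel).
    public static double estimateWeightKg(int areaPixels, double kgPerPixel) {
        return areaPixels * kgPerPixel;
    }
}
```

In practice the mapping from area to weight would be calibrated against known animal weights, and segmentation would also need to separate touching animals.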

In a paper by Salau et al, a TOF camera is used for determining body traits of cows.[4] They first introduce the manual method of body trait determination, which relies on two measures used to describe a cow's body condition: the Body Condition Score, gathered by visually and manually judging the fat layer over specific bone structures and how sunken the animal's rear area is, and the BackFat Thickness (BFT). This manual method is more expensive, more stressful for the animals, prone to errors during manual data transcription, and unable to provide the large volumes of data needed for genetic evaluation that an automated method could. The paper then focuses on introducing and explaining the automated system, which relies on data collected from a TOF camera mounted in a cow barn; the technical aspects of the TOF method and its implementation, with tests and numbers, are presented. The system was able to carry out camera setup, calibration, animal identification, image acquisition, sorting, segmentation, determination of the region of interest and extraction of body traits automatically. The authors conclude that applying TOF to body trait determination is promising, since traits could be gathered at a precision comparable to BFT. However, the animal effect is very large, so further analyses are needed to identify the cows' properties that lead to differences in image quality, measurement reliability and trait values.

A paper by Kumar et al focuses on tracking pet animals that are lost.[5] They note that an increasing number of pet animals are abandoned, lost or swapped, and that the current methods to identify and distinguish them, such as ear-tagging, ear-tipping or notching, and embedding microchips in the animal's body, are manual and not effective. The authors point out that these methods are not robust and do not solve the problem of identifying an individual animal. The idea is instead to use animal biometric characteristics to recognize an individual. The paper proposes an automatic recognition system for pet animals (dogs): facial images are used for recognition, and surveillance cameras are used for tracking. The results of the research are quite impressive, but the system has yet to be tested in a real-life environment.

Similarly, a paper by Yu et al describes an automated method of identifying wildlife species using pictures captured by remote camera traps.[6] The researchers not only describe the technical aspects of the method, but also test it on a dataset of over 7,000 camera trap images of 18 species from two different field sites, achieving an average classification accuracy of 82%. In summary, it was shown that object recognition techniques from computer vision can effectively recognize and identify wild mammals in sequences of photographs taken by camera traps in nature, which are notorious for high levels of noise and clutter. As future work, the authors plan to include biometric features that are important for species analysis in the local features, such as color, spots, and body size, which are partly responsible for determining body traits.

Animal monitoring

Sensors

Monitoring devices

References

  1. Seol, S., Lee, E. K., & Kim, W. (2017). Indoor mobile object tracking using RFID. Future Generation Computer Systems, 76, 443-451.
  2. Huisma, C. (2015). U.S. Patent No. 8,930,148. Washington, DC: U.S. Patent and Trademark Office.
  3. Zhu, Q., Ren, J., Barclay, D., McCormack, S., & Thomson, W. (2015, October). Automatic animal detection from kinect sensed images for livestock monitoring and assessment. In Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on (pp. 1154-1157). IEEE.
  4. Salau, J., Haas, J. H., Junge, W., Bauer, U., Harms, J., & Bieletzki, S. (2014). Feasibility of automated body trait determination using the SR4K time-of-flight camera in cow barns. SpringerPlus, 3(1), 225.
  5. Kumar, S., & Singh, S. K. (2016). Monitoring of pet animal in smart cities using animal biometrics. Future Generation Computer Systems.
  6. Yu, X., Wang, J., Kays, R., Jansen, P. A., Wang, T., & Huang, T. (2013). Automated identification of animal species in camera trap images. EURASIP Journal on Image and Video Processing, 2013(1), 52.

Project

Approach

In order to complete the project within the deadlines, we plan to start reading and summarizing the papers in the first week, followed by establishing the USE aspects and the impact of the technology on the animals. In the fifth week we plan to start working on the prototype: a simulation in Java that shows a location (to be determined) and a few types of animals (chosen according to the sick-behavior study). The user can then select an animal, which becomes sick; the simulation detects this and reports it to the user. During the last weeks we also have to specify why this is robotics and what advantages and disadvantages are present.

Planning

Planning v1.png

Milestones

Milestones are shown in the planning picture. In the first week, we plan to have summarized the papers and to use them to identify patterns in sick animals' behavior and the technologies that can detect these changes. In the third week, the milestone is to deliver a full analysis of the USE aspects. During the following two weeks we expect the simulation to work. The last milestone is to prepare all deliverables for handing in.

Deliverables

As deliverables we decided to prepare the following:

A presentation, which will be held in the last week. During this presentation, all of our work will be presented and the simulation will be run.

A simulation of the subject treated: for example, a barn with many animals (about 200-300) of different species (cows, pigs, chickens). The user will be allowed to place the identification technologies around the barn and "make" some of the animals sick. The simulation shall discover which animal is sick by analyzing its behavior and notify the user.
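A minimal skeleton for this simulation could look as follows. Everything here is a design sketch under our own assumptions: each animal carries an activity level, "making" an animal sick lowers it, and the detector flags animals whose activity falls well below the herd average.

```java
import java.util.ArrayList;
import java.util.List;

// Skeleton for the planned barn simulation: each animal has an activity
// level; sickness sharply reduces it, and the detector flags animals
// whose activity is well below the herd average. The detection rule and
// constants are illustrative, not a fixed specification.
public class BarnSimulation {
    static class Animal {
        final String id;
        double activity; // e.g. meters walked per hour

        Animal(String id, double activity) {
            this.id = id;
            this.activity = activity;
        }

        void makeSick() {
            activity *= 0.3; // a sick animal moves much less
        }
    }

    // Flag animals whose activity is below a fraction of the herd average.
    static List<String> detectSick(List<Animal> herd, double fraction) {
        double avg = herd.stream().mapToDouble(a -> a.activity).average().orElse(0);
        List<String> sick = new ArrayList<>();
        for (Animal a : herd) {
            if (a.activity < fraction * avg) {
                sick.add(a.id);
            }
        }
        return sick;
    }
}
```

The real simulation would track richer behavior (feeding, position, temperature) per species, but the same compare-against-the-herd idea applies.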

A research paper on the technology described above, which will take into account the advantages, disadvantages, costs, and impact of such an implementation.

Roles

Coaching Questions

Coaching Questions Group 5