PRE2019 4 Group3


SPlaSh: The Plastic Shark


Group members

Student name Student ID Study E-mail
Kevin Cox 1361163 Mechanical Engineering k.j.p.cox@student.tue.nl
Menno Cromwijk 1248073 Biomedical Engineering m.w.j.cromwijk@student.tue.nl
Dennis Heesmans 1359592 Mechanical Engineering d.a.heesmans@student.tue.nl
Marijn Minkenberg 1357751 Mechanical Engineering m.minkenberg@student.tue.nl
Lotte Rassaerts 1330004 Mechanical Engineering l.rassaerts@student.tue.nl

Introduction

NOTE THAT WE CHANGED OUR PROBLEM STATEMENT: the general subject stays the same, namely plastic in oceans and rivers. However, research showed that there are already capable cleanup robots operating at the water surface in rivers (which was also the area we wanted to choose, because of the smaller waves). Those robots do not use image recognition, but work well enough without it, and marine life is not damaged significantly. Therefore, we switched to a new problem: recognizing and quantifying the amount of plastic below the water surface. The general approach stays the same; the robot just has a different task and therefore some different requirements.

Problem statement

Over 5 trillion pieces of plastic are currently floating around in the oceans [1]. This so-called plastic soup partly consists of large plastics, like bags, straws, and cups, but it also contains a vast concentration of microplastics: pieces of plastic smaller than 5 mm in size [2]. There are five garbage patches across the globe [1]. In the garbage patch in the Mediterranean Sea, the most prevalent microplastics were found to be polyethylene and polypropylene [3].

A study in the North Sea showed that 5.4% of the fish had ingested plastic [4]. The plastic consumed by the fish accumulates: new plastic enters the fish but does not come out. The buildup of plastic particles causes stress in their livers [5]. Besides that, fish can become entangled in the larger plastics. Thus, the plastic soup is becoming a threat to sea life.

The locations of the five garbage patches around the globe

A lot of this plastic comes from rivers. A study published in 2017 found that about 80% of plastic trash is flowing into the sea from 10 rivers that run through heavily populated regions. The other 20% of plastic trash enters the ocean directly [6], for example trash blown from a beach or discarded from ships.

In 2019, over 200 volunteers walked along parts of the Maas and Waal [7]; they found 77,000 pieces of litter, of which 84% was plastic. This number was higher than expected. The best way to help clean up the oceans is to first stop the influx, and in order to stop the influx, it must be known how much plastic is flowing through the rivers. The fact that the amount of litter was higher than expected indicates that the plastic flow in the rivers is currently not being monitored well.

In this project, a contribution will be made to gathering information on the litter flowing through the river Maas, specifically the part in Limburg. This is done by providing a concept for an information-gathering 'shark': a machine that uses image recognition to identify the plastic. A design will be made and the image recognition will be tested. Lastly, a method will be worked out for how the shark can store the gathered information and communicate it.

Objectives

  • Do research into the state of the art of current recognition software, river cleanup devices and neural networks.
  • Create a software tool that distinguishes garbage from marine life.
  • Test this software tool and form a conclusion on the effectiveness of the tool.
  • Create a design for the SPlaSh.
  • Think of a way to save and communicate the information gathered.

Users

In this part, the different users will be discussed. By users, we mean the different groups that are involved with this problem.

Schone rivieren (Schone Maas)

Schone Rivieren is a foundation established by IVN Natuureducatie, the Plastic Soup Foundation and Stichting De Noordzee. The foundation's goal is to have all Dutch rivers plastic-free by 2030. It relies on volunteers to collectively clean up the rivers and gather information. The foundation would benefit a lot from the SPlaSh, because it would provide useful data that can be used to optimize the river cleanup.

A few of the partners will be listed below. These give an indication of the organizations this foundation is involved with.

  • University of Leiden - The Science Communication and Society department of the university does a lot of research into the interaction between science and society; this expertise is used by the foundation.
  • Rijkswaterstaat (executive agency of the Ministry of Infrastructure and Water Management) - Rijkswaterstaat will provide knowledge that can be used for the project. Therefore, Rijkswaterstaat is also a user in its own right, which will be discussed later.
  • Nationale Postcode Loterij (national lottery) - Donated 1,950,000 euros to the foundation. This indicates that the problem is seen as significant. This donation helps the foundation to grow and allows it to use resources such as the SPlaSh.
  • Tauw - Tauw is a consultancy and engineering agency that offers consultancy, measurement and monitoring services in the environmental field. It also works on the sustainable development of the living environment for industry and governments.

Lastly, the foundation also works with the provinces Brabant, Gelderland, Limburg and Utrecht.

Rijkswaterstaat

Rijkswaterstaat is the executive agency of the Ministry of Infrastructure and Water Management, as mentioned before. This means that it is the part of the government that is responsible for the rivers of the Netherlands. It is also the biggest source of data regarding rivers and all water-related topics in the Netherlands. Independent researchers can request data from its database. This makes Rijkswaterstaat a good user, since this project could add important data to that database. Rijkswaterstaat also funds projects, which can prove helpful if the concept worked out in this project is ever realized as a prototype.

RanMarine Technology (WasteShark)

RanMarine Technology is a company specialized in the design and development of industrial autonomous surface vessels (ASVs) for ports, harbours and other marine and water environments. The company is known for the WasteShark.

This device floats on the water surface of rivers, ports and marinas to collect plastics, bio-waste and other debris [8]. It currently operates at coasts, in rivers and in harbours around the world - also in the Netherlands. The idea is to collect the plastic waste before a tide takes it out into the deep ocean, where the waste is much harder to collect.

The WasteShark in action

WasteSharks can collect 200 liters of trash at a time before having to return to an on-land unloading station, where they also recharge. The WasteShark has no carbon emissions, operating on solar power and batteries; the batteries last 8-16 hours. Both an autonomous model and a remote-controlled model are available [8]. The autonomous model is even able to collaborate with other WasteSharks in the same area, so they can make decisions based on shared knowledge [9]. For example, when one WasteShark senses that it is filling up very quickly, other WasteSharks can come join it, since there is probably a lot of plastic waste in that area.

The autonomous WasteShark detects floating plastic in its path using laser imaging detection and ranging (LIDAR) technology. This means the WasteShark sends out a signal and measures the time it takes until a reflection is detected [10]. From this, the software can compute the distance of the object that caused the reflection. The WasteShark can then decide to approach the object, or to stop or back up a little in case the object is coming closer [11], presumably for self-protection. The design of the WasteShark is such that plastic waste can go in easily but can hardly come out. The only moving parts of the design are two thrusters, which propel the WasteShark forward or backward [9]. This makes the design very robust, which is important in the environment it is designed to work in.
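As a small illustration of this time-of-flight principle, a sketch in Python is shown below. The 20-nanosecond echo time is purely an illustrative number, and this is of course not RanMarine's actual software.

# Minimal sketch of the time-of-flight principle behind LIDAR ranging.
# The echo time is an illustrative assumption, not a WasteShark measurement.
SPEED_OF_LIGHT = 299_792_458  # meters per second

def distance_from_echo(time_of_flight_s: float) -> float:
    # The pulse travels to the object and back, so the one-way
    # distance is half of the total distance travelled.
    return SPEED_OF_LIGHT * time_of_flight_s / 2

print(distance_from_echo(20e-9))  # reflection after 20 ns -> about 3.0 m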

The fully autonomous version of the WasteShark can also collect water quality data, scan the seabed to chart its shape, and filter chemicals that might be present out of the water [11]. To perform autonomously, this design also has a mission planning ability. In the future, the device should even be able to construct a predictive model of where trash collects in the water [9]. The information provided by the SPlaSh could be used by RanMarine Technology in the future to guide the WasteShark to areas with a large amount of litter.

This concept does seem to tick all the boxes (autonomous, energy neutral, and scalable) set by The Dutch Cleanup. A fully autonomous model can be bought for under $23,000 [11], making it fairly affordable for governments to invest in.


Albatross

A second device that focuses on collecting datasets of microplastics in rivers and oceans is the Albatross from the company Pirika Inc. [12]. It does this by collecting water samples, which are analysed with microscopes afterwards. The microplastics are collected using a plankton net with mesh diameters of 0.1 or 0.3 mm. However, the device does not operate or navigate on its own; it is a static measurement. Such a plankton net could be added to the WasteShark to focus on microplastics instead of macroplastics.

Requirements

The following points are the requirements: conditions or tasks that must be fulfilled to ensure the completion of the project.

Requirements for the Software

  • The program that is written should be able to identify and classify different types of plastic;
  • The program should be able to identify plastic in the water correctly for at least 85 percent of the time based on … database;
  • The image recognition should work with photos;
  • The image recognition should be live;
  • The same piece of plastic should not be counted multiple times.

Requirements for the Design

  • The design should be water resistant;
  • The design should be eco-friendly, i.e. not harmful to the environment, especially the marine life;
  • The designed robot should not hinder other water vehicles;
  • The robot should operate 24/7. Thus, it should be rechargeable within a short amount of time (a few minutes);
  • The robot should be robust, so it should not be damaged easily;
  • The water should be scanned for plastic in an efficient way, e.g. a particular pattern.


Finally, literature research about the current state of the art must be provided. At least 25 sources must be used for the literature research of the software and design.

Planning

Approach

For the planning, a Gantt chart was created with the most important tasks. The overall view of the planning is that in the first two weeks, a lot of research has to be done, among other things into the problem statement, the users and the current technology. In the second week, more information about different types of neural networks and the working of the different layers should be investigated to gain more knowledge. This could also lead to installing multiple packages or programs on our laptops, which takes time to test whether they work. During this second week, a dataset should be created or found that can be used to train our model. If a dataset cannot be found online and thus has to be created, this will take much more time than one week; however, it is hoped to be finished after the third week. After this, the group is split into people who work on the design and applications of the robot, and people who work on the creation of the neural network. After week 5, an idea of the robot should be elaborated with the use of drawings or digital visualizations. Also, all possible neural networks should be elaborated and tested, so that in week 6 conclusions can be drawn about the best-working neural network. This means that in week 7, the wiki page can be concluded with a conclusion and discussion about the neural network that should be used and about the working of the device. Finally, week 8 is used to prepare for the presentation.

Currently, the activities are subdivided into the neural network / image recognition on the one hand and the design of the device on the other. Kevin and Lotte will work on the design of the device, and Menno, Marijn and Dennis will work on the neural networks.

Gantt chart

Milestones

Week Milestones
1 (April 20th till April 26th) Correct information and knowledge for first meeting
2 (April 27th till May 3rd) Further research on different types of Neural Networks and having a working example of a CNN.
3 (May 4th till May 10th) Elaborate the first ideas of the design of the device and find or create a usable database.
4 (May 11th till May 17th) First findings of correctness of different Neural Networks and tests of different types of Neural Networks.
5 (May 18th till May 24th) Conclusion of the best working neural network and final visualisation of the design.
6 (May 25th till May 31st) First set-up of wiki page with the found conclusions of Neural Networks and design with correct visualisation of the findings.
7 (June 1st till June 7th) Creation of the final wiki-page
8 (June 8th till June 14th) Presentation and visualisation of final presentation

Deliverables

  • Design of the SPlaSh
  • Software for image recognition
  • Complete wiki-page
  • Final presentation

State-of-the-Art

Quantifying plastic waste

Plastic debris in rivers has been quantified before in three ways [13]: by quantifying the sources of plastic waste, by quantifying plastic transport through modelling, and by quantifying plastic transport through observations. The last one is most in line with what will be done in this project. There is no uniform method for counting plastic debris in rivers, so several plastic monitoring studies each devised their own way to do so. The methods can be divided into 5 subcategories [13]:

1. Plastic tracking: Using GPS (Global Positioning System) to track the travel path of plastic pieces in rivers. The pieces are altered beforehand so that GPS can pick up on them. This method can show where cluttering happens, where preferred flowlines are, etc.

2. Active sampling: Collecting samples from riverbanks, beaches, or from a net hanging from a bridge or a boat. This method does not only quantify the plastic transport, it also qualifies it - since it is possible to inspect what kinds of plastics are in the samples, how degraded they are, how large, etc. This method works mainly in the top layer of the river. The area of the riverbed can be inspected by taking sediment samples, for example using a fish fyke [14].

3. Passive sampling: Collecting samples from debris accumulations around existing infrastructure. In the few cases where infrastructure to collect plastic debris is already in place, it is just as easy to use them to quantify and qualify the plastic that gets caught. This method does not require any extra investment. It is, like active sampling, more focused on the top layer of the plastic debris, since the infrastructure is, too.

4. Visual observations: Watching plastic float by from on top of a bridge and counting it. This method is very easy to execute, but it is less certain than other methods, due to observer bias, and due to small plastics in a river possibly not being visible from a bridge. This method is adequate for showing seasonal changes in plastic quantities.

5. Citizen science: Using the public as a means to quantify plastic debris. Several apps have been made to allow lots of people to participate in ongoing research for classifying plastic waste. This method gives insight into the transport of plastic on a global scale.

Visual observations, done automatically

Cameras can be used to improve visual observations. One study did such a visual observation on a beach, using drones that flew about 10 meters above it. Based on input from cameras on the UAVs, plastic debris could be identified, located and classified by a machine learning algorithm [15]. Similar systems have also been used to identify macroplastics in rivers.

Another study made a deep learning algorithm (a CNN - to be exact, a "Visual Geometry Group-16 (VGG16) model, pre-trained on the large-scale ImageNet dataset" [16]) that was able to classify different types of plastic from images. These images were taken from above the water, so this study also focused on the top layer of plastic debris.

The plastic debris in these images was automatically classified by a deep learning algorithm.

The algorithm achieved an accuracy of 99% on its test set. To further assess how well it generalizes, 165 brand-new images were also fed into the system; the algorithm recognized plastic debris in 141 of them [16], which corresponds to an accuracy of 86%. It was concluded that this shows the algorithm performs its task well.

The suggested improvements are that the accuracy could be even higher and that more kinds of plastic could be distinguished, without letting the computational time grow too long. This is something to look into in this project, too.

Plastic under water

With the Ocean Cleanup, the main focus was the plastic debris floating at the surface of the water. However, there is also garbage at lower levels. A recent study revealed that a plastic bag, like the kind given away at a supermarket, is now the deepest known piece of plastic rubbish, found at a depth of 11,000 metres inside the Mariana Trench [17]. Scientists found it by looking through the Deep-Sea Debris Database, a collection of photos and videos taken from 5,010 dives over the past 30 years that was recently made public [17].

Other studies came to the same conclusion that there is a considerable amount of plastic in deeper layers of the ocean. One study found that, over the years, pieces of debris would show up again and again in the stomachs of certain fish species that rarely come to the surface to feed. It also turns out that far more microplastics are hidden deep in the ocean than on the surface: an ROV found the highest concentrations of microplastics in water 200 to 600 meters (650 to 2,000 feet) below the surface. What's more, those levels were comparable to the concentration closer to the surface of the infamous Great Pacific Garbage Patch, where ocean currents trap microplastics [18].


Neural Networks

Neural networks are a set of algorithms that are designed to recognize patterns. They interpret sensory data through machine perception, labeling or clustering raw input. The patterns they recognize are numerical, contained in vectors. Real-world data, such as images, sound, text or time series, needs to be translated into such numerical data to process it [19].
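As a small illustration of this translation step, the sketch below converts a single image into a normalized numerical array, which is the kind of input a neural network can process. It assumes the Pillow and NumPy packages are available, and the file name is made up.

# Minimal sketch: turning an image into numerical input for a neural network.
# Assumes Pillow and NumPy are installed; "river_frame.jpg" is a made-up file name.
import numpy as np
from PIL import Image

image = Image.open("river_frame.jpg").resize((224, 224))  # fixed input size
pixels = np.asarray(image, dtype=np.float32) / 255.0      # scale values to [0, 1]
print(pixels.shape)  # (224, 224, 3): height, width, RGB channels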

There are different types of neural networks [20]:

  • Recurrent neural network: Recurrent neural networks, also known as RNNs, are a class of neural networks that allow previous outputs to be used as inputs while having hidden states. These networks are mostly used in the fields of natural language processing and speech recognition [21].
  • Convolutional neural networks: Convolutional neural networks, also known as CNNs, are used for image classification.
  • Hopfield networks: Hopfield networks are used to collect and retrieve memory like the human brain. The network can store various patterns or memories. It is able to recognize any of the learned patterns by uncovering data about that pattern [22].
  • Boltzmann machine networks: Boltzmann machines are used for search and learning problems [23].

Convolutional Neural Networks

In this project, the neural network should retrieve data from images. Therefore a convolutional neural network will be used. Convolutional neural networks are generally composed of the following layers [24]:

Layers in a convolutional neural network

The convolutional layer transforms the input data to detect patterns, edges and other characteristics, in order to be able to correctly classify the data. The main parameters with which a convolutional layer can be changed are the activation function and the kernel size.

Max pooling layers reduce the number of pixels in the output of the previously applied convolutional layer(s), and are applied to reduce overfitting. A problem with the output feature maps is that they are sensitive to the location of the features in the input; a max pooling layer addresses this sensitivity by making the resulting downsampled feature maps more robust to changes in the position of the feature in the image. The pool size determines how many pixels from the input data are turned into one pixel of the output data.

Fully connected layers connect all input values via separate connections to an output channel. Since this project deals with a binary problem, the final fully connected layer will consist of 1 output.

Stochastic gradient descent (SGD) is the most common and basic optimizer used for training a CNN [25]. It optimizes the model parameters based on the gradient of the loss function. However, many other optimizers have been developed that could give better results. Momentum keeps a history of the previous update steps and combines this information with the next gradient step to reduce the effect of outliers [26]. RMSprop also tries to keep the updates stable, but in a different way than momentum; it also takes away the need to adjust the learning rate [27]. Adam combines the ideas behind momentum and RMSprop into one optimizer [28]. Nesterov momentum is a smarter version of the momentum optimizer that looks ahead and adjusts the momentum accordingly [29]. Nadam is an optimizer that combines RMSprop and Nesterov momentum [30].
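To show how these layers and optimizers fit together, a minimal sketch of a binary classifier in Keras is given below. The layer sizes, kernel sizes, input resolution and learning rate are illustrative assumptions, not final design choices.

# Minimal Keras sketch of the layer types and optimizers described above.
# All sizes and the learning rate are illustrative assumptions.
from tensorflow.keras import layers, models, optimizers

model = models.Sequential([
    layers.Conv2D(16, kernel_size=(3, 3), activation="relu",
                  input_shape=(224, 224, 3)),   # convolutional layer
    layers.MaxPooling2D(pool_size=(2, 2)),      # max pooling layer
    layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),        # fully connected layer
    layers.Dense(1, activation="sigmoid"),      # 1 output: plastic or no plastic
])

# Any of the optimizers mentioned above could be plugged in here, for example
# optimizers.SGD(momentum=0.9), optimizers.RMSprop() or optimizers.Nadam().
model.compile(optimizer=optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()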

Image Recognition

Over the past decade or so, great steps have been made in developing deep learning methods for image recognition and classification [31]. In recent years, convolutional neural networks (CNNs) have shown significant improvements in image classification [32], and it has been demonstrated that representation depth is beneficial for the classification accuracy [33]. Another method is the use of VGG networks, which are known for their state-of-the-art performance in image feature extraction. Their setup consists of repeated patterns of 1, 2 or 3 convolution layers and a max-pooling layer, finishing with one or more dense layers. The convolutional layer transforms the input data to detect patterns, edges and other characteristics in order to correctly classify the data; its main parameters are the activation function and the kernel size [33].

There are still limitations to current image recognition technologies. First of all, most methods are supervised, which means they need large amounts of labelled training data that have to be put together by someone [31]. This can be solved by using unsupervised instead of supervised deep learning: for unsupervised learning, only a few labels are needed instead of large labelled databases. Currently, no unsupervised methods outperform supervised ones, because supervised learning can better encode the characteristics of a set of data. The hope is that in the future unsupervised learning will provide more general features, so that any task can be performed [34]. Another problem is that small distortions can sometimes cause a wrong classification of an image [31][35]; this can already be caused by shadows on an object, which create color and shape differences [36]. A different pitfall is that the output feature maps are sensitive to the specific location of the features in the input. One approach to address this sensitivity is to use a max pooling layer: max pooling reduces the number of pixels in the output of the previously applied convolutional layer(s), where the pool size determines how many input pixels are turned into one output pixel. This has the effect of making the resulting downsampled feature maps more robust to changes in the position of the feature in the image [33].
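As a concrete illustration of the pooling operation, the sketch below applies 2x2 max pooling to a hypothetical 4x4 feature map using plain NumPy (the numbers are made up).

# 2x2 max pooling applied to a hypothetical 4x4 feature map.
import numpy as np

feature_map = np.array([[1, 3, 2, 0],
                        [4, 6, 1, 2],
                        [0, 2, 5, 7],
                        [1, 1, 3, 4]], dtype=float)

pool = 2  # pool size: every 2x2 block becomes one output pixel
pooled = feature_map.reshape(2, pool, 2, pool).max(axis=(1, 3))
print(pooled)
# [[6. 2.]
#  [2. 7.]]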

Specific research has been carried out into image recognition and classification of fish in the water. For example, one study used state-of-the-art object detection to detect, localize and classify fish species using visual data obtained by underwater cameras. The initial goal was to recognize herring and mackerel, and this work was specifically developed for poorly conditioned waters. Experiments on a dataset obtained at sea showed a successful detection rate of 66.7% and a successful classification rate of 89.7% [37]. There are also studies that researched image recognition and classification of microplastics. By using computer vision to analyze the acquired images, and machine learning techniques to develop classifiers for four types of microplastics, an accuracy of 96.6% was achieved [38].

For these recognition tasks, image databases need to be found for fish and plastic. First of all, ImageNet can be used, which is a database with many pictures of different subjects. Secondly, two databases of different fishes have been found: http://groups.inf.ed.ac.uk/f4k/GROUNDTRUTH/RECOG/ and https://wiki.qut.edu.au/display/cyphy/Fish+Dataset

Further exploration

Location

Rivers are seen as a major source of debris in the oceans [39]. The tide has a big influence on the direction of the floating waste: during low tide the waste flows towards the sea, and during high tide it can flow across the river towards the river banks [40].

A big consequence of plastic waste in rivers, seas, oceans and on river banks is that many animals mistake plastic for food, often resulting in death. There are also economic consequences: more waste in the water makes water purification more difficult, especially because of microplastics, so purifying the water costs extra money. Also, cleaning up waste in river areas costs millions a year [41].

A large-scale investigation has taken place into the wash-up of plastic on the banks of rivers. On the river banks of the Maas, an average of 630 pieces of waste per 100 meters of river bank was counted, of which 81% was plastic. Some measurement locations showed a count of more than 1200 pieces of waste per 100 meters of riverbank and can be marked as hotspots. A large concentration of these hotspots can be found on the riverbanks of the Maas in the south of Limburg: a lot of waste originating from France and Belgium flows into the Dutch part of the Maas there, as evidenced by the great amount of plastic packaging with French texts. In these hotspots the proportion of plastic is even higher, namely 89% instead of 81% [40].

The SPlaSh should help tackle the problem of the plastic soup at its roots, the rivers. Because of the high plastic concentration in the Maas in the south of Limburg, the live image recognition program and robot will specifically be designed for this part of the Maas. Many different things have to be taken into account to avoid negative influences of the SPlaSh; two main ones are river animals and boats. The SPlaSh should, of course, not 'eat' animals, and it should not be damaged by boats.

Plastic

Extensive research into the amount of plastic on the river banks of the Maas has been carried out [40]. As explained before, plastic in rivers can float into the oceans or can end up on river banks. Therefore, the counted amount of plastic on the river banks of the Maas is only part of the total amount of plastic in the river, since another part flows into the ocean. The exact numbers of how much plastic flows into the oceans are not clear. However, it is certain that in the south of Limburg an average of more than 1200 pieces of waste per 100 meters of Maas riverbank was counted, of which 89% was plastic.

A top 15 was made of the types of waste that were encountered most. The type of plastic most commonly found is indefinable pieces of soft/hard plastic and plastic film smaller than 50 cm, including styrofoam. These indefinable pieces also include nurdles: small plastic granules that are used as a raw material for plastic products. Again, the south of Limburg has the highest concentration of this type of waste, because there are relatively more industrial areas there. Another big part of the counted plastics is disposable plastics, often used as food and drink packaging; in total, 25% of all encountered plastic is disposable plastic from food and drink packages.

Only plastic that has washed up on the riverbanks has been counted. Not much is known about how much plastic is in the water, below the water surface. From the state of the art, it appeared that there are clues that plastic in water is not only present at the surface, but also at lower levels. The robot and image recognition program that will be designed will help to map the amount of plastic in the deeper water of the Maas in the south of Limburg, to get a better idea of how much plastic floats through that part of the river in total.

Maritime transport

The SPlaSh needs to be able to navigate the Maas. That means it has to take other maritime vehicles into account. According to the shipping regulations ‘Gemeenschappelijke Maas’ [42], maritime transport is allowed 24 hours a day, 7 days a week. Also, every ship or assembly is allowed except for vessels longer than 100 m, wider than 12 m or with a draft of more than 2.80 m. The water has an average depth of 3 m, which is regulated by seven barrages. The SPlaSh will have to abide by all the shipping regulations ‘Gemeenschappelijke Maas’, of which a few have been discussed above. The largest proportion of the vessels on the Maas will be inland vessels. The SPlaSh will also have to avoid other maritime vehicles, so no damage is done to the robot or the vehicles.

Image Database

The CNN can be pretrained on the large-scale ImageNet dataset. Due to this pre-training, the model has already learned certain image features from a large dataset. Secondly, the neural network should be trained on a database specific to this subject. This database should be randomly divided into 3 groups. The biggest group is the training data, which the neural network uses to learn patterns and to predict the outcome of the second group, the validation data. Once this validation data has been evaluated, a new epoch is started. Once a final model has been created, a test dataset can be used to analyze its performance.
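A minimal sketch of this setup in Keras is given below, assuming the collected images are stored in a dataset/ folder with one subfolder per class. The folder name, image size, split ratios and number of epochs are illustrative assumptions, and VGG16-specific preprocessing is omitted for brevity.

# Minimal sketch: ImageNet-pretrained VGG16 base plus a train/validation/test split.
# Folder name, image size, split ratios and epoch count are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Load the labelled images and split them roughly 70% / 15% / 15%.
full = tf.keras.utils.image_dataset_from_directory(
    "dataset", label_mode="binary", image_size=(224, 224),
    batch_size=32, shuffle=True, seed=42)
n = full.cardinality().numpy()
train_ds = full.take(int(0.7 * n))
val_ds = full.skip(int(0.7 * n)).take(int(0.15 * n))
test_ds = full.skip(int(0.85 * n))

# Pretrained convolutional base; only the new classification head is trained.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # binary output: plastic or not
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)  # validation after every epoch
print(model.evaluate(test_ds))  # final performance on the held-out test set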

It is difficult to find a database perfectly corresponding to our subject. First of all, a big dataset of plastic waste in the ocean is available [43]. This could potentially be usable for the detection of plastic deeper in the river, but we would also like to detect plastic on the surface, since that is where most macroplastics float. This database contains a total of 3644 images of underwater waste, of which 1316 are images of plastic. Further, a big dataset of plastic shapes can be used, although these images are not taken underwater [44]. Using image preprocessing, it could still be possible to find corresponding plastic shapes in the pictures that the underwater camera takes. Lastly, a dataset can be created by ourselves by taking screenshots from nature documentaries.


USE Analysis

User

The SPlaSh will be used to solve major environmental problems that plastic causes in the water. The production costs of this product will probably also be quite high. The places where the SPlaSh will be used are, most of the time, not the property of ordinary citizens. This means that this product is not something that would lead to a high demand from individual citizens.


Because the SPlaSh will be made to tackle the problem at its roots, the main places where it would probably be used are rivers and canals. One of the users of this product could be Rijkswaterstaat. Rijkswaterstaat manages the major waters in the Netherlands: the Wadden Sea, the North Sea, the IJsselmeer region, the Southwest Delta and various rivers and canals [45].


Another user of the SPlaSh could be the Port of Rotterdam Authority, which is the manager, operator and developer of Rotterdam’s port and industrial area.

Society

The SPlaSh will have an important impact on society, as it should help to massively decrease the amount of plastic waste in the oceans. The plastics in the sea not only have a bad influence on marine life, but probably also on the health of people. The average person eats at least 50,000 particles of microplastic a year; in reality, this number is probably many times higher, as only a small number of foods and drinks have been analysed for plastic contamination [46]. This is partially caused by the plastic in the water: people can ingest plastic by eating sea animals, but also via drinking water. The health impacts of this plastic intake are unknown, but they could well be dangerous. Because of these health issues, the problem should be tackled globally.

Robot requirements and functionalities

Besides the image recognition program, the robot itself will need to meet certain requirements and have certain functionalities to be able to carry out the chosen task of quantifying plastic waste in the Maas in the south of Limburg. The robot will need some sort of power source and drive to be able to move forward and backward, and to move up and down in the water. It will need GPS to know its location, also to be able to find a charging station. It is also important that the robot does not hit other water vehicles, the river bottom or the bank, so it will also need some kind of sensor to know its location in relation to its environment. Furthermore, the robot will need cameras, and possibly lights, to be able to detect plastic waste. Some form of data processing, storage and communication will be needed to transfer the data to interested parties. Finally, the robot should be watertight to protect all electronics. The main parts of the robot that will be looked into are:

  • Drive
  • Shape
  • Power source
  • GPS and localization
  • Cameras and lights
  • Data processing, storage and communication
  • Watertightness

Drive

A commonly used drive for water vehicles is the underwater electrical thruster. These are relatively cheap and easy to use compared to hydraulic thrusters, which need more additional components to work. By combining multiple thrusters, maneuverability can be increased. Most electrical underwater thrusters use brushless permanent magnet synchronous motors. Most thrusters need a gearbox, which reduces the required weight and volume of the motor, although the gearbox itself adds weight. Direct drive, without a gearbox, can only be used when the ratio of motor torque to the thruster diameter is high enough that the motor can rotate the propeller directly; this requires a heavier motor, but weight is saved by omitting the gearbox. Direct drive also has the advantage of higher reliability and efficiency, but the prices are higher [47].

The robot will need several thrusters to be able to move in multiple directions. It will probably need at least two thrusters at the back and at least four at the bottom. This way the robot can move forward and turn by adjusting the speeds of the separate thrusters. The four at the bottom will make sure the robot can move upwards in a balanced way. Moving downwards will probably not be necessary, since the robot will be heavy enough to sink. The total number of thrusters needed depends on the weight of the robot and the power of each thruster.

(more useful thruster info: https://bluerobotics.com/store/thrusters/t100-t200-thrusters/t200-thruster/)

Shape

To make sure the robot moves as easily as possible through the water, it will be necessary to make a hydrodynamic (streamlined) design, so friction and drag forces are as low as possible. This also ensures that less energy is needed and that the robot can operate longer without recharging. It is also useful to have a design that is balanced, so there is no side that sinks faster than the others; otherwise an extra drive would be necessary at that part of the robot. Taking this into account, an ellipsoid is probably the best shape for the design, since it is symmetrical and has a low drag coefficient. This drag coefficient depends on the exact shape of the ellipsoid and becomes lower when the vertical radius is small relative to the horizontal radius. These dimensions will have to be determined later, since they also depend on all the components on the inside. However, the general drag coefficient of an ellipsoid will still be beneficial and will give a balanced design [48].
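To make the role of the drag coefficient concrete, the sketch below evaluates the standard drag equation F_d = 0.5 * rho * v^2 * C_d * A for a few coefficient values; the speed, frontal area and coefficients are illustrative assumptions, not design values.

# Standard drag equation F_d = 0.5 * rho * v^2 * Cd * A.
# Speed, frontal area and drag coefficients are illustrative assumptions.
import math

def drag_force(rho, v, cd, area):
    # rho: fluid density [kg/m^3], v: speed [m/s],
    # cd: drag coefficient [-], area: frontal area [m^2]
    return 0.5 * rho * v**2 * cd * area

rho_water = 1000.0                    # fresh water density [kg/m^3]
frontal_area = math.pi * 0.15 * 0.10  # elliptical cross-section with 0.15 m and 0.10 m radii
for cd in (0.1, 0.3, 0.6):            # slender vs. blunter ellipsoid shapes
    print(cd, round(drag_force(rho_water, 1.0, cd, frontal_area), 1), "N")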

Power source

Since the robot will mainly operate underwater, a direct form of solar energy, e.g. by adding solar panels on top of the robot, will not be possible. Therefore, it is probably best to use a battery that can be charged at a nearby charging station. The number of batteries depends on how much energy the drive will use.

GPS and localization

Cameras and lights

Data processing, storage and communication

Watertightness

Test Plan

Goal

Test how many pieces of plastic in the water are identified correctly.

Hypothesis

At least 85% of the plastic will be identified correctly out of 50 images of plastic in water.

Materials
  • Underwater camera
  • Plastic waste
  • Image recognition software
  • Reservoir with water
Method
  • Throw different types of plastic waste in the water
  • Take 50 different images of this with the underwater camera
  • Add the images to a folder
  • Run the image recognition software
  • Analyze how many pieces of plastic are correctly identified (see the sketch below this list)
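A minimal sketch of this evaluation step is given below. The folder name and the detect_plastic() function are illustrative placeholders for the actual image recognition software; the 85% threshold comes from the hypothesis above.

# Sketch of the evaluation step of the test plan. "test_images" and
# detect_plastic() are placeholders for the actual folder and recognition software.
from pathlib import Path

def detect_plastic(image_path):
    # Placeholder: should return True when the software identifies plastic.
    raise NotImplementedError("Replace with the actual image recognition call.")

image_paths = sorted(Path("test_images").glob("*.jpg"))  # the 50 test photos
correct = sum(1 for p in image_paths if detect_plastic(p))
accuracy = correct / len(image_paths)

print(f"Plastic identified in {correct}/{len(image_paths)} images ({accuracy:.0%}); "
      f"the hypothesis requires at least 85%.")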

Useful sources

Convolutional neural networks for visual recognition [49]

Using Near-Field Stereo Vision for Robotic Grasping in Cluttered Environments [50]

Logbook

Week 1

Name Total hours Break-down
Kevin Cox 6 Meeting (1h), Problem statement and objectives (1.5h), Who are the users (1h), Requirements (0.5h), Adjustments on wiki-page (2h)
Menno Cromwijk 9 Meeting (1h), Thinking about project-ideas (4h), Working out previous CNN work (2h), creating planning (2h).
Dennis Heesmans 8.5 Meeting (1h), Thinking about project-ideas (3h), State-of-the-art: neural networks (3h), Adjustments on wiki-page (1.5h)
Marijn Minkenberg 7 Meeting (1h), Setting up wiki page (1h), State-of-the-art: ocean-cleaning solutions (part of which was moved to Problem Statement) (4h), Reading through wiki page (1h)
Lotte Rassaerts 7 Meeting (1h), Thinking about project-ideas (2h), State of the art: image recognition (4h)

Week 2

Name Total hours Break-down
Kevin Cox 4.5 Meeting (1.5h), Checking the wiki page (1h), Research and writing maritime transport (2h)
Menno Cromwijk 10 Meeting (1.5h), Installing CNN tools (2h), searching for biodiversity (4.5h), reading and updating wiki (2h)
Dennis Heesmans 6.5 Meeting (1.5h), Installing CNN tools (2h), USE analysis (3h)
Marijn Minkenberg 9 Meeting (1.5h), Checking the wiki page (1h), Installing CNN tools (2h), Research & writing WasteShark (4.5h)
Lotte Rassaerts 5.5 Meeting (1.5h), Research & writing Location and Plastic (4h)

Week 3

Name Total hours Break-down
Kevin Cox 7.5 Meeting (4h), project statement, objectives, users and requirements rewriting (3.5h)
Menno Cromwijk 16 Meeting (4h), planning (1h), reading wiki (1h), searching for database (6h), research Albatros (2h), reading and updating wiki page (2h)
Dennis Heesmans 11 Meeting (4.5h), Research & writing Plastic under water (2h), Calling IMS Services and installing keras (3h), Requirements + Test plan (1.5h)
Marijn Minkenberg 11.5 Meeting (4.5h), Research & writing Quantifying plastic waste (4h), Calling IMS Services and installing keras (3h)
Lotte Rassaerts 11.5 Meeting (4h), Research & rewriting further exploration (3h), Research & writing robot requirements & functionalities (3h), Requirements + Test plan (1.5h)

Template

Name Total hours Break-down
Kevin Cox hrs description (Xh)
Menno Cromwijk hrs description (Xh)
Dennis Heesmans hrs description (Xh)
Marijn Minkenberg hrs description (Xh)
Lotte Rassaerts hrs description (Xh)

References

  1. 1.0 1.1 Oceans. (2020, March 18). Retrieved April 23, 2020, from https://theoceancleanup.com/oceans/
  2. Wikipedia contributors. (2020, April 13). Microplastics. Retrieved April 23, 2020, from https://en.wikipedia.org/wiki/Microplastics
  3. Suaria, G., Avio, C. G., Mineo, A., Lattin, G. L., Magaldi, M. G., Belmonte, G., … Aliani, S. (2016). The Mediterranean Plastic Soup: synthetic polymers in Mediterranean surface waters. Scientific Reports, 6(1). https://doi.org/10.1038/srep37551
  4. Foekema, E. M., De Gruijter, C., Mergia, M. T., van Franeker, J. A., Murk, A. J., & Koelmans, A. A. (2013). Plastic in North Sea Fish. Environmental Science & Technology, 47(15), 8818–8824. https://doi.org/10.1021/es400931b
  5. Rochman, C. M., Hoh, E., Kurobe, T., & Teh, S. J. (2013). Ingested plastic transfers hazardous chemicals to fish and induces hepatic stress. Scientific Reports, 3(1). https://doi.org/10.1038/srep03263
  6. Stevens, A. (2019, December 3). Tiny plastic, big problem. Retrieved May 10, 2020, from https://www.sciencenewsforstudents.org/article/tiny-plastic-big-problem
  7. Peels, J. (2019). Plasticsoep in de Maas en de Waal veel erger dan gedacht, vrijwilligers vinden 77.000 stukken afval. Retrieved May 6, from https://www.omroepbrabant.nl/nieuws/2967097/plasticsoep-in-de-maas-en-de-waal-veel-erger-dan-gedacht-vrijwilligers-vinden-77000-stukken-afval
  8. 8.0 8.1 WasteShark ASV | RanMarine Technology. (2020, February 27). Retrieved May 2, 2020, from https://www.ranmarine.io/
  9. 9.0 9.1 9.2 9.3 9.4 CORDIS. (2019, March 11). Marine Litter Prevention with Autonomous Water Drones. Retrieved May 2, 2020, from https://cordis.europa.eu/article/id/254172-aquadrones-remove-deliver-and-safely-empty-marine-litter
  10. 10.0 10.1 Wikipedia contributors. (2020, May 2). Lidar. Retrieved May 2, 2020, from https://en.wikipedia.org/wiki/Lidar
  11. 11.0 11.1 11.2 11.3 11.4 Swan, E. C. (2018, October 31). Trash-eating “shark” drone takes to Dubai marina. Retrieved May 2, 2020, from https://edition.cnn.com/2018/10/30/middleeast/wasteshark-drone-dubai-marina/index.html
  12. Albatross, floating microplastic database, from https://en.opendata.plastic.research.pirika.org/
  13. 13.0 13.1 Emmerik, T., & Schwarz, A. (2019). Plastic debris in rivers. WIREs Water, 7(1). https://doi.org/10.1002/wat2.1398
  14. Morritt, D., Stefanoudis, P. V., Pearce, D., Crimmen, O. A., & Clark, P. F. (2014). Plastic in the Thames: A river runs through it. Marine Pollution Bulletin, 78(1–2), 196–200. https://doi.org/10.1016/j.marpolbul.2013.10.035
  15. Martin, C., Parkes, S., Zhang, Q., Zhang, X., McCabe, M. F., & Duarte, C. M. (2018). Use of unmanned aerial vehicles for efficient beach litter monitoring. Marine Pollution Bulletin, 131, 662–673. https://doi.org/10.1016/j.marpolbul.2018.04.045
  16. 16.0 16.1 Kylili, K., Kyriakides, I., Artusi, A., & Hadjistassou, C. (2019). Identifying floating plastic marine debris using a deep learning approach. Environmental Science and Pollution Research, 26(17), 17091–17099. https://doi.org/10.1007/s11356-019-05148-4
  17. 17.0 17.1 Gibbens, S. (2018, May 12). Plastic Bag Found at the Bottom of World’s Deepest Ocean Trench. Retrieved May 5, 2020, from https://www.nationalgeographic.co.uk/environment-and-conservation/2018/05/plastic-bag-found-bottom-worlds-deepest-ocean-trench
  18. Zhang, S. (2019, June 7). The Deep Sea Is Full of Plastic, Too. The Atlantic. Retrieved from https://www.theatlantic.com
  19. Nicholson, C. (n.d.). A Beginner’s Guide to Neural Networks and Deep Learning. Retrieved April 22, 2020, from https://pathmind.com/wiki/neural-network
  20. Cheung, K. C. (2020, April 17). 10 Use Cases of Neural Networks in Business. Retrieved April 22, 2020, from https://algorithmxlab.com/blog/10-use-cases-neural-networks/#What_are_Artificial_Neural_Networks_Used_for
  21. Amidi, Afshine , & Amidi, S. (n.d.). CS 230 - Recurrent Neural Networks Cheatsheet. Retrieved April 22, 2020, from https://stanford.edu/%7Eshervine/teaching/cs-230/cheatsheet-recurrent-neural-networks
  22. Hopfield Network - Javatpoint. (n.d.). Retrieved April 22, 2020, from https://www.javatpoint.com/artificial-neural-network-hopfield-network
  23. Hinton, G. E. (2007). Boltzmann Machines. Retrieved from https://www.cs.toronto.edu/~hinton/csc321/readings/boltz321.pdf
  24. Amidi, A., & Amidi, S. (n.d.). CS 230 - Convolutional Neural Networks Cheatsheet. Retrieved April 22, 2020, from https://stanford.edu/%7Eshervine/teaching/cs-230/cheatsheet-convolutional-neural-networks
  25. Yamashita, Rikiya & Nishio, Mizuho & Do, Richard & Togashi, Kaori. (2018). Convolutional neural networks: an overview and application in radiology. Insights into Imaging. 9. 10.1007/s13244-018-0639-9
  26. Qian, N. (1999, January 12). On the momentum term in gradient descent learning algorithms. - PubMed - NCBI. Retrieved April 22, 2020, from https://www.ncbi.nlm.nih.gov/pubmed/12662723
  27. Hinton, G., Srivastava, N., Swersky, K., Tieleman, T., & Mohamed , A. (2016, December 15). Neural Networks for Machine Learning: Overview of ways to improve generalization [Slides]. Retrieved from http://www.cs.toronto.edu/~hinton/coursera/lecture9/lec9.pdf
  28. Kingma, D. P., & Ba, J. (2015). Adam: A Method for Stochastic Optimization. Presented at the 3rd International Conference for Learning Representations, San Diego.
  29. Nesterov, Y. (1983). A method for unconstrained convex minimization problem with the rate of convergence o(1/k^2).
  30. Dozat, T. (2016). Incorporating Nesterov Momentum into Adam. Retrieved from https://openreview.net/pdf?id=OM0jvwB8jIp57ZJjtNEZ
  31. 31.0 31.1 31.2 Seif, G. (2018, January 21). Deep Learning for Image Recognition: why it’s challenging, where we’ve been, and what’s next. Retrieved April 22, 2020, from https://towardsdatascience.com/deep-learning-for-image-classification-why-its-challenging-where-we-ve-been-and-what-s-next-93b56948fcef
  32. Lee, G., & Fujita, H. (2020). Deep Learning in Medical Image Analysis. New York, United States: Springer Publishing.
  33. 33.0 33.1 33.2 Simonyan, K., & Zisserman, A. (2015, January 1). Very deep convolutional networks for large-scale image recognition. Retrieved April 22, 2020, from https://arxiv.org/pdf/1409.1556.pdf
  34. Culurciello, E. (2018, December 24). Navigating the Unsupervised Learning Landscape - Intuition Machine. Retrieved April 22, 2020, from https://medium.com/intuitionmachine/navigating-the-unsupervised-learning-landscape-951bd5842df9
  35. Bosse, S., Becker, S., Müller, K.-R., Samek, W., & Wiegand, T. (2019). Estimation of distortion sensitivity for visual quality prediction using a convolutional neural network. Digital Signal Processing, 91, 54–65. https://doi.org/10.1016/j.dsp.2018.12.005
  36. Brooks, R. (2018, July 15). [FoR&AI] Steps Toward Super Intelligence III, Hard Things Today – Rodney Brooks. Retrieved April 22, 2020, from http://rodneybrooks.com/forai-steps-toward-super-intelligence-iii-hard-things-today/
  37. Christensen, J. H., Mogensen, L. V., Galeazzi, R., & Andersen, J. C. (2018). Detection, Localization and Classification of Fish and Fish Species in Poor Conditions using Convolutional Neural Networks. 2018 IEEE/OES Autonomous Underwater Vehicle Workshop (AUV). https://doi.org/10.1109/auv.2018.8729798
  38. Castrillon-Santana , M., Lorenzo-Navarro, J., Gomez, M., Herrera, A., & Marín-Reyes, P. A. (2018, January 1). Automatic Counting and Classification of Microplastic Particles. Retrieved April 23, 2020, from https://www.scitepress.org/Papers/2018/67250/67250.pdf
  39. Lebreton. (2018, January 1). OSPAR Background document on pre-production Plastic Pellets. Retrieved May 3, 2020, from https://www.ospar.org/documents?d=39764
  40. 40.0 40.1 40.2 Schone Rivieren. (2019). Wat spoelt er aan op rivieroevers? Resultaten van twee jaar afvalmonitoring aan de oevers van de Maas en de Waal. Retrieved from https://www.schonerivieren.org/images/Schone_Rivieren_rapportage_2019.pdf
  41. Staatsbosbeheer. (2019, September 12). Dossier afval in de natuur. Retrieved May 3, 2020, from https://www.staatsbosbeheer.nl/over-staatsbosbeheer/dossiers/afval-in-de-natuur
  42. Scheepvaartreglement Gemeenschappelijke Maas. Retrieved from https://wetten.overheid.nl/BWBR0006618/2013-01-01
  43. Buffon X. (2019, May 20) Robotic Detection of Marine Litter Using Deep Visual Detection Models. Retrieved May 9, 2020, from https://ieeexplore.ieee.org/abstract/document/8793975
  44. Thung G. (2017, Apr 10) Dataset of images of trash Torch-based CNN for garbage image classification. Retrieved May 9, 2020, from https://github.com/garythung/trashnet
  45. Rijkswaterstaat. (2019, December 17). Beheer en ontwikkeling rijkswateren. Retrieved May 2, 2020, from https://www.rijkswaterstaat.nl/water/waterbeheer/beheer-en-ontwikkeling-rijkswateren/index.aspx
  46. Carrington, D. (2019, June 5). People eat at least 50,000 plastic particles a year, study finds. Retrieved May 2, 2020, from https://www.theguardian.com/environment/2019/jun/05/people-eat-at-least-50000-plastic-particles-a-year-study-finds
  47. Wikipedia contributors. (2019, October 3). Underwater thruster. Retrieved May 6, 2020, from https://en.wikipedia.org/wiki/Underwater_thruster
  48. Yakhot, V., & Orszag , S. A. (1986). Renormalization Group Analysis of Turbulence. Journal of Scientific Computing, 1, 3–51. Retrieved from https://link.springer.com/article/10.1007/BF01061452
  49. CS231n: Convolutional Neural Networks for Visual Recognition. (n.d.). Retrieved April 22, 2020, from https://cs231n.github.io/neural-networks-1/
  50. Leeper, A. (2020, April 24). Using Near-Field Stereo Vision for Robotic Grasping in Cluttered Envir. Retrieved April 24, 2020, from https://link.springer.com/chapter/10.1007/978-3-642-28572-1_18