= '''Fire Reconnaissance Robot Simulation'''<ref name=":44">Simulation Code - https://github.com/viliamDokov/0LAUK0-FireRescueSim</ref> =
 
== '''Group members''' ==
{| class="wikitable"
|+Group members
! Name !! Student number !! Email !! Study programme !! Role
|-
|
|
|d.adaos@student.tue.nl
|Computer Science and Engineering
|Simulation (fire simulation and mapping)
|-
|Wiliam Dokov
|
|w.w.dokov@student.tue.nl
|Computer Science and Engineering
|Simulation (sensor virtualization and mapping)
|-
|Kwan Wa Lam
|
|k.w.lam@student.tue.nl
|Psychology and Technology
|Research/USE analysis and modelling
|-
|Kamiel Muller
|
|k.a.muller@student.tue.nl
|Chemical Engineering and Chemistry
|Research/USE analysis and modelling
|-
|Georgi Nihrizov
|
|g.nihrizov@student.tue.nl
|Computer Science and Engineering
|Simulation (fire data and mapping)
|-
|Twan Verhagen
|
|t.verhagen@student.tue.nl
|Computer Science and Engineering
|Research and Simulation (mapping)
|}


== '''Introduction''' ==


=== Problem statement ===
[[File:Fire building.png|thumb|Firefighters have to operate in environments with low visibility, dangerous substances, extreme temperatures, unknown building layouts and obstacles<ref>Digitalhallway. (2007c, juli 22). ''Inside house fire''. iStock. https://www.istockphoto.com/nl/foto/inside-house-fire-gm172338474-3821332</ref>]]
Firefighting is a field where robotic technology can offer valuable assistance. The environments in which firefighters have to operate can be very harsh and challenging, especially in closed spaces. They have to deal with very low visibility due to smoke and lack of light, with the presence of dangerous gases or other substances, and with extreme temperatures. Additionally, they are challenged by obstacles, unknown building layouts and even partial collapse of the building. In these kinds of scenarios, full of uncertainties and unknowns, technology can lend a hand in helping and saving people trapped in a building while also reducing the risks for the firefighters themselves. It is crucial to be able to determine the layout of the inside of these buildings and the associated temperatures.


For this purpose, our group will focus on the design of a simulation of a fire reconnaissance robot that is able to navigate inside a building, build a map of the inside of the building without any prior knowledge of the layout, and add a heat-map on top of it, so firefighters can see which pathways are possible to traverse and where the source of the fire is.


=== Objectives ===
Our objective is to design a simulation of a robot capable of mapping and heat-mapping the layout of a building, without prior knowledge, in a fire scenario with a sufficient level of realism. To realize these objectives we will focus on the following features for our simulation:

* An environment where such a robot would realistically be used
* A simulation of a fire scenario with a sufficient level of realism (what a sufficient level of realism consists of for the purposes of this project is expanded upon later)
* A robot which can be remotely controlled
* A robot capable of making and displaying a map of its environment and its position in it (SLAM)
* A robot capable of adding heat information to a map (creating a heat-map)


=== Users ===
Interested parties for deploying the robot are firefighting authorities, which are tasked with responding to fire incidents and saving lives and property; insurance companies, which can benefit from minimizing the loss of life and property; and companies that own large buildings, which can consider having the robot as part of their regular infrastructure.


=== Approach ===
[[File:TUe.png|thumb|An in-person interview was conducted with the TU/e fire department (see appendix)<ref>Schalij, N. (2022b, september 20). ''Fire department will not leave TU/e campus any time soon''. https://www.cursor.tue.nl/en/nieuws/2022/september/week-3/fire-department-will-not-leave-tu-e-campus-any-time-soon/</ref>]]
The project started off with an in-depth literature study on the subject matter and closely related topics. While this was happening, we made contact with the fire department at the TU/e, so that we could do an accurate user analysis based on both the literature and our actual target users. This was followed by research into the current state of the art in firefighting robots, so we could see what had been done before and which gaps could still be filled in this particular field. After all this information was collected, we were ready to complete the user analysis and make a plan for what exactly we wanted to achieve during this project.

After the interview with a firefighter from the fire station on the TU/e campus, a MoSCoW list (see user analysis) was constructed. This list was made taking both the user preferences and the time constraints of this project into account.

Once it was decided that the focus of the project would be a simulation of a (heat-)mapping robot, the design part of the project could begin. We first had to research the simulators we could use (e.g. NetLogo, Webots, Gazebo, ROS, PyroSim) by evaluating their advantages and disadvantages, as well as what we needed the simulation to do. We also did research on which sensors could be used in the type of scenario the robot is intended for.

After we settled on a simulator, the focus shifted to implementing a SLAM algorithm, building realistic simulation environments, simulating a realistic fire and implementing a heat map. Our target was to propose a robot design focused on its software, a simulation environment that can easily be reused in further research on similar topics, and some additional hardware suggestions (the sensors required for our robot).

Further details on the course of the project and the week-to-week developments can be found in the appendix under the logbook and project summary.


=== Planning ===
[[File:G10Planning.png|none|thumb|859x859px|Gantt chart for the project planning.]]


== '''Research papers''' ==
To start off the project, we did in-depth literature research on firefighting robots and other related subjects. The table below lists the relevant research papers, each accompanied by a quick summary to give a general idea of the contents of the paper and by labels to make it easier to find papers on specific subjects later on in the project. It should be noted that this does not include all sources used; the full list of references can be found at the bottom of the page.
{| class="wikitable"
|+Research papers
!№
!Title
!Labels
!Summary
|-
|1
|A Victims Detection Approach for Burning Building Sites Using Convolutional Neural Networks<ref name=":0">Jaradat, F. B., & Valles, D. (2020). A Victims Detection Approach for Burning Building Sites Using Convolutional Neural Networks. ''2020 10th Annual Computing And Communication Workshop And Conference (CCWC)''. https://doi.org/10.1109/ccwc47524.2020.9031275</ref>
|Victim detection, Convolutional neural network.
|They trained a convolutional neural network to detect people and pets in thermal IR images. They gathered their own dataset to train the network. The network results were pretty accurate (One-Step CNN 96.3%, Two-Step CNN 94.6%).<ref name=":0" />
|-
|31
|Real time simulation of fire extinguishing scenarios<ref name=":30">Maschek, M. (2010). ''Real Time Simulation of Fire Extinguishing Scenarios'' [Technical university wien]. https://www.cg.tuwien.ac.at/research/publications/2010/maschek-2010-rts/maschek-2010-rts-Paper.pdf</ref>
|Virtual model
|Describes software and methodology for simulating fire response scenarios. Demonstrates an implementation of FDS with Unreal Engine to generate a fire scenario simulation.<ref name=":30" />
|}


== '''State of the Art Robots''' ==
Because firefighting is still a dangerous job to this day, a lot of effort is put into developing ways to minimize the risk for firefighters. In this age of technology, this means that many different firefighting robots are being developed to help firefighters with this delicate task. To see what developments have already been made, we give an overview of the current landscape of state-of-the-art firefighting robots, based on a couple of examples of robots that are in use today. This paints a clear picture of which kinds of technologies exist right now and what is already being done. This way we can look for new and innovative angles on firefighting robots and avoid designing a robot that is already being employed in the field.
 
=== LUF60 ===
 
====== Main Characteristics: ======
[[File:LUF60.png|thumb|The LUF 60 fire fighting robot<ref>''LUF 60 - Wireless remote control fire fighting machine — Steemit''. (z.d.). Steemit. https://steemit.com/steemhunt/@memesdaily/luf-60-wireless-remote-control-fire-fighting-machine</ref>]]
 
Diesel powered, Fire extinguishing, Remote controlled
 
====== Description: ======
This is a remote controlled firefighting robot designed to extinguish fires. It uses an air blower that blows a beam of water droplets (to a distance of approximately 60 meters). This water jet can deliver up to 2,400 liters of water per minute. If needed, it can also blow a beam of foam (to a distance of approximately 35 meters). The robot is designed to operate in difficult conditions and can be sent, by remote control, directly to the fire source. To accomplish this it is able to remove obstacles and climb stairs (up to a 30 degree angle)<ref name=":31" /><ref name=":32">''LUF 60 – LUF GmbH''. (z.d.). https://www.luf60.at/en/extinguishing-support/fire-fighting-robot-luf-60/</ref>. It should be noted that LUF produces more robots similar to this one; see<ref>''Extinguishing Support – LUF GmbH''. (z.d.). https://www.luf60.at/en/extinguishing-support/</ref>.
 
====== Advantages: ======
 
* Minimizes risk for firefighters (remote controlled) and is designed to also clear obstructions from a distance.
* Good extinguishing capabilities (up to 2,400 liters per minute).
 
====== Disadvantages: ======
 
* Its size exceeds that of a standard door, restricting movement. Its dimensions are 2.33 m x 1.35 m x 2.00-2.50 m (length x width x height), whereas a standard door is 0.80-0.90 m x 2.00-2.10 m (width x height).
* Its high weight of 2,200 kg results in low mobility. The maximum speed is 4.5 km/h, which is around average walking speed<ref name=":32" />.
 
=== EHang EH216-F ===
[[File:EHangEH216F.jpg|thumb|EHang EH216-F firefighting drone <ref>''EHang Announced Completion of EH216F’s Technical Examination by NFFE'' (z.d.). EHang. https://www.ehang.com/news/798.html</ref>]]
 
======  Main Characteristics: ======
Battery powered, Fire extinguishing, Drone, Remote or directly controlled 
 
====== Description: ======
This is a firefighting robot that can be either remotely controlled or manually piloted. Its main focus is extinguishing fires in high-rise buildings that cannot be extinguished from ground level due to the limited range of fire hoses<ref name=":3" />. The robot has a maximum flight altitude of 600 m, is able to carry a single person and has a maximum cruising speed of 130 km/h. The drone carries 100 l of firefighting liquid, with the spray lasting for 3.5 minutes, as well as 6 fire extinguishing projectiles containing fire extinguishing powder and a window breaker. The drone is 7.33 m in length, 5.61 m in width and has a height of 2.2 m<ref name=":33">''EHang EH216-F (production model)''. (n.d.). https://evtol.news/ehang-eh216-f</ref>.
 
====== Advantages: ======
* Is able to reach and extinguish fires that grounded firefighting equipment cannot reach.
* Can extinguish fires from outside of the buildings, thus minimizing the risk for firefighters.
* Has very high mobility, meaning it can quickly respond to fires.
 
====== Disadvantages: ======
* Can only carry a set amount of fire extinguishing equipment, thus limiting its capabilities.
* Its flight time is approximately 21 minutes, thus limiting deployment range and deployment time<ref name=":33" />.
 
=== '''THERMITE RS3''' ===
[[File:THERMITE RS3.png|thumb|The THERMITE RS3 firefighting robot<ref>''THERMITE®'' (z.d.). https://www.howeandhowe.com/civil/thermite</ref>]]
 
====== Main Characteristics: ======
Diesel powered, Fire extinguishing, Remote controlled
 
====== Description: ======
The THERMITE RS3 is a firefighting robot designed to extinguish fires from the outside of a building. It is a remote controlled, diesel powered robot equipped with a water cannon. It is capable of shooting a beam of water 100 meters horizontally and 50 meters vertically, with the capability of using foam if needed. It has also (relatively recently, in 2020) been adopted by the Los Angeles Fire Department<ref>''LAFD Debuts the RS3: First Robotic Firefighting Vehicle in the United States | Los Angeles Fire Department''. (z.d.). https://www.lafd.org/news/lafd-debuts-rs3-first-robotic-firefighting-vehicle-united-states</ref>. It can also be noted that Howe & Howe produces more robots with similar functionality<ref name=":34">''THERMITE®'' (z.d.-b). https://www.howeandhowe.com/civil/thermite</ref>.
 
====== Advantages: ======
* Good extinguishing capabilities (up to 9,464 liters per minute).
* Good mobility for its size (up to 13 km/h and the ability to climb a 35% slope while weighing 1,588 kg).
 
====== Disadvantages: ======
 
* The robot was not designed to go inside buildings, so it has limited adaptability. Its dimensions are 2.14 m x 1.66 m x 1.63 m (length x width x height), whereas a standard door is 0.80-0.90 m x 2.00-2.10 m (width x height)<ref name=":34" />.
=== '''COLOSSUS''' ===
[[File:COLOSSUS.png|thumb|The COLOSSUS fire fighting robot<ref name=":36">''DPG Media Privacy Gate''. (z.d.). https://www.ad.nl/binnenland/robot-colossus-bleef-de-notre-dame-koelen-van-binnenuit~aee4510a2/</ref>]]
 
====== Main Characteristics: ======
Battery powered, Fire extinguishing, Reconnaissance, Remote controlled
 
====== Description: ======
The COLOSSUS is a firefighting robot developed to fight fires in indoor and outdoor situations. It is a remotely controlled robot with some AI integration that provides driving assistance. The main purpose of this robot is to extinguish fires in places too dangerous for firefighters to go. The robot is battery powered (the batteries last up to 12 hours) and has 9 interchangeable modules that can be used for different situations. Because of these different modules it is very adaptable and can be used for anything from reconnaissance to victim extraction. A very notable instance of the robot being used was its deployment alongside the Paris firefighters during the Notre-Dame fire in 2019, where it was used to cool down the inside of the cathedral in places firefighters couldn't go because of falling debris<ref name=":36" />. The robot is now being used in 15 different countries. Shark Robotics produces more similar robots<ref name=":35">''Colossus advanced firefighting robot | Shark Robotics''. (z.d.-b). https://www.shark-robotics.com/robots/Colossus-firefighting-robot</ref><ref>''Shark Robotics - Leader in safety robotics''. (z.d.). https://www.shark-robotics.com/</ref>.
 
====== Advantages: ======
 
* Relatively long operation time despite being battery powered (up to 12 hours in operational situations).
* Capable of withstanding high temperatures (also waterproof and dust-proof).
* Can be deployed outdoors and indoors (minimizing risk for firefighters). The robot's dimensions are 1.60 m x 0.78 m x 0.76 m (length x width x height). It can also climb slopes of up to 40 degrees.
* Different mountable modules for different situations (for instance a 180 degree video turret or a stretcher for transporting wounded people).
* Good extinguishing capabilities (3,000 liters per minute).
 
====== Disadvantages: ======
 
* The robot has an extremely low top speed of 3.5 km/h (which is slower than average walking speed)<ref name=":35" />.
 
=== '''X20 Quadruped Robot Dog''' ===
[[File:X30.png|thumb|The X20 Quadruped Robot Dog<ref name=":37">''Uncover Myriad Uses of Robotics across varied industries- DEEP Robotics''. (z.d.). https://www.deeprobotics.cn/en/index/industry.html</ref>]]
 
====== Main Characteristics: ======
Battery powered, Reconnaissance, Autonomous and Remote controlled, Quadruped
 
====== Description: ======
The X20 Quadruped Robot Dog is a fully autonomous robot (with remote control capabilities) made to traverse difficult terrain, such as ruins, piles of rubble and other complex terrain, using its four legs. Furthermore, the robot was developed to detect hazards using its various sensors and cameras. It comes equipped with a bi-spectrum PTZ camera, a dynamic infrared camera, a gas sensor, a sound pickup and LiDAR. Using a SLAM algorithm it takes measurements of its environment and builds a 3D map of it. It also comes equipped with a lightweight robotic arm. The main job of this robot in a disaster area is reconnaissance, the results of which are directly sent back to the digital system. It can pick up sound from victims that it finds and set up calls with them, as well as calculate the best pathways to get to safety<ref name=":38">''X20 Hazard Detection & Rescue Solution''. (z.d.). Deeprobotics. https://deep-website.oss-cn-hangzhou.aliyuncs.com/file/X20%20Hazard%20Detection%20%26%20Rescue%20Solution.pdf</ref>. Deep Robotics has produced more similar robots and even a newer model (the X30); however, the X20 was specifically marketed towards rescue operations, which is why we chose to look further into this older model (2021)<ref name=":37" /><ref>''DEEP Robotics - Global Quadruped Robot Leader''. (z.d.-b). https://www.deeprobotics.cn/en/index.html</ref>.
 
====== Advantages: ======
 
* The robot is relatively light (53 kg) and can reach speeds of up to 15 km/h*.
* The robot can traverse difficult terrain (it is able to traverse 20 cm high obstacles and climb stairs and 30 degree slopes) and is capable of operating indoors as well as outdoors. The robot's dimensions are 0.95 m x 0.47 m x 0.70 m (length x width x height).
* The robot is fully autonomous and can map the area as well as detect temperatures, toxic gases and victims, making it a good reconnaissance robot and making the job safer for firefighters.
 
====== Disadvantages: ======
 
* The robot is battery powered and only lasts 2-4 hours.
* Even though it can work under extreme conditions such as downpour, dust storms, frigid temperatures and hail, it is not specified whether it can work under extreme heat<ref name=":39">''X20: The Ultimate Quadruped Bot series for Industrial Use - DEEP Robotics''. (z.d.). https://www.deeprobotics.cn/en/index/product.html</ref>.
 
<nowiki>*</nowiki>It should be noted that some specifications of the robot vary significantly across the manufacturer's own website<ref name=":39" /><ref name=":38" />.
 
=== '''SkyRanger R70''' ===
[[File:SkyRanger R70.png|thumb|The SkyRanger R70<ref>''SkyRanger® R70 | Teledyne FLIR''. (z.d.). https://www.flir.eu/products/skyranger-r70/?vertical=uas&segment=uis</ref>]]
 
====== Main Characteristics: ======
Battery powered, Drone, Reconnaissance, Remote controlled, Semi-autonomous
 
====== Description: ======
The SkyRanger R70 was developed for a wide range of missions, including but not restricted to fire scenes and search and rescue operations. The main goal of the drone is to give an unobstructed, wider view of what is happening in such a situation. It comes equipped with a detailed thermal camera, making it ideal for analyzing fires from a safe distance. The drone also carries a computer, making it capable of using AI for object detection and classification. It is not specified, but it is implied, that the drone is only meant for outdoor use, meaning that no scouting inside buildings can be done with this robot<ref name=":40">''Support for SkyRanger R70 | Teledyne FLIR''. (z.d.-b). https://www.flir.com/support/products/skyranger-r70/?vertical=uas&segment=uis#Documents</ref>. Teledyne FLIR makes more similar drones, thermal cameras and other products<ref>''Thermal Imaging, Night Vision and Infrared Camera Systems | Teledyne FLIR''. (z.d.). https://www.flir.eu/</ref>.
 
====== Advantages: ======
 
* Being a drone this robot is very mobile, with top speeds of 50 km/h.
* Can give a very detailed top down view, including thermal vision.
 
====== Disadvantages: ======
 
* The drone is only meant for reconnaissance; it can only carry and deliver payloads of up to 2 kg.
* Its battery only lasts up to 50 minutes.
* Can only operate in temperatures up to 50 degrees Celsius, meaning that it has to keep a safe distance from the fire<ref name=":40" />.
 
== '''User analysis''' ==
The end goal for this project is to deliver a simulation of a remote controlled mapping robot that is capable of entering buildings that are on fire and providing mapping information regarding the layout of the building and where obstacles (including fire) and possibly victims are located.


* The ability to perform complex communication and interaction beyond just data sharing.


== '''Reasoning behind chosen objectives''' ==
For this project we wanted to deliver a proof of concept for (parts of) a mapping robot to be used by firefighters in buildings. We decided to do this by means of a simulation, for several reasons:

* Our group's composition: as our group consists of four computer scientists, making a programmed simulation costs less time learning new skills, so we can focus more time on the proof of concept itself.
* A simulation makes it easier to test and tweak the robot's software. As we don't have the resources to test in an actual burning building, a controlled fire in a lab would be the best possible way to test a live robot. This still takes time to set up and is hard to do repeatedly. Testing a simulation model is easier, and so gives us more opportunities to test and tweak the robot.
* Making a simulation instead of a real model gives us the benefit of being able to ignore the hardware side of the robot. A real model must be built from hardware, and focus would need to be diverted to getting the right hardware in place. With a simulation we can focus on the parts of the robot for which we want to deliver a proof of concept.


 
== '''Simulator Selection''' ==


=== Main choices for simulation environments: ===

==== Core simulation environment: ====

* Multi-agent frameworks: NetLogo, RePast
* ROS
* Game Engine - Unreal Engine/Unity
* FDS combined with an agent model
* FDS combined with ROS/Unreal/Unity

==== Fire simulation: ====

* Fire Dynamics Simulator (FDS)
* Custom fire simulation

===== Discarded: =====


=== NetLogo ===
[[File:NetLogo.png|thumb|NetLogo<ref>''NetLogo Models Library: GenEVo 2 Genetic Drift''. (z.d.). https://ccl.northwestern.edu/netlogo/models/GenEvo2GeneticDrift</ref>]]

====== Advantages: ======

====== Disadvantages: ======

* No integrated fire/smoke simulation
* Cannot integrate actual sensor behavior
=== RePast ===
[[File:RePast.png|thumb|RePast<ref>''Repast suite documentation''. (z.d.). https://repast.github.io/quick_start.html</ref>]]RePast (Recursive Porous Agent Simulation Toolkit) is an open-source agent-based modeling and simulation (ABMS) toolkit for Java. It is designed to support the construction of agent-based models.


====== Advantages: ======


=== ROS/RViz ===
[[File:ROS-RViz.png|thumb|ROS with RViz<ref>''Fig. 7: RVIZ node panel for human-robot visual interface on ROS ecosystem''. (z.d.). ResearchGate. https://www.researchgate.net/figure/RVIZ-node-panel-for-human-robot-visual-interface-on-ROS-ecosystem_fig7_305730015</ref>]]
RViz is a 3D visualization tool for ROS (Robot Operating System), which is commonly used in robotics research and development. ROS is an open-source framework for building robot software, providing various libraries and tools for tasks such as hardware abstraction, communication between processes, pathfinding, mapping and more.


====== Disadvantages: ======


* Fire/smoke simulation not supported - outside implementation needed
* Requires Linux/difficult installation


=== FDS + (Custom)agent simulator ===
[[File:FDS.png|thumb|FDS<ref>''What is FDS? - FDS Tutorial''. (2021, 25 februari). FDS Tutorial. https://fdstutorial.com/what-is-fds/</ref>]]
Fire Dynamics Simulator (FDS) is a computational fluid dynamics (CFD) model of fire-driven fluid flow. FDS is a program that reads input parameters from a text file, computes a numerical solution to the governing equations, and writes user-specified output data to files.


PyroSim is a GUI which makes using FDS easy, but it is paid software unless a special offer is made for academic purposes.


=== Unreal Engine/Unity + FDS ===
[[File:Unity.png|thumb|Unity<ref>Technologies, U. (z.d.). ''Unity - Manual: The Scene view''. https://docs.unity3d.com/Manual/UsingTheSceneView.html</ref>]]
Data from FDS can be extracted and loaded into Unreal/Unity, which can then handle the agent simulation.


====== Advantages: ======


* Abundance of resources for Unreal Engine/Unity
* High flexibility
* Ease of use of Unreal Engine/Unity combined with accurate simulation from FDS
* Successful implementation in literature [31]


=== ROS + FDS ===

====== Advantages: ======


* Both well established professional software for their respective uses
* Very high control and customizability


====== Disadvantages: ======

* Did not find resources on successful implementation, but no reason why it shouldn't be possible.
* Both complicated and unfamiliar software (might be too much work)
* Computationally expensive


== '''Chosen Simulator explanation''' ==
The final choice was Unity as the core environment, combined with FDS for the fire simulation.

For the core simulation environment it was decided that multi-agent frameworks such as NetLogo and RePast are too simple and lack the physical simulation capabilities of Unreal, Unity or ROS. Of the latter choices, Unreal Engine and Unity hold the same value of being highly customizable, user-friendly and very well documented, while ROS has proper sensor and odometry implementations. The final choice was Unity due to its ease of use and customizability, as well as the previous experience of two team members with the software.

For the fire simulation FDS was chosen, as it is established software that provides highly accurate results. The downside of computational complexity was reduced by carefully selecting input and output parameters. A custom fire simulation would be just as complicated to use as FDS, or more so, while providing inferior results.
== '''Simulated robot specification''' ==
In order to achieve the objective of creating a robot that builds a 2D map of its environment, we decided on the following robot design:

# A rectangular chassis containing all of the microcontrollers, the power supply and the other electronics needed for the functioning of the robot. The specific electronics contained in the chassis were ignored in the simulation. The chassis was simulated as a rectangular cuboid with a Rigidbody<ref name=":41">Unity - Rigidbody https://docs.unity3d.com/ScriptReference/Rigidbody.html</ref> component, which gives mass to the object and allows it to be simulated in the Unity physics engine.
# A differential drive wheel system. The robot has two motors, which are attached to the front wheels, and a free moving wheel at the back.
# An air temperature sensor on the chassis of the robot.
# A 2mm wave radar sensor on top of the chassis. This sensor serves the same functionality as a LiDAR sensor, with the added advantage that it is not affected by the extreme temperatures that can occur in a fire scenario.
# A camera that is used to represent the point of view of the robot operator.

These are the main components of the robot; further details on how each of them was implemented in the simulation can be found in the subsections below.
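As a purely illustrative sketch of how such a setup can be assembled in Unity, the snippet below builds a chassis cuboid with a Rigidbody and attaches empty mount points for the sensors. All names, dimensions and masses here are assumptions for demonstration, not values taken from the actual project code.

<syntaxhighlight lang="csharp">
using UnityEngine;

// Illustrative sketch: assembling the simulated robot at runtime.
// Names, dimensions and masses are assumptions for demonstration only.
public class RobotAssembly : MonoBehaviour
{
    void Start()
    {
        // Chassis: a rectangular cuboid with a Rigidbody so that the Unity
        // physics engine can simulate its mass and collisions.
        GameObject chassis = GameObject.CreatePrimitive(PrimitiveType.Cube);
        chassis.name = "Chassis";
        chassis.transform.localScale = new Vector3(0.5f, 0.2f, 0.7f); // assumed size in meters
        Rigidbody body = chassis.AddComponent<Rigidbody>();
        body.mass = 20f; // heavy enough to keep the robot from tipping over (assumed)

        // Empty mount points for the sensors; the sensor behaviour itself
        // lives in separate scripts (heat sensor, radar, operator camera).
        foreach (string sensorName in new[] { "HeatSensor", "RadarSensor", "OperatorCamera" })
        {
            GameObject mount = new GameObject(sensorName);
            mount.transform.SetParent(chassis.transform, false);
            mount.transform.localPosition = Vector3.up * 0.15f; // on top of the chassis
        }
    }
}
</syntaxhighlight>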


=== FDS simulation ===
To generate fire data, we created the models described in the section Simulated environments in FDS. To avoid the major issue of computational complexity, we applied the following simplifications:

# We limited the acceleration of the fire to 10 seconds, i.e. we applied a fire accelerant to the fire for only the first 10 seconds of the simulation.
# We only produced fire data for a slice along the x-y dimensions, i.e. the output data was a plane parallel to the plane formed by the x and y axes, located 1 m above the origin along the z axis.

Following this procedure, we generated 5 minutes of data from one fire for each environment, with only the house having a second fire simulation. This gives a total of 5 minutes of simulated fire for each of the school and office environments, and 10 minutes for the house environment.


=== FDS data transfer ===
The files generated by FDS are read using the ReadFds.ipynb notebook, which can be found in the Resources folder of the source code<ref name=":44" />. The Python library fdsreader is used to read and transform the data, which is printed as a CSV file in the following format:

First line: number of timestamps, number of sample points along the X axis, number of sample points along the Z axis

Second line: X dimension, Z dimension

Third line: timestamps of the snapshots

Lines 4 and after: snapshots of the temperature data along the slice of space, in the following format - the value at position i * X + j corresponds to the data point at position (i, j) in the snapshot.

The data is then read by Unity in the ReadHeatData.cs<ref name=":44" /> script according to this format and stored in a 3-dimensional array with dimensions time, x and z.
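To make the format concrete, below is a minimal C# sketch of a parser that reads such a file into the time-x-z array. It is an illustration of the described format, not the actual ReadHeatData.cs implementation; the class and method names are assumptions.

<syntaxhighlight lang="csharp">
using System.Globalization;
using System.IO;

// Illustrative parser for the FDS heat-data CSV format described above.
// A sketch only; the actual ReadHeatData.cs implementation may differ.
public static class HeatDataParser
{
    public static float[,,] Read(string path)
    {
        string[] lines = File.ReadAllLines(path);

        // First line: number of timestamps, sample points along X, sample points along Z.
        string[] header = lines[0].Split(',');
        int nTime = int.Parse(header[0]);
        int nX = int.Parse(header[1]);
        int nZ = int.Parse(header[2]);

        // Lines 4 and after: one snapshot per line, where the value at
        // position i * nX + j is the data point at position (i, j).
        float[,,] data = new float[nTime, nX, nZ];
        for (int t = 0; t < nTime; t++)
        {
            string[] values = lines[3 + t].Split(',');
            for (int i = 0; i < nZ; i++)
                for (int j = 0; j < nX; j++)
                    data[t, j, i] = float.Parse(values[i * nX + j], CultureInfo.InvariantCulture);
        }
        return data;
    }
}
</syntaxhighlight>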
=== Heat sensor ===
The heat sensor simulation was designed to mimic a temperature probe at a height of 1 meter. The data generated by FDS is put into a 3-dimensional array with dimensions time, x and z. The world position of the heat sensor object on the robot is taken and converted to indices using a linear transformation, which is derived from the difference in scale between the Unity and FDS environments. The time since the start of the simulation, together with the translated coordinates, is used to look up a heat measurement.

The data is read perfectly, with no noise being introduced, as from the interview it was concluded that there is no need for a high amount of accuracy or precision. The important aspect is to determine the general temperature distribution and to identify dangerous areas. For that we did not need to simulate the inaccuracies of a temperature probe.
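A sketch of this lookup is shown below, with assumed names and values for the grid origin, cell size and snapshot interval:

<syntaxhighlight lang="csharp">
using UnityEngine;

// Illustrative heat-sensor lookup: converts the sensor's world position and
// the elapsed simulation time into indices of the precomputed FDS array.
// Field names and parameter values are assumptions, not the project's code.
public class HeatSensor : MonoBehaviour
{
    public float[,,] heatData;          // [time, x, z], filled from the FDS CSV
    public Vector2 gridOrigin;          // world position of the FDS grid origin (assumed)
    public float cellSize = 0.1f;       // meters per FDS grid cell (assumed)
    public float snapshotInterval = 1f; // seconds between FDS snapshots (assumed)

    public float ReadTemperature()
    {
        // Linear transformation from Unity world coordinates to FDS indices.
        int x = Mathf.RoundToInt((transform.position.x - gridOrigin.x) / cellSize);
        int z = Mathf.RoundToInt((transform.position.z - gridOrigin.y) / cellSize);
        int t = Mathf.FloorToInt(Time.timeSinceLevelLoad / snapshotInterval);

        // Clamp to the bounds of the precomputed data.
        t = Mathf.Clamp(t, 0, heatData.GetLength(0) - 1);
        x = Mathf.Clamp(x, 0, heatData.GetLength(1) - 1);
        z = Mathf.Clamp(z, 0, heatData.GetLength(2) - 1);

        // No sensor noise is added: a general temperature distribution is
        // sufficient for the firefighters, per the user interview.
        return heatData[t, x, z];
    }
}
</syntaxhighlight>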


=== Wheel system ===
For this robot we selected a differential drive system, with two front wheels connected to motors and a free moving caster wheel at the back of the chassis. The front wheels are represented in Unity as solid cylindrical objects with a Rigidbody<ref name=":41" /> component. The connection of the wheels to the robot chassis is handled by adding a HingeJoint<ref name=":42">Unity - Hinge Joint https://docs.unity3d.com/Manual/class-HingeJoint.html</ref> component. With the hinge joint we can restrict the movement of the wheels so that they move as if they were connected to the chassis of the robot. To still allow the wheels to rotate freely, we allowed rotation around the Y-axis of the hinge joint. Additionally, by default the hinge joints in Unity have a maximum angle of rotation. This is not desirable for our use case, since we want our wheels to rotate without restriction. We circumvented this by setting the "Min" and "Max" limit parameters of the HingeJoint to 0, which allows for continuous movement.

The back "free-spinning" wheel was designed to simulate a "ball wheel"<ref>Pololu - Ball Wheel https://www.pololu.com/product/950</ref>. It was modeled using a sphere and a Rigidbody<ref name=":41" /> component. This wheel, unlike the front wheels, needs to rotate freely around every axis in order to allow the robot to turn freely. This extra requirement necessitated the use of the more complex ConfigurableJoint<ref>Unity - ConfigurableJoint https://docs.unity3d.com/Manual/class-ConfigurableJoint.html</ref>. With this joint we connected the wheel to the main chassis and locked the movement of the wheel along every axis, while allowing free rotation.

We also need to move our robot. To do this, we simulated the front wheel movement using the "Motor" function of the HingeJoint<ref name=":42" /> component. The HingeJoint motor has two settings: torque and velocity. We set the torque of the motors to a value high enough that the motors can move the robot, and we connected the velocity setting to the user's keyboard input. With this, the movement of the robot can be controlled using the WASD keys. The code that handles the movement of the robot can be found in the MoveWheels2.cs<ref name=":44" /> script inside our Unity project.

An interesting issue that occurred during the development of the simulation was that the torque setting of the wheel motors was initially too high, which caused the robot to flip upside down, since the chassis was too light. This issue was exacerbated by the different friction values of the floors in the different environments. In the end, we resolved this by increasing the weight of the chassis.
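A condensed sketch of this control scheme is shown below. It mirrors the HingeJoint motor approach described above, but it is not the actual MoveWheels2.cs script; the speed and torque values are assumptions.

<syntaxhighlight lang="csharp">
using UnityEngine;

// Illustrative differential-drive control through HingeJoint motors.
// Not the actual MoveWheels2.cs script; values are assumptions.
public class WheelController : MonoBehaviour
{
    public HingeJoint leftWheel;
    public HingeJoint rightWheel;
    public float maxSpeed = 200f; // target angular velocity in deg/s (assumed)
    public float torque = 50f;    // high enough to actually move the chassis (assumed)

    void Update()
    {
        // W/S drive both wheels; A/D steer by driving them at different speeds.
        float forward = Input.GetAxis("Vertical");
        float turn = Input.GetAxis("Horizontal");

        Drive(leftWheel, (forward + turn) * maxSpeed);
        Drive(rightWheel, (forward - turn) * maxSpeed);
    }

    void Drive(HingeJoint wheel, float targetVelocity)
    {
        // JointMotor is a struct, so it has to be written back to take effect.
        JointMotor motor = wheel.motor;
        motor.force = torque;
        motor.targetVelocity = targetVelocity;
        wheel.motor = motor;
        wheel.useMotor = true;
    }
}
</syntaxhighlight>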


=== 2mm Wave Radar Sensor ===
[[File:Screenshot 2024-04-10 224222.png|alt=View from main robot camera|thumb|View from main robot camera]]
Although we officially call this sensor the 2mm wave radar sensor, the unofficial term we used while developing the simulation was simply "the LiDAR sensor", because for the purposes of the simulation the behaviour of the radar and a LiDAR is the same. We want our LiDAR/radar sensor to generate a 2D array of scan points of the surrounding obstacles at a given frequency. We also want to simulate the physical nature of a LiDAR, which spins at a certain velocity and takes measurements at a certain frequency. Finally, we want to take into account that LiDAR sensor data is not perfect in the real world, so we include these imperfect measurements in our simulation as well. All of this functionality is implemented in the LidarScan.cs<ref name=":44" /> script inside our Unity project. This script sends a predefined number of rays<ref>Unity - Raycast https://docs.unity3d.com/ScriptReference/Physics.Raycast.html</ref> in a circle around the radar. The rays travel a predetermined distance, and if they hit an object that is closer than the given distance, they report it back to the LiDAR. In the end we generate an array of measurements, where each entry contains the angle at which the measurement was taken and the distance of the object that was detected. If no object was detected, then there is no entry in the measurement array. Finally, we randomize each entry using random values sampled from a uniform distribution. We can freely tweak the maximum scan distance of the LiDAR, the number of measurements taken per scan and the strength of the random noise that is added.
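The core of such a scan can be sketched as follows. This is a simplified version of what LidarScan.cs does, with assumed parameter values:

<syntaxhighlight lang="csharp">
using System.Collections.Generic;
using UnityEngine;

// Illustrative LiDAR/radar scan: casts rays in a circle around the sensor
// and records (angle, distance) pairs with uniform random noise.
// A simplified sketch of LidarScan.cs; parameter values are assumptions.
public class LidarScanSketch : MonoBehaviour
{
    public int raysPerScan = 360;   // measurements per full rotation (assumed)
    public float maxDistance = 10f; // maximum scan range in meters (assumed)
    public float noise = 0.02f;     // noise amplitude in meters (assumed)

    public struct Measurement { public float angle; public float distance; }

    public List<Measurement> Scan()
    {
        var measurements = new List<Measurement>();
        for (int i = 0; i < raysPerScan; i++)
        {
            float angle = i * 360f / raysPerScan;
            Vector3 direction = Quaternion.Euler(0f, angle, 0f) * transform.forward;

            // Only obstacles within maxDistance produce an entry in the array.
            if (Physics.Raycast(transform.position, direction, out RaycastHit hit, maxDistance))
            {
                float noisy = hit.distance + Random.Range(-noise, noise);
                measurements.Add(new Measurement { angle = angle, distance = noisy });
            }
        }
        return measurements;
    }
}
</syntaxhighlight>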


=== Camera ===
[[File:Particle image.png|thumb|Fire particles in a burning room]]
In the real world, the robot would use an infrared camera for two main reasons: first, it can detect the temperature of solid surfaces, and second, it can see through smoke, which occurs quite often in a house fire scenario. However, we chose not to simulate the infrared camera in Unity. This is because none of the current algorithms depend on the infrared camera input, only on the heat sensor and the LiDAR. Additionally, implementing the infrared camera would greatly increase the computational complexity of the simulation, since we would need FDS to precalculate (in great detail) the surface temperature of the solids in the room, and we would also need to render these temperatures on the camera. Since the frame rate of the simulation was not very high (30 FPS), we decided to opt out of simulating a full-blown infrared camera. Instead, to still represent what the robot operator can see while controlling the robot, we attached an object with a Camera component<ref>Camera component - https://docs.unity3d.com/Manual/class-Camera.html</ref>.

To visualize the heat distribution during the simulation for demonstration purposes, we use the Unity particle system. For each point (x, z) generated by FDS, we instantiate an object at the real world coordinates corresponding to the data point. The object emits red semi-transparent particles at a rate proportional to the heat data point at the given (x, z) coordinates and time. This is not accurate to a real fire, but it serves as a visual aid when working with and testing the simulation.
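The particle-based visualization can be sketched like this; the emitter prefab reference and the scaling factor are assumptions, not values from the project:

<syntaxhighlight lang="csharp">
using UnityEngine;

// Illustrative heat visualization: one particle emitter per FDS data point,
// with an emission rate proportional to the local temperature.
// The prefab reference and the scaling factor are assumptions.
public class HeatParticles : MonoBehaviour
{
    public ParticleSystem emitterPrefab; // red, semi-transparent particles
    public float ratePerDegree = 0.1f;   // particles per second per degree (assumed)

    ParticleSystem[,] emitters;

    // toWorld maps FDS grid indices (x, z) to Unity world coordinates.
    public void CreateEmitters(int nX, int nZ, System.Func<int, int, Vector3> toWorld)
    {
        emitters = new ParticleSystem[nX, nZ];
        for (int x = 0; x < nX; x++)
            for (int z = 0; z < nZ; z++)
                emitters[x, z] = Instantiate(emitterPrefab, toWorld(x, z), Quaternion.identity);
    }

    // Called whenever the simulation advances to the next FDS snapshot.
    public void ApplySnapshot(float[,] snapshot)
    {
        for (int x = 0; x < snapshot.GetLength(0); x++)
            for (int z = 0; z < snapshot.GetLength(1); z++)
            {
                // EmissionModule is a struct view onto the particle system;
                // setting rateOverTime writes through to the system itself.
                var emission = emitters[x, z].emission;
                emission.rateOverTime = snapshot[x, z] * ratePerDegree;
            }
    }
}
</syntaxhighlight>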


== '''Simulated environments''' ==
[[File:House.png|thumb|The house simulation environment]]
[[File:School.png|thumb|The school simulation environment]]
One of the biggest advantages of simulating the fires was that multiple environments could be built to test the robot's performance under different circumstances. It was decided to build three different environments for the project, each of which challenges the robot in a different way. A house, a school and an office building were chosen, because these seemed like very plausible locations for an actual fire.


=== House ===
The house was the first and smallest environment that was constructed. It has a very small and simplistic layout and was chosen as the first environment, to test the robot in the early stages of its development and to learn how to work with Unity. It consists of a living room/kitchen, another smaller living room, a bathroom and two bedrooms. It was a useful environment for testing, especially early on, and its relatively small size also meant that computation times were a lot shorter. It was also with this environment that we concluded that 3D fire was not practical or necessary for this project, because the computation time was too long, even for this smaller environment. A house is also a very realistic place for the robot to be used, because firefighters are less likely to have building plans for houses than for larger buildings, meaning that a mapping robot would be ideal.


=== School ===
The school was the second simulation environment that we made. The idea of this environment was to expand on the simplistic environment of the house and add more complexity. This complexity stems from the larger scale and more intricate layout of the school. Firstly, long narrow hallways could challenge the capabilities of the mmWave radar and of the mapping algorithms. Secondly, the layout was made so that some rooms lead to other rooms, making the resulting map a lot more complex than that of its predecessor. We were interested to see whether our mapping algorithm would be able to handle these more complicated layouts. The environment consists of 7 classrooms, the main hallway/cafeteria, 3 toilets, the teachers' room, 2 storage rooms and 2 smaller hallways.
=== Office ===
[[File:Office.png|thumb|The office simulation environment]]The office is the third and last simulation environment that was made for the project. An office building was chosen because all environments until now consisted of smaller rooms connected to each other: the house with its small rooms, and the school with its relatively small classrooms and narrow, well defined hallways. The goal of the office building was therefore to challenge the robot with larger open spaces, which could be difficult to deal with because of the limited range of the mmWave radar. It was also of interest because we wanted to see how differently a fire would spread with larger open areas to spread through, instead of the relatively closed spaces of the other environments. In this environment the cafeteria is connected all the way to the work spaces, making one long main space with smaller meeting rooms connected to it. A larger environment was possible in this part of the project because the choice for 2D fire had already been made, which cut down a lot on the computation time and meant that the scale could be increased considerably. The environment consists of the larger cafeteria, which is connected to the working spaces, 6 toilets, 3 separate meeting rooms, 1 kitchen and 2 storage rooms.


=== Future environments ===
More simulation environments could have been built, but for the purposes of this project the current ones were deemed sufficient. It should be noted that, with the foundation laid in this project, it would be fairly easy to add more environments and simulate fires in them if someone were to build upon this project.


== '''Mapping algorithms''' ==


=== SLAM ===
[[File:HeatMap.png|thumb|The map generated by the SLAM algorithm, along with the heat data measured by the robot.]]
SLAM<ref>Riisgaard, S., & Blas, M. R. (2005). SLAM for dummies. In SLAM for Dummies [Book]. https://dspace.mit.edu/bitstream/handle/1721.1/119149/16-412j-spring-2005/contents/projects/1aslam_blas_repo.pdf</ref> (simultaneous localization and mapping) is a technique for constructing a map of the robot's environment (i.e., where the obstacles and empty space around the robot are) while, at the same time, determining the location of the robot inside that environment.
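For reference, this can also be phrased probabilistically (this is the general textbook formulation, not something specific to the implementations we used): SLAM estimates the joint posterior

<math>p(x_{1:t}, m \mid z_{1:t}, u_{1:t})</math>

where <math>x_{1:t}</math> are the robot poses up to time <math>t</math>, <math>m</math> is the map, <math>z_{1:t}</math> are the sensor observations and <math>u_{1:t}</math> are the controls and odometry readings. Different SLAM algorithms differ mainly in how they approximate this posterior.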


In our simulation, we use data from an emulated LIDAR-like sensor as the input to the SLAM algorithm: a set of observations, each consisting of the distance to an obstacle relative to the sensor and the angle at which that distance was measured. This is provided to the algorithm at regular time steps. The information is combined with a series of controls (i.e., movement instructions) given to the robot and with odometry information (i.e., how far the robot has moved based on its own sensors). Using this input, the SLAM algorithm updates both the state of the robot (i.e., its location) and the map of the environment using probabilistic techniques.
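To make this concrete, the sketch below shows one way such a LIDAR-like sensor can be emulated in Unity with raycasts. This is a minimal illustrative sketch, not our actual sensor script: the class name LidarScanner and its parameters are invented here, and a real mmWave Radar model would add noise and range limitations.

<syntaxhighlight lang="csharp">
using UnityEngine;

// Illustrative sketch: emulate a planar LIDAR-like sensor by casting rays
// in a circle around the sensor and recording (angle, distance) pairs.
public class LidarScanner : MonoBehaviour
{
    public int beamCount = 360;   // one beam per degree
    public float maxRange = 10f;  // metres; beyond this we report no hit

    // Returns one scan: distances[i] is the range measured at angle i * (360 / beamCount).
    public float[] Scan()
    {
        var distances = new float[beamCount];
        for (int i = 0; i < beamCount; i++)
        {
            float angle = i * 360f / beamCount;
            // Rotate the sensor's forward direction around the vertical axis.
            Vector3 dir = Quaternion.Euler(0f, angle, 0f) * transform.forward;
            distances[i] = Physics.Raycast(transform.position, dir, out RaycastHit hit, maxRange)
                ? hit.distance
                : maxRange; // treat "no hit" as maximum range
        }
        return distances;
    }
}
</syntaxhighlight>

Feeding the returned (angle, distance) pairs to the SLAM update at a fixed time step reproduces the observation stream described above.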


Two SLAM algorithms were ported to Unity: CoreSLAM<ref>Steux, Bruno & El Hamzaoui, Oussama. CoreSLAM: a SLAM Algorithm in less than 200 lines of C code. https://www.researchgate.net/publication/228374722_CoreSLAM_a_SLAM_Algorithm_in_less_than_200_lines_of_C_code </ref>, an efficient and fast SLAM algorithm suitable for systems with limited resources, and a C# port of the HectorSLAM<ref>hector_slam - ROS Wiki. https://wiki.ros.org/hector_slam</ref> algorithm, which is part of the functionality of ROS.


The starting point for our implementation was a public code base<ref>Mikkleini. GitHub - mikkleini/slam.net: Simultaneous localization and mapping libraries for C#. GitHub. https://github.com/mikkleini/slam.net </ref>, which was adapted to compile against, and optimized for, the .NET version used by Unity. Several adaptations were needed before the SLAM code could be used in Unity, since the original CoreSLAM and HectorSLAM implementations were written for a different version of .NET. To do this we ported the source code into our Unity project (the SLAM code can be found in the Assets/SLAM folder). After this we adapted the SLAM algorithms to accept the input from our LiDAR sensor. We did this for both SLAM implementations, in the BaseSlam.cs<ref name=":44" /> and HectorSlam.cs<ref name=":44" /> scripts respectively. In these scripts we translate the output of the LiDAR sensor and feed it into the SLAM algorithms, which generate an obstacle map of a given resolution and size. The resolution and size can be set as parameters before starting the simulation. The map generated by the SLAM algorithms was translated from raw byte data (the likelihood of there being an obstacle) to a dynamic image on the screen. Initially this was done by calling the SetPixel<ref name=":43">Unity - SetPixel https://docs.unity3d.com/ScriptReference/Texture2D.SetPixel.html</ref> method of a texture<ref>Unity - Texture2D https://docs.unity3d.com/ScriptReference/Texture2D.html</ref> for an image<ref>Unity - RawImage https://docs.unity3d.com/2018.2/Documentation/ScriptReference/UI.RawImage.html </ref> in the simulation UI. However, this method of rendering the map was extremely slow for larger map resolutions, since each pixel had to be drawn by the CPU, and as a result the frame rate of the simulation was very low (5-10 fps). This was fixed by using a ComputeShader<ref>Unity - ComputeShader https://docs.unity3d.com/ScriptReference/ComputeShader.html</ref>, which uses the GPU to draw each pixel of the texture in parallel, based on the predicted likelihood from the SLAM algorithm. The code for the shader is in LidarDrawShader.compute<ref name=":44" /> in the Assets folder of our Unity project. This change drastically increased the frame rate to around 70 frames per second.
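To make the GPU rendering step concrete, the sketch below shows the dispatch side of such a setup. It is an illustrative sketch only, not the code around our LidarDrawShader.compute: the kernel name "DrawMap", the buffer and texture names and the 8x8 thread-group size are assumptions made for this example.

<syntaxhighlight lang="csharp">
using UnityEngine;

// Illustrative dispatch-side sketch: upload the SLAM likelihood map to the GPU
// and let a compute shader kernel write one pixel per map cell, instead of
// looping over SetPixel on the CPU.
public class MapRenderer : MonoBehaviour
{
    public ComputeShader drawShader;  // assumed to contain a kernel named "DrawMap"
    public int mapSize = 512;         // map is mapSize x mapSize cells

    RenderTexture mapTexture;
    ComputeBuffer likelihoodBuffer;
    int kernel;

    void Start()
    {
        // Random-write access is required for a compute shader to write into a texture.
        mapTexture = new RenderTexture(mapSize, mapSize, 0) { enableRandomWrite = true };
        mapTexture.Create();
        likelihoodBuffer = new ComputeBuffer(mapSize * mapSize, sizeof(int));
        kernel = drawShader.FindKernel("DrawMap");
        drawShader.SetTexture(kernel, "Result", mapTexture);
        drawShader.SetBuffer(kernel, "Likelihood", likelihoodBuffer);
    }

    // Called whenever the SLAM algorithm produces an updated likelihood map.
    public void Redraw(int[] likelihood)
    {
        likelihoodBuffer.SetData(likelihood);
        // One GPU thread per pixel, in 8x8 thread groups (the kernel must match).
        drawShader.Dispatch(kernel, mapSize / 8, mapSize / 8, 1);
    }

    void OnDestroy() => likelihoodBuffer?.Release();
}
</syntaxhighlight>

The corresponding kernel then computes one pixel per thread, which is what removes the CPU bottleneck of the SetPixel approach.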
Besides showing the obstacle data, we also need to visualize the temperature measurements on our mini-map. This was done in two ways. First, we display the current temperature measurement as text on the user interface, so the operator can clearly see if the robot is in danger of overheating. Second, we draw a "heat path" of the robot, displaying the temperatures that were measured during the exploration of the environment. Older measurements slowly fade out by increasing the transparency values of their colors. To implement this we also used the SetPixel<ref name=":43" /> method for a 2D texture in Unity. The code for displaying the heat data can be found in LidarDrawer.cs<ref name=":44" />. The final result of the mini-map can be seen in the accompanying image.
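The sketch below illustrates the fading heat-path idea. It is a minimal sketch, not the actual LidarDrawer.cs: the class name, the colour mapping and the fade rate are invented for illustration.

<syntaxhighlight lang="csharp">
using UnityEngine;

// Illustrative sketch of the "heat path": stamp the current temperature at the
// robot's map position and let older stamps fade out over time.
public class HeatTrail : MonoBehaviour
{
    public Texture2D trailTexture;      // the 2D texture drawn on the minimap
    public float fadePerUpdate = 0.02f; // alpha each old stamp loses per update

    public void AddMeasurement(int x, int y, float temperature)
    {
        // Fade everything already on the trail by reducing its alpha.
        Color[] pixels = trailTexture.GetPixels();
        for (int i = 0; i < pixels.Length; i++)
            pixels[i].a = Mathf.Max(0f, pixels[i].a - fadePerUpdate);
        trailTexture.SetPixels(pixels);

        // Map temperature to a colour: blue (cool) to red (hot), clamped at 400 °C.
        float t = Mathf.Clamp01(temperature / 400f);
        trailTexture.SetPixel(x, y, Color.Lerp(Color.blue, Color.red, t));

        trailTexture.Apply(); // upload the modified pixels to the GPU
    }
}
</syntaxhighlight>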
 


== '''Further work''' ==
There were quite a few things that we originally wanted to do but could not due to time constraints. Below are some features of the simulation, and some research directions, that could continue this project. Firstly, the parts of the robot we ignored for this project can be looked into, such as the movement of the robot and, with that, its path-finding, as well as the physical aspect of the robot: currently the robot in the simulation is invincible and does not influence its environment. In a real firefighting situation this will not be the case, so to get closer to the design of an actual scouting robot for firefighters, research needs to be done into the materials and physical capabilities of such a robot.


Secondly, there are many features that could be added to the robot that did not fit in the scope of this course. The biggest feature we did not include in our simulation is victim detection. In our USE analysis this feature was identified as one of the most important next to mapping, so adding a victim detection algorithm to the simulation would be one step closer to realizing an actual robot. Another possibility we encountered near the end of the course was to include the heat sensor data in the SLAM algorithm; we did not have time to add this to the simulation, but using the heat data could improve the mapping and thereby further improve the functionality of the robot. If all of this is realized, one can look at what is ultimately the end goal of the research: actually building a fire reconnaissance robot to assist firefighters in dangerous indoor fires and help preserve the lives of victims and firefighters.


== '''Appendix''' ==

===Appendix 1; Logbook===
{| class="wikitable"
! Week !! Name !! Tasks !! Total hours
|-
|Kamiel Muller
|Introductory Lecture (2h), Meeting (1h), Brainstorm (0.5h), Find papers (1h)
|4.5h
|-
|Georgi Nihrizov
|-
|Twan Verhagen
|Introductory Lecture (2h), Meeting (1h), Brainstorm (0.5h), Find papers (1h)
|4.5h
|-
| rowspan="6" |2
|-
|Kamiel Muller
|Weekly evaluation (0.5h), Meeting (2h), Correspondence firefighting station (0.5h), Meeting (1h), Work on Wiki page (2h)
|6h
|-
|Georgi Nihrizov
|-
|Twan Verhagen
|Weekly evaluation (0.5h), Meeting (2h), Meeting (1h), Reviewing Wiki (1h), Researching Literature (3h)
|7.5h
|-
| rowspan="6" |3
|-
|Wiliam Dokov
|Weekly evaluation (0.5h), Meeting (2h), Meeting (1.5h), Installing ROS (on Windows) (1h), Researching viable ROS + Gazebo simulation methods for Windows (2h), Trying to implement Gazebo simulation on Windows (2h), Doing ROS basic tutorial (2h), Doing Gazebo basic tutorial and figuring out how plugins work (3h)
|14h
|-
|Kamiel Muller
|Weekly evaluation (0.5h), Meeting (2h), Meeting (1.5h), Researching Literature (5h), Adding Literature to Wiki and updating segments of the introduction of the Wiki (4h)
|13h
|-
|Georgi Nihrizov
|-
|Twan Verhagen
|Weekly evaluation (0.5h), Meeting (2h), Meeting (1.5h), Adding labels to wiki (2h), Research into sensors (3h)
|9h
|-
| rowspan="6" |4
|-
| rowspan="6" |5
|Dimitrios Adaos
|Weekly evaluation (0.5h), Discussion (2h), Sunday Meeting (1h), Exploring alternate possibilities for fire simulation (3h), Experimenting with FDS abstraction level (3h), FDS model creation/description (8h)
|17.5h
|-
|Wiliam Dokov
|Weekly evaluation (0.5h), Discussion (2h), Sunday Meeting (1h), Learning how LiDARs work (3h), Setting up LiDAR sensor in Unity (3h), Learning how differential drive odometry works (3h), Trying to port motor encoder sensors to Unity (unsuccessfully) (5h)
|17.5h
|-
|Kwan Wa Lam
|Weekly evaluation (0.5h), Discussion (2h), Sunday Meeting (1h), Finding and modifying building plan (1h), Building environment in Unity and adding furniture (6h), Trying to retrieve progress (1h)
|11.5h
|-
|Kamiel Muller
|Weekly evaluation (0.5h), Discussion (2h), Sunday Meeting (1h), Doing further research into Unity environments (4h), Getting started on making a school-based environment (3h)
|10.5h
|-
|Georgi Nihrizov
|Weekly evaluation (0.5h), Discussion (2h), Sunday Meeting (1h), Combining all modules and adding finished scenes (8h), Improving code quality (3h)
|14.5h
|-
|Twan Verhagen
|Weekly evaluation (0.5h), Discussion (2h), Research into creating a map and testing in Unity (6h), Adding reasoning to the wiki (2h)
|10.5h
|-
| rowspan="6" |6
|Dimitrios Adaos
|Weekly evaluation (0.5h), Discussion (2h), Sunday Meeting (1h), FDS model creation/description (2.5h), Studying SLAM (4h), Investigating SLAM implementations (4h), Porting SLAM implementation to Unity (6h)
|19h
|-
|Wiliam Dokov
|Weekly evaluation (0.5h), Discussion (2h), Sunday Meeting (1h), Working more on motor encoders in Unity (3h), Learning about SLAM (4h), Cleaning up Unity project code so we can test different algorithms (3h)
|13.5h
|-
|Kwan Wa Lam
|Weekly evaluation (0.5h), Discussion (2h), Sunday Meeting (1h), Finding and modifying building plan for office building (1h), Installing and getting acquainted with GitHub (1h), Building the office environment (6h)
|11.5h
|-
|Kamiel Muller
|Weekly evaluation (0.5h), Discussion (2h), Figuring out how to import assets into Unity (2h), Fully creating a school-based environment (7h)
|11.5h
|-
|Georgi Nihrizov
|Weekly evaluation (0.5h), Discussion (2h), Basic minimap using LiDAR (6h), Adding new scenes (4h), Combining heat sensor with SLAM (3h)
|15.5h
|-
|Twan Verhagen
|Weekly evaluation (0.5h), Discussion (2h), Meeting about the minimapping (1h), Research into using a canvas for a minimap (3h), Research into 2 methods to create minimap (5h)
|11.5h
|-
| rowspan="6" |7
|Dimitrios Adaos
|Weekly evaluation (0.5h), Discussion (2h), Fixing porting issues with SLAM (6h), Rework FDS models (1h)
|9.5h
|-
|Wiliam Dokov
|Weekly evaluation (0.5h), Discussion (2h), Looking at how to do a SLAM minimap in Unity (3h), Porting working Base SLAM into Unity (5h), Porting working HectorSLAM into Unity (3h)
|13.5h
|-
|Kwan Wa Lam
|Weekly evaluation (0.5h), Discussion (2h), Fix second environment (1h), Work out questions and summary (3h), Make PowerPoint (1h), Add to questions (2h), Work on wiki (3h)
|12.5h
|-
|Kamiel Muller
|Weekly evaluation (0.5h), Discussion (2h), Finish third environment (2h), Work out questions and summary (3h), Work on wiki (3h)
|10.5h
|-
|Georgi Nihrizov
|Weekly evaluation (0.5h), Discussion (2h), Standardizing and improving heat sensor (5h), Improving robot controller (2h), Cleaning up (4h)
|13.5h
|-
|Twan Verhagen
|Weekly evaluation (0.5h), Discussion (2h), Editing the wiki (5h), Preparing for presentation (3h)
|10.5h
|-
| rowspan="6" |8
|Dimitrios Adaos
|Discussion (2.5h), Meeting (1.5h), Preparing script for presentation (2h), Preparing slides for presentation (2h), Practicing for presentation (1h), Presentation (2h), Finalizing wiki (8h)
|19h
|-
|Wiliam Dokov
|Discussion (2.5h), Meeting (1.5h), Presentation (2h), Learning about compute shaders in order to improve frame rate of simulation (3h), Combining heatmap and SLAM map data into one (3h), Working on simulation UI (2h), Fixing various bugs before presentation (3.5h), Finalizing wiki page (10h)
|27.5h
|-
|Kwan Wa Lam
|Discussion (2.5h), Meeting (1.5h), Presentation (2h), Doing peer review (0.5h), Rewriting introduction (3h), Changing wiki layout (2h), Adding pictures + fixing references + adding descriptions (2h), Writing simulated environments House + School + Office (3h), Finalizing wiki (4h)
|20.5h
|-
|Kamiel Muller
|Discussion (2.5h), Meeting (1.5h), Presentation (2h), Finalizing wiki (5h)
|11h
|-
|Georgi Nihrizov
|Discussion (2.5h), Meeting (1.5h), Presentation (2h), Finalizing wiki (6h), Finalizing simulation (3h), Adding particles for visualization (6h)
|21h
|-
|Twan Verhagen
|Discussion (2.5h), Meeting (1.5h), Presentation (2h), Finalizing wiki (5h)
|11h
|}


=== Appendix 2; Project Summary ===
 
==== Week 1 ====
In the first week every team member came up with multiple ideas for the project. A team meeting followed where the most interesting one, firefighting robots, was selected. It was also decided that a simulation was the most practical way to reach the goals we set for the robot (mainly making an accurate map without prior knowledge and adding a heat map on top). After making these first big decisions the team started looking into literature on firefighting robots: the sensors they use, path planning algorithms, mapping algorithms, etc. These papers were summarized and noted down in a way that made them easy to use later in the project.
 
==== Week 2 ====
The most important event of the second week was the interview with the firefighters. This helped a lot in building our vision of what a robot should be able to do to be genuinely useful to the people who work in this profession. Next to that, a large part of the wiki and its structure was made during this week. Some more research was done on the general subject, and a good start was made on researching which simulation environments existed and which would be optimal for our project.
 
==== Week 3 ====
No definitive choice had yet been made on which simulation environment to use. The options had been narrowed down considerably, and we experimented with the remaining candidates to see which would be best for this project. The main focus of research this week was on Unreal Engine, FDS, ROS, Gazebo, sensors, and state-of-the-art firefighting robots. The wiki was only updated with some of this research.
 
==== Week 4 ====
In this week it was finally decided to go with Unity and FDS for the simulation. This week and the coming weeks were mostly defined by actually writing the code for everything the robot needed to do and building the environments it needed to be placed in; more effort was put into this than into noting the progress down in the wiki. Everyone who was not yet familiar with Unity did at least a basic tutorial. Further work was also done on FDS, setting up the mmWave Radar in Unity, building the first environment in Unity, looking into the robot's movement, setting up a heat sensor in Unity, and setting up mini-maps in Unity.
 
==== Week 5 ====
In this week the first environment was finished in Unity and a first version of the mmWave Radar was successfully tested in it, although at this point the radar still had perfect accuracy and the minimap did not yet provide lasting data. Some testing was done regarding how much accuracy was needed for the FDS data, as with the near-perfect accuracy used at the time it would take far too long to actually run the simulation. Research was also done into how to import the fire data from FDS into Unity. The main focus for the following weeks was to create fully functioning minimaps, combine the data from the mmWave Radar and heat sensor, implement a SLAM algorithm and build several different environments in Unity.


==== Week 6 ====
In this week the second Unity environment and an FDS model for the first Unity environment were created. A final decision regarding the accuracy of the FDS data was also made: only a 2D slice of the fire data would be used, in order to save computational power. While this limits the accuracy of the fire data, our project mainly focuses on the robot detecting the fire, so it is less relevant to have perfect fire data. Additionally, working minimaps for both the mmWave Radar and the heatmap were made, and a start was made on the research into SLAM.


==== Week 7 ====
This week marks the point where the projects started in previous weeks or this week, with the exception of SLAM, were finalized. These include the creation of the third and final Unity environment, FDS models with their respective fire data for the second and third Unity environments, and the fusion of the data provided by the mmWave Radar (and its minimap) with the heatmap. The only things that still needed to be done were to fully combine all the separate projects and models into a final product and to finish the research into SLAM.


==== Week 8 ====
In the final week, the data from the FDS models was successfully transferred to the Unity environments and the SLAM algorithm was finished. The rest of the week was mostly focused on preparing for and giving the final presentation. The only thing that still needed to be done after the presentation was cleaning up the wiki and adding all of the source code and documentation to the wiki.


=== Appendix 3; Firefighter Interview ===
'''Question''': What does a typical firefighting mission look like? How do you gather information? Do you work in a team? What are your objectives and priorities (Search for people, extinguish fire)?


'''Answer''': We usually walk around the building to find the location of the fire and make sure that the fire does not spread. We also make sure that all doors and windows are closed to prevent any air currents from causing a back-draft that leads to an explosion. We then determine if we can take an aggressive approach to extinguishing the fire; by entering the building, or a passive/defensive approach; by trying to extinguish from outside. When we do enter the building our first priority is looking for people, then extinguishing. We do this to avoid risking that people will inhale the smoke from extinguishing the flames. The most important things to know are the characteristics of the building in which the fire is located, and to find out this information we usually ask the people in the area. We also generally need to know how much water the fire we are trying to extinguish needs. Generally 1 couch needs 1 hose of water to extinguish.


'''Question''': What usually goes wrong during these missions?


'''Answer''': We use portable phones for communication inside the buildings and so communication can be an issue. It is also difficult to find out which way we need to go due to smoke/visibility issues.


'''Question''': What are the main causes of failures during a firefighting mission?


'''Answer''': Flammable materials that can cause explosions are the most dangerous, and we need to make sure we are nowhere near them when we discover them. There is also the danger of running out of water to extinguish, at which point we have to give up on extinguishing the fire. Also, metal and concrete buildings, which contract due to high temperatures, are likely to collapse after about an hour of being aflame.


'''Question''': Do you have any ideas on how to prevent these issues?


'''Answer''': When we encounter containers of flammable materials we try to remove them from the building while cooling them with water. We then place them behind walls to keep ourselves and others safe. If we know that a metal or concrete building has been burning for a while then we simply do not enter and try to passively/defensively extinguish the fire.


'''Question''': What are the current firefighting tools at your disposal? Do you think that you need something more?


'''Answer''': We have an infrared camera, a CO<sub>2</sub> meter to detect dangerous substances, oxygen masks and some tools for opening doors safely.


'''Question''': In the ideal case what functions would you want the robot to perform?
 
'''Answer''': Most important would be information about heat/a heatmap and information about where people are located inside burning buildings. We would then need information about the structure of the building, how big the fire is and if possible the location of any obstacles inside the building.
 
'''Question''': What level of autonomy would be best? With autonomy defined as what kinds of permissions the robot has to do without human intervention.
 
'''Answer''': For us the robot should be able to work on its own, but have a simple interface so that our chief/director can direct it from a tablet in our firetruck if needed.
 
'''Question''': What information are you missing and would like to know when there is a fire in a building?
 
'''Answer''': In order of priority:


# Information about people in the building
# Basic obstacle map


'''Question''': Suppose there is a hard-to-reach or inaccessible area, how influential would it be if a robot would be able to reach it fairly easily?


'''Answer''': This really depends more on the area, how many people are inside, if there is a fire in that location, or maybe any dangerous/explosive substances.


'''Question''': Have you used robots during your work? If so, what was your experience with them? Have you had any issues with them?


'''Answer''': I have not used any robots in my firefighting career.


'''Question''': Rank the following features based on importance (omitted here since they are present in the answer)


'''Answer''': In order of priority (with an addendum for some features at the end of the answer)


# Heat resistance
* Speed: Answer: Depends on size of building, most buildings are not that large
* Accuracy: Answer: relevant for larger scale (need to distinguish 200°C from 400°C), but high accuracy not important at smaller scale (160°C vs 180°C)
'''Question''': Do you have something more to add on this topic?
 
'''Answer''': Finding an entrance point for the robot could be dangerous; we can't really open any doors or windows easily to let the robot go in, due to the risk of a back-draft.
 
'''Question''': How long does a fire take to extinguish for an average house?
 
'''Answer''': For normal houses the entire process of extinguishing a fire takes about 10-15 minutes.


=== Appendix 4; Questions ===
Below is a list of questions (and answers) that was mostly made in preparation for the presentation, but that can still provide useful insight into our decision-making process where it is not covered in other sections.
* Why a simulation (instead of building a testing environment in real life)?
** Because it is better suited to the composition of our group, and because it is much easier to run repeated tests at short intervals (safely setting an environment on fire in a 'real' setting takes quite a bit of work). It also creates more opportunities for testing fires originating in different locations, and building multiple large-scale environments becomes possible, which would have been extremely hard and time-consuming otherwise. Finally, no real fire is needed: for the purposes of our project the fire does not have to be hyper-realistic, so a simulated fire, even if somewhat simplified, is accurate enough.
* Why this amount of different environments?
** We wanted to test the robot in a varying set of environments, and this number was the most feasible within the timeframe while still offering a good amount of variety.
* Why these specific environments?
** These three, a house, a school and an office, are some of the most common types of environments. They are useful for testing how the robot handles different kinds of environments: from the smaller-scale house, to the school with a more complicated layout but still fairly restricted rooms, to the office with larger open areas.
* Why are there no stairs in the environment?
** Right now the robot does not have stair-climbing capabilities, meaning that in the current simulations stairs would not add any value. Stairs can be seen with the infrared camera on the robot and will be avoided by the operator.
* Why is the robot grounded?
** Because a flying robot could potentially cause backdrafts and thus cause the fire to spread faster and in a more unpredictable fashion. One of the first things firefighters do (source: interview) is to close any window they can see/find.
* What would happen if the floor isn't completely flat?
** The robot has an infrared camera mounted on the front so that, next to the mapping, the operator can actually see the environment. For now it is assumed that the floor is flat, as this holds for almost every building; if there were holes below the level that the mmWave Radar can detect, it would be up to the operator to spot and avoid them.
* How does the robot deal with obstacles lower or higher than what the mmWave Radar detects?
** Obstacles at a higher elevation than the robot are not really a problem, as the Radar is already the highest point of the robot. Lower obstacles are not that prevalent, and since the robot is remote controlled the operator would know if something went wrong and can still see the environment through the infrared camera.
* Why do you assume all windows are closed/where are the windows?
** One of the first things firefighters do (see interview) is to close any window they can see/find, so this is a fairly safe assumption to make.
* Why does the mmWave Radar only take a slice (2D instead of 3D)?
** Because we want to provide a readable map to the firefighters. We made it a 2D map because that is the easiest way to visualize all the important information (heat map and layout of the building) in a readable manner, so detecting in 3D is not necessary.
* Why a slice at that specific height?
** Measuring at the top of the robot ensures that, if an obstacle is encountered, it cannot be blocking just the area above the robot. It is also a height that may miss smaller, insignificant obstacles while still mapping the higher obstacles that are harder for a firefighter to traverse.
* Why only simulate fire in 2D?
** It saves us a lot of computational power to simulate the fire in 2D instead of 3D. The fire still spreads in a realistic way, and because we mainly focus on fire detection, mapping and heat mapping by the robot, a perfectly accurate fire simulation is not required. As long as the spread of the fire is realistic (so that the scenario is realistic), this level of simulation is enough to test the heat-mapping capabilities. Because our map is 2D, 3D fire physics would be irrelevant.
* Why not simulate smoke?
** Because we are working with a mmWave Radar that sees through smoke, a temperature sensor that isn't affected by smoke, and a thermal sensor that also sees through smoke. None of our sensors are influenced by smoke, so adding it to the simulation would not add any value and would require even more computational power.
* Why remote controlled and not autonomous?
** Making the robot autonomous would have been a very nice addition, but it wasn't feasible in the current timeframe.
* Why no automated function to protect the robot from extreme heat?
** Adding this feature would have been really nice, but it wasn't possible in the current timeframe. For now we argue that the robot will be built to withstand extreme temperatures. Keep in mind that the operator has access to the heat map built up to that point and to an infrared camera, so even if the robot encountered heat levels it cannot handle, the operator could simply steer it around that area.
* Why no victim detection?
** Victim detection would have been a very nice addition, but it wasn't feasible in the current timeframe. There are, however, known sensors that can detect victims even through smoke (see our research literature with the tag Victim detection).
* Why use mmWave Radar and not another sensor?
** A mmWave Radar has a similar function to a LiDAR but can operate through smoke, which makes it essentially the best option for general mapping purposes.
* Why a temperature sensor and not, for example, an infrared camera for heat mapping?
** An infrared camera can only measure temperature at surfaces, in which case the data wouldn't be fully accurate.
* Why this form of robot (three-wheeled)?
** It is currently simply a placeholder for this simulation; as long as the height of the robot doesn't change, any shape would still work.
* In what way is making the heatmap useful for a firefighter?
** For a lot of smaller-scale buildings a general floor plan is not available, so both the mapping feature of the robot and its ability to detect heat provide a lot of valuable information. In the interview conducted with the firefighting station on campus, it turned out that a heatmap is information that is generally missing and quite important to a firefighter.


* In what way is this innovative and/or contributes something new?
** We researched the most prominent state-of-the-art firefighting and rescue robots currently on the market, and after thorough research did not find any robots in use that build a map and heatmap without prior information under these extreme conditions.
* Why this size of robot?
** This size was chosen because a small robot like this can easily go around obstacles. Making the robot very large would not add any extra value for the purpose that it has.


== '''Sources''' ==
<references />


For this purpose our group will focus on the design of a simulation of a fire reconnaissance robot that is able to navigate inside a building, build a map of the inside of the building without any prior knowledge of the layout and add a heat-map on top of it so firefighters can see what pathways are possible to traverse and where the source of the fire is.

=== Objectives ===

Our objective is to design a simulation of a robot capable of mapping and heat-mapping the layout of a building, without prior knowledge, in a fire scenario with a sufficient level of realism. To realize this objective we will focus on the following features for our simulation:

* An environment where such a robot would realistically be used
* A simulation of a fire scenario with a sufficient level of realism (what a sufficient level of realism consists of for the purposes of this project is expanded upon later)
* A robot which can be remotely controlled
* A robot capable of making and displaying a map of its environment and its position in it (SLAM)
* A robot capable of adding heat information to a map (creating a heat-map)

=== Users ===

Firefighters and first responders would be the primary users of the robot. They are the ones who need to deploy and interact with the robot, which means it should be easy and quick to set up in emergency situations where time is of the essence. Their insights and experience are also valuable for making the robot work as effectively as possible in their field of expertise. It is also important that the robot can properly communicate with the firefighters during an emergency and relay information about fire sources, obstacles, victims, and feasible rescue paths.

The secondary users of a firefighting/rescue robot would be the victims and civilians. The robot is made to help them and come to their aid. A way to communicate with the victims might be needed so they can be assisted most effectively, which could prove challenging because of the low visibility and low audibility during a fire.

Interested parties for deploying the robot are firefighting authorities, which are tasked with responding to fire incidents and saving lives and property; insurance companies, which benefit from minimizing the loss of life and property; and companies that own large buildings, which could consider making the robot part of their regular infrastructure.

=== Approach ===

An in-person interview was conducted with the TU/e fire department (see appendix)[3]

The project started off with an in-depth literature study on the subject matter and closely related topics. Meanwhile we made contact with the fire department at the TU/e, so we could do an accurate user analysis based on both literature and our actual target users. This was followed by research into the state of the art in firefighting robots, so we could see what had been done before and what gaps could still be filled in this particular field. After all this information was collected we were ready to complete the user analysis and make a plan for what exactly we wanted to achieve during this project.

Once it was decided that the focus of the project would be a simulation of a (heat) mapping robot, the design part of the project could finally begin. We first had to research the simulators we could use (e.g. NetLogo, Webots, Gazebo, ROS, PyroSim) by evaluating their advantages and disadvantages against what we needed the simulation to do. We also did research into which sensors could be used in the type of scenario the robot is intended for.

After we finally settled on a simulator, the focus shifted to implementing a SLAM algorithm, building realistic simulation environments, simulating a realistic fire and implementing a heat map. Our target was to propose a robot design focused on its software, a simulation environment that could easily be used in further research on similar topics, and some additional hardware suggestions (the required sensors for our robot).

Further details on the course of the project and the week-to-week developments can be found in the appendix under Logbook and Project Summary.

=== Planning ===

Gantt chart for project planning.

== '''Research papers''' ==

To start off the project we did an in-depth literature study on firefighting robots and other relevant related subjects. The table below lists the relevant research papers, each accompanied by a quick summary to give a general idea of its contents and by labels that make it easier to find papers on specific subjects later in the project. Note that this does not include all sources used; the full list of references can be found at the bottom of the page.

Research papers
Title Labels Summary
1 A Victims Detection Approach for Burning Building Sites Using Convolutional Neural Networks[4] Victim detection, Convolutional neural network. They trained a convolutional neural network to detect people and pets in thermal IR, images. They gathered their own dataset to train the network. The network results were pretty accurate (One-Step CNN 96.3%, Two-Step CNN 94.6%).[4]
2 Early Warning Embedded System of Dangerous Temperature Using Single exponential smoothing for Firefighters Safety[5] Heat detection, Firefighter assistance. Proposes to add a temperature sensor to a firefighter's suit which will warn firefighters that they are in a very hot place > 200 °C.[5]
3 A method to accelerate the rescue of fire-stricken victims[6] Victim search method. This paper describes an approach for locating victims and areas of danger in burning buildings. A floor plan of the burning building is translated into a grid so that the robot can navigate the building. A graph with nodes representing each of the rooms of the building is then generated from the grid to simplify the calculations needed for pathing. The algorithm used relies on crowdsourcing information normalized using fuzzy logic and the temperature of a region as detected by the thermal sensors of the robot to estimate the probability that a victim is present in a room. The authors of the paper found that their approach was significantly faster at locating survivors than strategies currently employed by firefighter and strategies devised by other researchers.

Note: This paper uses the software PyroSim for their simulation. PyroSim offers a 30 day free trial, so it might be possible to use it for our own simulation. Needs further research into PyroSim.[6]

4 The role of robots in firefighting[7] Overview current robots. This paper describes the State of the Art in terrestrial and aerial robots for firefighting. At the same time the paper indicates that there is a general difficulty in the autonomy of such robots, mainly due to difficulties in visualizing the operation environment. There are, however, several projects aiming to address this issue and allow such robots to operate with more autonomy.[7]
5 SLAM for Firefighting Robots: A Review of Potential Solutions to Environmental Issues[8] Simultaneous localization and mapping. This paper aims to address some of the unfavorable conditions of fire scenes, like high temperatures, smoke, and a lack of a stable light source. It reviews solutions to similar problems in other fields and analyzes their characteristics from some previous  publications.

Based on the analysis of this paper, to address the effect of smoke, a combination of laser based and radar based methods is considered more robust. For darkness effects, the combination of Laser based methods combined with image capture and processing is considered the best approach.  Thermal imaging technology is also suggested for addressing high temperatures.[8]

6 A fire reconnaissance robot based on slam position, thermal imaging technologies, and AR display[9] Reconnaissance robot, Firefighter assistance, Thermal imaging, Simultaneous localization and mapping, Augmented reality. Presents design of a fire reconnaissance robot (mainly focusing on fire inspection. Its function is on passing important fire information to fire fighters but not direct fire suppression) It can be used to assist the detection and rescuing processes under fire conditions. It adopts an infrared thermal image technology to detect the fire environment, uses SLAM (simultaneous localization and mapping)technology to construct the real-time map of the environment, and utilizes A* and D* mixed algorithms for path planning and obstacle avoidance. The obtained information such as videos are transferred simultaneously to an AR (Augmented Reality) goggle worn by the firefighters to ensure that they can focus on the rescue tasks by freeing their hands.[9]
7 Design of intelligent fire-fighting robot based on multi-sensor fusion and experimental study on fire scene patrol[10] Firefighting robot, Path planning, Fire source detection, Thermal imaging, Binocular vision camera. This paper presents the design of an intelligent Fire Fighting Robot based on multi-sensor fusion technology. The robot is capable of autonomous patrolling and fire-fighting functions. In this paper, the path planning and fire source identification functions are mainly studied, which are important aspects of robotic operation. A path-planning mechanism based on an improved version of the ACO(Ant Colony Optimization) is presented to solve that basic ACO is easy to converge in the local solution. It proposes a method to reduce the number of inflection points during movement to improve the motion and speed of the robot

It uses a method for fire source detection, utilizing the combined operation of a binocular vision camera and and infrared thermal imager to detect and locate the fire source.

It also uses ROS (Robot Operating System) based simulation to evaluate the algorithms for path planning.[10]

8 Firefighting robot with deep learning and machine vision[11] Firefighting robot, Deep learning. In this paper they made a firefighting robot capable of extinguishing fires caused by electric appliances using deep learning and machine vision. Fires are identified using a combination of AlexNet and ImageNet, resulting in high accuracy (98.25% and 92% respectively).[11]
9 An autonomous firefighting robot[12] Firefighting robot, Fire detection, Infrared sensor, Ultrasonic sensor. They made an autonomous firefighting robot which uses infrared and ultrasonic sensors to navigate and a flame sensor to detect fires.[12]
10 Real Time Victim Detection in Smoky Environments with Mobile Robot and Multi-sensor Unit Using Deep Learning[13] Victim detection, Thermal imaging, Remote controlled. A low-resolution thermal camera is mounted on a remote-controlled robot trained to detect victims. The victim detection model has a moderately high detection rate of 75% in dense smoke.[13]
11 Thermal, Multispectral, and RGB Vision Systems Analysis for Victim Detection in SAR Robotics[14] Victim detection robot, Multispectral imaging, primarily infrared and RGB. Compares the effectiveness of three different cameras for victim detection: RGB, thermal and multispectral.[14]
12 Sensor fusion based seek-and-find fire algorithm for intelligent firefighting robot[15] Fire detection, Algorithm. Introduces an algorithm for a firefighting robot that finds fires using a long-wave infrared camera, an ultraviolet radiation sensor and LIDAR.[15]
13 On the Enhancement of Firefighting Robots using Path-Planning Algorithms[16] Path planning algorithm. Tests the performance of several path-planning algorithms to allow a firefighting robot to move more efficiently.[16]
14 An Indoor Autonomous Inspection and Firefighting Robot Based on SLAM and Flame Image Recognition[17] Autonomous firefighting robot, Deep learning algorithm. Presents a firefighting robot that maps the area using a LIDAR-based algorithm and detects flames using deep-learning-based image recognition.[17]
15 Human Presence Detection using Ultra Wide Band Signal for Fire Extinguishing Robot[18] Victim detection, Remote control. A remotely controlled robot using ultra-wide band radar detects humans while fire and smoke are present, based on the person's respiratory movement.[18]
16 Firefighting Robot Stereo Infrared Vision and Radar Sensor Fusion for Imaging through Smoke[19] Real time object identification, Sensor fusion. Sensor fusion of stereo IR and FMCW radar was developed to improve the accuracy of object identification. This improvement makes the resulting imagery far more accurate while still maintaining real-time updates of the environment.[19]
17 Global Path Planning for Fire-Fighting Robot Based on Advanced Bi-RRT Algorithm[20] Path planning algorithm. Introduces a bidirectional fast search algorithm based on violent matching and regression analysis. Violent matching allows for direct path search when there are few obstacles; the other segments ensure that the total path search is more efficient and less computationally heavy.[20]
18 Round-robin study of a priori modelling predictions of the Dalmarnock Fire Test One[21] Fire modelling comparison. Compares the results of different types of fire simulation models with a real-world experiment.[21]
19 Summary of recommendations from the National Institute for Occupational Safety and Health Fire Fighter Fatality Investigation and Prevention Program[22] Most common causes of death of firefighters. Summary of the most common causes of death for firefighters. Cases were separated by nature and cause of death into 10 categories, as well as 2 major categories: medical and trauma.[22]
20 The current state and future outlook of rescue robotics[23] Overview of current robots and what needs to be improved upon. Discusses the main requirements and challenges that need to be solved by search and rescue robots, generally applicable to our firefighting robot. The most important aspects of search and rescue robots are: ease of use, autonomy, information gathering and use as tools.[23]
21 Smart Fire Alarm System with Person Detection and Thermal Camera[24] Fire alarm, Person detection in fire, Heat detection. Discusses a smart fire alarm system that distinguishes between heat when people are present and when they are not.[24]
22 See through smoke: robust indoor mapping with low-cost mmWave radar[25] Millimeter wave radar, Indoor mapping, Emergency response, Mobile robotics. Using a generative adversarial neural network, a low-cost mmWave radar can reliably reconstruct a grid map of a room.[25]
23 Analysis and design of human-robot swarm interaction in firefighting[26] Human-robot interaction, Human-robot swarm approach to firefighting, Existing robot-human interaction in firefighting. Describes the cooperation between robots and firefighters during a firefighting mission, including mission planning and execution. The premise is that robots can add sensing capabilities to improve awareness and efficiency in obscured environments.[26]
24 Using directional antennas as sensors to assist fire-fighting robots in large scale fires[27] Establishing communications via robots, Disasters and firefighting. Describes how to establish communication networks between robots in disastrous fire situations using directional antennas, so robots can be deployed to extinguish fires and reach places which firefighters can't easily reach.[27]
25 Design And Implementation Of Autonomous Fire Fighting Robot[28] Fire prevention, Heat sensor, Extinguishing robot. Describes a robot that can go into fires and reach places normal firefighters would be unable to reach safely.[28]
26 NL-based communication with firefighting robots[29] Overview of existing robots, Robot-human communication, Natural language. Describes different methods of cooperation between firefighters and robots during fires, and a robot that is meant to help firefighters during a fire.[29]
27 Experimental and computational study of smoke dynamics from multiple fire sources inside a large-volume building[30] Smoke dynamics, Statistical analysis, Smoke effects. Summarizes results from a fire simulation of 4 fire sources using the computational fluid dynamics code FDS (Fire Dynamics Simulator, v6.7.1) and compares those results to a single-source simulation, demonstrating the importance of the number and position of fire sources in a simulation.[30]
28 Numerical Analysis of Smoke Spreading in a Medium-High Building under Different Ventilation Conditions[31] Smoke dynamics, Statistical analysis. Uses simulation to compare smoke spreading in medium-high buildings under different ventilation conditions and draws conclusions on important points to consider in the design of a ventilation system for such buildings, such as smoke inlets and outlets and high pressure zones.[31]
29 A REVIEW OF RECENT RESEARCH IN INDOOR MODELLING & MAPPING[32] Indoor, Mapping, Modelling, Navigation. Summarizes the last 10 years of research on indoor modelling and mapping. Describes a variety of technologies, including laser scanners, cameras and indoor data models such as IFC, CityGML and IndoorGML. It also provides insight into recent navigation and routing algorithms with emphasis on dynamic environments.[32]
30 Developing a simulator of a mobile indoor navigation application as a tool for cartographic research[33] Virtual model, Indoor navigation, Indoor mapping. Documents the process of creating a proof of concept of a virtual indoor environment using Unreal Engine, aimed at improving the indoor cartographic process. While still a prototype, the paper can be used to derive useful methods for building simulation and navigation.[33]
31 Real time simulation of fire extinguishing scenarios[34] Virtual model. Describes software and methodology for simulating fire response scenarios. Demonstrates an implementation of FDS with Unreal Engine to generate a fire scenario simulation.[34]
32 Fire Fighting Mobile Robot: State of the Art and Recent Development[35] Overview of current robots. Gives an overview of some of the state-of-the-art firefighting robots used today.[35]

State of the Art Robots

Because firefighting is still a dangerous job to this day, a lot of effort is put into developing ways to minimize the risk for firefighters. In this age of technology, this means that many different firefighting robots are being developed to help firefighters in this delicate task. To see what developments have already been made in this field, we now give an overview of the current landscape of state-of-the-art firefighting robots, starting with a couple of examples of robots that are in use today. This paints a clear picture of the kinds of technologies that exist right now and what is already being done, which helps us find new and innovative angles on firefighting robots and prevents us from designing a robot that is already deployed in the field.

LUF60

Main Characteristics:
The LUF 60 fire fighting robot[36]

Diesel powered, Fire extinguishing, Remote controlled

Description:

This is a remote controlled firefighting robot that is designed to extinguish fires. It consists of an air blower that blows a beam of water droplets (to a distance of approximately 60 meters). This water jet can deliver up to 2400 liters of water per minute. If needed it can also blow a beam of foam (to a distance of approximately 35 meters). It is designed to operate in difficult conditions and can, by remote control, be sent directly to the fire source. To accomplish this the robot is able to remove obstacles and climb stairs (up to a 30 degree angle)[35][37]. It should be noted that LUF produces more robots similar to this one; see [38].

Advantages:
  • Minimizes risk for firefighters (remote controlled) and designed to also clear obstructions from a distance.
  • Good extinguishing capabilities (up to 2400 liters per minute).
Disadvantages:
  • Its size exceeds that of a standard door, restricting movement. Its dimensions are 2.33m x 1.35m x 2.00-2.50m (length x width x height), while a standard door is 0.80-0.90m x 2.00-2.10m (width x height).
  • Its high weight of 2,200 kg results in low mobility. The maximum speed is 4.5 km/h, which is around average walking speed[37].

EHang EH216-F

EHang EH216-F firefighting drone [39]
Main Characteristics:

Battery powered, Fire extinguishing, Drone, Remote or directly controlled

Description:

This is a firefighting robot that can be either remotely or manually controlled by a pilot. Its main focus is extinguishing fires in high-rise buildings that cannot be extinguished from ground level due to the limited range of fire hoses[7]. The robot has a maximum flight altitude of 600 m, is able to carry a single person and has a maximum cruising speed of 130 km/h. The drone carries 100 l of firefighting liquid (with the spray lasting for 3.5 minutes), 6 fire extinguishing projectiles containing fire extinguishing powder, and a window breaker. The drone is 7.33 m in length, 5.61 m in width and has a height of 2.2 m[40].

Advantages:
  • Is able to reach and extinguish fires that grounded firefighting equipment cannot reach.
  • Can extinguish fires from outside of the buildings, thus minimizing the risk for firefighters.
  • Has very high mobility, meaning it can quickly respond to fires.
Disadvantages:
  • Can only carry a set amount of fire extinguishing equipment, thus limiting its capabilities.
  • Its flight time is approximately 21 minutes, thus limiting deployment range and deployment time[40].

THERMITE RS3

The THERMITE RS3 firefighting robot[41]
Main Characteristics:

Diesel powered, Fire extinguishing, Remote controlled

Description:

The THERMITE RS3 is a firefighting robot designed to extinguish fires from the outside of a building. It is a remote controlled, diesel powered robot equipped with a water cannon capable of shooting a beam of water 100 meters horizontally and 50 meters vertically, with the option of using foam if needed. It has also (relatively recently, in 2020) been adopted by the Los Angeles Fire Department[42]. It can also be noted that Howe & Howe produces more robots with similar functionality[43].

Advantages:

  • Good extinguishing capabilities (up to 9464 liters per minute).
  • Good mobility for its size (up to 13 km/h and the ability to climb a 35% slope while weighing 1588 kg).
Disadvantages:
  • The robot was not designed to go inside buildings, so it has limited adaptability. Its dimensions are 2.14m x 1.66m x 1.63m (length x width x height), while a standard door is 0.80-0.90m x 2.00-2.10m (width x height)[43].

COLOSSUS

The COLOSSUS fire fighting robot[44]
Main Characteristics:

Battery powered, Fire extinguishing, Reconnaissance, Remote controlled

Description:

The COLOSSUS is a firefighting robot developed to fight fires in indoor and outdoor situations. It is a remotely controlled robot with some AI integration that provides driving assistance. Its main purpose is to extinguish fires in places too dangerous for firefighters to enter. The robot is battery powered (batteries last up to 12 hours) and has 9 interchangeable modules that can be used for different situations, making it very adaptable: it can be used for anything from reconnaissance to victim extraction. A notable instance of its use was the deployment alongside the Paris firefighters during the Notre-Dame fire in 2019, where the robots were used to cool down the inside of the cathedral in areas firefighters couldn't go because of falling debris[44]. The robot is now being used in 15 different countries. Shark Robotics produces more similar robots[45][46].

Advantages:
  • Relatively long operation time despite being battery powered (up to 12 hours in operational situations).
  • Capable of withstanding high temperatures (also waterproof and dust-proof).
  • Can be deployed outdoors and indoors (minimizing risk for firefighters). The robot's dimensions are 1.60m x 0.78m x 0.76m (length x width x height). It can also climb slopes of up to 40 degrees.
  • Different mountable modules for different situations (for instance a 180 degree video turret or a stretcher for transporting wounded people).
  • Good extinguishing capabilities (3000 liters per minute).
Disadvantages:
  • The robot has a very low top speed of 3.5 km/h (which is slower than average walking speed)[45].

X20 Quadruped Robot Dog

The X20 Quadruped Robot Dog[47]
Main Characteristics:

Battery powered, Reconnaissance, Autonomous and Remote controlled, Quadruped

Description:

The X20 Quadruped Robot Dog is a fully autonomous robot (with remote control capabilities) made to traverse difficult terrain such as ruins, piles of rubble and other complex terrain using its four legs. Furthermore, the robot was developed to detect hazards using its various sensors and cameras. It comes equipped with a bi-spectrum PTZ camera, a dynamic infrared camera, a gas sensor, a sound pickup and LiDAR. Using a SLAM algorithm it measures its environment and builds a 3D map of it. It also comes equipped with a lightweight robotic arm. The main job of this robot in a disaster area is reconnaissance, the results of which are sent directly back to the digital system. It can pick up sound from victims that it finds, make calls with them, and calculate the best pathways to safety[48]. Deep Robotics has produced more similar robots and even a newer model (the X30); however, the X20 was specifically marketed towards rescue operations, which is why we chose to look further into this older model (2021)[47][49].

Advantages:
  • The robot is relatively light (53kg) and can go up to 15 km/h*.
  • The robot can traverse difficult terrain (it is able to traverse 20cm high obstacles and climb stairs and 30 degree slopes) and is capable of operating indoors as well as outdoors. The robot's dimensions are 0.95m x 0.47m x 0.70m (length x width x height).
  • The robot is fully autonomous and can map the area as well as detect temperatures, toxic gases and victims, making it a good reconnaissance robot and making the job safer for firefighters.
Disadvantages:
  • The robot is battery powered and only lasts 2-4 hours.
  • Even though it can work under extreme conditions such as downpour, dust storms, frigid temperatures and hail, it is not specified whether it can work under extreme heat[50].

*It should be noted that some specifications of the robot varied quite significantly on the manufacturer's own website[50][48].

SkyRanger R70

The SkyRanger R70[51]
Main Characteristics:

Battery powered, Drone, Reconnaissance, Remote controlled, Semi-autonomous

Description:

The SkyRanger R70 was developed for a wide range of missions including, but not restricted to, fire scenes and search and rescue operations. The main goal of the drone is to give an unobstructed, wider view of what is happening in such a situation. It comes equipped with a detailed thermal camera, making it ideal for analyzing fires from a safe distance. The drone also carries an onboard computer, making it capable of using AI for object detection and classification. It is not stated explicitly, but it is implied that the drone is only meant for outdoor use, meaning that no scouting inside buildings can be done with this robot[52]. Teledyne FLIR makes more similar drones, thermal cameras and other products[53].

Advantages:
  • Being a drone this robot is very mobile, with top speeds of 50 km/h.
  • Can give a very detailed top down view, including thermal vision.
Disadvantages:
  • The drone is only meant for reconnaissance meaning that it can only carry and deliver payloads up to 2 kg.
  • Its battery only lasts up to 50 minutes.
  • Can only operate in temperatures up to 50 degrees Celsius, meaning that it has to keep a safe distance from the fire[52].

User analysis

The end goal for this project is to deliver a simulation of a remote controlled mapping robot that is capable of entering buildings that are on fire and providing mapping information regarding the layout of the building and where obstacles (including fire) and possibly victims are located.

The chosen programming environment for this task will be Unity.

For the realization of this simulation, the following MoSCoW list has been constructed:

Must have:

  • A simulation of a remote controlled firefighting robot in a fire scenario.
  • Simulate the ability for a robot to turn the information from the 3D environment into a 2D map.
  • Realistic fire and physics mechanics in the simulation.
  • A simulation of the ability to detect fires and add them to the map (making a heat map).

Should have:

  • A simulation of the ability to detect victims and add them to the map.
  • A simulation of the ability to detect obstacles and add this information to the map.

Could have:

  • The ability to communicate with or help victims.
  • The ability to work (fully) autonomously.

Will not have:

  • The ability to detect smoke and make a smoke map.
  • The ability to open doors.
  • The ability to rescue/extract victims.
  • The ability to fight fires.
  • The ability to perform complex communication and interaction beyond simple data sharing.

Reasoning behind chosen objectives

For this project we wanted to deliver a proof of concept for (parts of) a mapping robot to be used by firefighters in buildings. We decided to do this using a simulation, for several reasons:

  • Our group's composition: as our group consists of 4 computer scientists, making a programmed simulation costs less time learning new skills, so we can spend more time on the proof of concept itself.
  • A simulation makes it easier to test and tweak the robot's software. As we don't have the resources to test in an actual burning building, a controlled fire in a lab is the best possible way to test a live robot; this still takes time to set up and is hard to do repeatedly. Testing a simulation model is easier and so gives us more opportunities to test and tweak the robot.
  • Making a simulation instead of a real model gives the benefit of being able to ignore the hardware side of the robot. A real model must be built from hardware, and focus needs to be diverted to getting the right hardware in place. With a simulation we can focus on the parts of the robot we are going to give a proof of concept for.

We decided to make this simulation in Unity; some of the options we had and their advantages and disadvantages are listed in the next chapter of this wiki. We ultimately decided between Unity and ROS, and chose Unity for these reasons:

  • ROS requires Linux, which makes installing, setting up and using it a lot more difficult and time-consuming.
  • Unity has a lot more documentation and ready-made code than ROS, so in Unity we have more resources available to help us build the simulation.
  • Even though ROS has more ready-made features aimed at robot simulation, Unity has all the tools we need to build our simulation.

For our simulation we made a MoSCoW list that shows what is going to be included in the simulation.

Simulator Selection

Main choices for simulation environments:

Core simulation environment:

  • Multi-agent frameworks: NetLogo, RePast
  • ROS
  • Game Engine - Unreal Engine/Unity
  • FDS combined with an agent model
  • FDS combined with ROS/Unreal/Unity

Fire simulation:

  • Fire Dynamics Simulator (FDS)
  • Custom fire simulation
Discarded:

Swarm - outdated and inferior to NetLogo and RePast

NetLogo[54]
Advantages:
  • Simple and very versatile
  • Manual control possible
  • Well established in agent simulation
Disadvantages:
  • No integrated fire/smoke simulation
  • Cannot integrate actual sensor behavior

RePast

RePast[55]

RePast (Recursive Porous Agent Simulation Toolkit) is an open-source agent-based modeling and simulation (ABMS) toolkit for Java. It is designed to support the construction of agent-based models.

Advantages:
  • Can create very complex models
  • Allows the use of Java libraries
Disadvantages:
  • More complicated than Netlogo
  • Verbose code (a consequence of Java)

ROS/RViz

ROS with RViz[56]

RViz is a 3D visualization tool for ROS (Robot Operating System), which is commonly used in robotics research and development. ROS is an open-source framework for building robot software, providing various libraries and tools for tasks such as hardware abstraction, communication between processes, pathfinding, mapping and more.

Advantages:
  • Can simulate sensor data
  • Good 3D capabilities
  • Well-established in professional robot development
  • A variety of tools and plugins
Disadvantages:
  • Fire/smoke simulation not supported - outside implementation needed
  • Unfamiliar and complex
  • Requires Linux / difficult installation

FDS + (Custom)agent simulator

FDS[57]

Fire Dynamics Simulator (FDS) is a computational fluid dynamics (CFD) model of fire-driven fluid flow. FDS is a program that reads input parameters from a text file, computes a numerical solution to the governing equations, and writes user-specified output data to files.

FDS is primarily used to model smoke handling systems and sprinkler/detector activation studies, as well as for constructing residential and industrial fire reconstructions.

Advantages:
  • Professional system; Well established in the industry
  • Very accurate fire simulation
  • 3D
Disadvantages:
  • Pretty complicated to use
  • Computationally expensive
  • Couldn’t easily find resources for custom-agent behavior
Note:

PyroSim is a GUI which makes using FDS easy, but it is paid software unless a special offer is arranged for academic purposes.

Unreal Engine/Unity + FDS

Unity[58]

Data from FDS can be extracted and loaded into Unreal/Unity, which can then handle the agent simulation.

Advantages:
  • Abundance of resources for Unreal Engine/Unity
  • High flexibility
  • Ease of use of Unreal Engine/Unity combined with accurate simulation from FDS
  • Successful implementation in literature [31]
Disadvantages:
  • Computationally expensive
  • Data exporting might not be a simple process
  • Complicated to use FDS

ROS/RViz + FDS

Advantages:
  • Both well established professional software for their respective uses
  • Very high control and customizability
Disadvantages:
  • We did not find resources on successful implementations, but there is no reason why it shouldn't be possible.
  • Both complicated and unknown software (might be too much work)
  • Computationally expensive

Chosen Simulator explanation

The final choice was Unity as the core environment combined with FDS for the fire simulation.

For the core simulation environment, it was decided that multi-agent frameworks such as NetLogo and RePast are too simple and lack the physical simulation capabilities of Unreal, Unity or ROS. Among the latter choices, Unreal Engine and Unity hold the same value of being highly customizable, user-friendly and very well documented, while ROS has proper sensor and odometry implementations. The final choice was Unity due to its ease of use and customizability, as well as the previous experience of 2 team members with the software.

For the fire simulation FDS was chosen, as it is established software that provides highly accurate results. The downside of computational complexity was reduced by carefully selecting input and output parameters. A custom fire simulation would be at least as complicated to build and use as FDS, while providing inferior results.

Simulated robot specification

In order to achieve the objective of creating a robot that builds a 2D map of its environment, we decided on the following robot design:

  1. A rectangular chassis containing all of the microcontrollers, power supply and other electronics needed for the robot to function. The specific electronics contained in the chassis were ignored in the simulation. The chassis was simulated as a rectangular cuboid with the Rigidbody[59] component, which gives mass to the object and allows it to be simulated in the Unity physics engine.
  2. A differential drive wheel system. The robot has two motors attached to the front wheels and a free moving wheel at the back.
  3. An air temperature sensor on the chassis of the robot.
  4. A 2mm wave radar sensor on top of the chassis. This sensor serves the same functionality as a LiDAR sensor, with the added advantage that it is not affected by the extreme temperatures that can occur in a fire scenario.
  5. A camera that represents the point of view of the robot operator.

These are the five main components of the robot; further details on how each component was implemented in the simulation can be found in the subsections below.

FDS simulation

To generate fire data, we created the models described in the section Simulated environments in FDS. To avoid the major issue of computational complexity, we applied the following simplifications:

  1. We limited the acceleration of the fire to 10 seconds, i.e. we applied a fire accelerant to the fire for only the first 10 seconds of the simulation.
  2. We only produced fire data for a slice along the x-y dimensions, i.e. the output data was a plane parallel to the plane formed by the x-y dimensions. This plane was 1m above the origin along the z axis.

Following this procedure, we generated 5 minutes of data from 1 fire for each environment, with only the house having a second fire simulation. This gives a total of 5 minutes of simulation data for each of the school and office environments, and 10 minutes for the house environment.
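For illustration, a minimal FDS input fragment implementing these two simplifications might look as follows. The &SLCF, &RAMP and &SURF namelist groups are standard FDS syntax, but the job name, mesh, burner values and ramp shape here are invented for the example and are not taken from our actual model files:

```
Illustrative FDS input fragment (not our actual models).

&HEAD CHID='example_fire' /
&TIME T_END=300.0 /
&MESH IJK=96,96,24, XB=0.0,9.6,0.0,9.6,0.0,2.4 /

Accelerant burner, ramped off after the first 10 seconds (simplification 1):
&SURF ID='ACCELERANT', HRRPUA=1000.0, RAMP_Q='ACCEL_RAMP' /
&RAMP ID='ACCEL_RAMP', T=0.0,  F=1.0 /
&RAMP ID='ACCEL_RAMP', T=10.0, F=0.0 /
&OBST XB=4.0,5.0,4.0,5.0,0.0,0.4, SURF_ID='ACCELERANT' /

Single temperature slice parallel to the x-y plane at z = 1 m (simplification 2):
&SLCF PBZ=1.0, QUANTITY='TEMPERATURE' /

&TAIL /
```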

FDS data transfer

The files generated by FDS are read using the ReadFds.ipynb notebook, which can be found in the Resources folder of the source code (link here). The Python library fdsreader is used to read and transform the data, which is then printed as a CSV in the following format:

First line: Number of timestamps, Number of sample points along the X axis, Number of sample points along the Z axis

Second line: X dimension, Z dimension

Third line: Timestamps of snapshots

Lines 4 and after: Snapshots of the temperature data along the slice of space, in the following format: the value at position i * X + j corresponds to the data point at position (i, j) in the snapshot, where X is the number of sample points along the X axis.

The data is then read by Unity in the ReadHeatData.cs[1] script according to this format and stored in a 3-dimensional array with dimensions time, x and z.
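A minimal C# sketch of a reader for this format is shown below. It mirrors the behaviour described above but is not the actual ReadHeatData.cs; the class, field and method names are illustrative:

```csharp
using System.Globalization;
using System.IO;

// Sketch of a reader for the CSV format described above (names illustrative).
public class HeatDataReader
{
    public float[] timestamps;      // one entry per snapshot (line 3)
    public float[,,] temperatures;  // indexed as [time, x, z]

    public void Load(string path)
    {
        string[] lines = File.ReadAllLines(path);

        // Line 1: number of timestamps, sample points along X, sample points along Z.
        int[] counts = ParseInts(lines[0]);
        int t = counts[0], x = counts[1], z = counts[2];

        // Line 2 holds the physical X and Z dimensions (used for world scaling).
        // Line 3: the timestamps of the snapshots.
        timestamps = ParseFloats(lines[2]);

        // Lines 4 and after: one snapshot per line; value i * X + j maps to cell (i, j).
        temperatures = new float[t, x, z];
        for (int k = 0; k < t; k++)
        {
            float[] snapshot = ParseFloats(lines[3 + k]);
            for (int i = 0; i < z; i++)        // snapshot row
                for (int j = 0; j < x; j++)    // column along the X axis
                    temperatures[k, j, i] = snapshot[i * x + j];
        }
    }

    static int[] ParseInts(string line) =>
        System.Array.ConvertAll(line.Split(','), s => int.Parse(s.Trim()));

    static float[] ParseFloats(string line) =>
        System.Array.ConvertAll(line.Split(','),
            s => float.Parse(s.Trim(), CultureInfo.InvariantCulture));
}
```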

Heat sensor

The heat sensor simulation was designed to mimic a temperature probe at a height of 1 meter. The data generated by FDS is put into a 3-dimensional array with dimensions time, x and z. The exact world position of the heat sensor object on the robot is taken and converted to array indices using a linear transformation function, derived from the difference in scale between the Unity environment and the FDS model. The time since the start of the simulation, together with the translated coordinates, is used to get a heat measurement.

The data is read perfectly, with no noise introduced, as from the interview it was concluded that there is no need for high accuracy or precision. The important aspect is to determine the general temperature distribution and identify dangerous areas. For that we did not need to simulate the inaccuracies of a temperature probe.
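The lookup can be sketched as follows, reusing the HeatDataReader from the previous sketch. The origin offset and cell size fields are assumptions standing in for the actual linear transformation parameters:

```csharp
using UnityEngine;

// Sketch of the heat sensor lookup (field names and defaults are assumptions).
public class HeatSensor : MonoBehaviour
{
    public HeatDataReader data;        // temperatures[time, x, z] and timestamps
    public Vector2 fdsOrigin;          // world position of the FDS grid origin (assumed)
    public float metersPerCell = 0.1f; // FDS output grid spacing (assumed)

    public float ReadTemperature()
    {
        // Linear transform: world position -> grid indices.
        int ix = Mathf.RoundToInt((transform.position.x - fdsOrigin.x) / metersPerCell);
        int iz = Mathf.RoundToInt((transform.position.z - fdsOrigin.y) / metersPerCell);
        ix = Mathf.Clamp(ix, 0, data.temperatures.GetLength(1) - 1);
        iz = Mathf.Clamp(iz, 0, data.temperatures.GetLength(2) - 1);

        // Use the snapshot closest to the current simulation time.
        int it = NearestSnapshot(Time.timeSinceLevelLoad);

        // No noise is added: per the interview, coarse readings are sufficient.
        return data.temperatures[it, ix, iz];
    }

    int NearestSnapshot(float time)
    {
        int best = 0;
        for (int k = 1; k < data.timestamps.Length; k++)
            if (Mathf.Abs(data.timestamps[k] - time) < Mathf.Abs(data.timestamps[best] - time))
                best = k;
        return best;
    }
}
```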

Wheel system

For this robot we selected a differential drive system with two front wheels connected to motors and a free moving caster wheel at the back of the chassis. The front wheels are represented in Unity as solid cylindrical objects with the RigidBody[59] component. The connection of the wheels to the robot chassis was handled by adding a HingeJoint[60] component. With the hinge joint we restrict the wheels to move as if they were connected to the chassis of the robot, while still allowing them to rotate freely around the Y-axis of the joint. Additionally, by default hinge joints in Unity have a maximum angle of rotation. This is not desirable for our use case, since we want our wheels to rotate without restriction; we circumvented it by setting the "Min" and "Max" limit parameters of the HingeJoint to 0, which allows continuous rotation.

The back "free-spinning" wheel was designed to simulate a "ball-wheel"[61] . It was modeled using a sphere and RigidBody[59] component. This wheel, unlike the front wheels needs to rotate freely around every axis in order to allow the robot to freely turn. This extra requirement necessitated the use of more complex ConfigurableJoint [62]. With this joint we connected the wheel to the main chassis and locked the movement of the wheel across every axis but we allowed for free rotation.

We also need to move our robot. To do this we simulated the front wheel movement using the "Motor" function of the HingeJoint[60] component. The HingeJoint motor has two settings: torque and velocity. We set the torque of the motors to a value high enough for the motors to move the robot, and we connected the velocity setting to the user's keyboard input. With this we can control the movement of the robot using the WASD keys. The code that handles the movement of the robot can be found in the MoveWheels2.cs[1] script inside our Unity project.
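A minimal sketch of this control scheme is given below. It uses the standard Unity HingeJoint motor API, but the field names and values are illustrative rather than copied from MoveWheels2.cs:

```csharp
using UnityEngine;

// Sketch of differential-drive control via HingeJoint motors (values illustrative).
public class WheelControllerSketch : MonoBehaviour
{
    public HingeJoint leftWheel;
    public HingeJoint rightWheel;
    public float maxSpeed = 360f; // target angular velocity in degrees/second
    public float torque = 100f;   // motor force; too high a value flipped our light chassis

    void Update()
    {
        float forward = Input.GetAxis("Vertical");   // W/S keys
        float turn = Input.GetAxis("Horizontal");    // A/D keys

        // Differential drive: turning comes from a speed difference
        // between the two motorized front wheels.
        Drive(leftWheel, (forward + turn) * maxSpeed);
        Drive(rightWheel, (forward - turn) * maxSpeed);
    }

    void Drive(HingeJoint wheel, float targetVelocity)
    {
        // JointMotor is a struct: copy it, modify it, then assign it back.
        JointMotor motor = wheel.motor;
        motor.force = torque;
        motor.targetVelocity = targetVelocity;
        wheel.motor = motor;
        wheel.useMotor = true;
    }
}
```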

An interesting issue that occurred during development was that the torque setting of the wheel motors was too high, which caused the robot to flip upside down since the chassis was too light. This issue was exacerbated by the different friction values of the floors in the different environments. In the end we resolved it by increasing the weight of the chassis.

2mm Wave Radar Sensor

View from main robot camera

Although we officially call this sensor the 2mm Wave Radar Sensor, the unofficial term we used during development was simply the "LiDAR sensor", because for the purposes of the simulation the behaviour of the radar and a LiDAR is the same. We want our LiDAR/radar sensor to generate a 2D array of scan points of the surrounding obstacles at a given frequency. We also want to simulate the physical nature of a LiDAR, which spins at a certain velocity and takes measurements at a certain frequency. Finally, since real-world LiDAR data is not perfect, we also want to include these imperfect measurements in our simulation. All of this functionality was implemented in the LidarScan.cs[1] script inside our Unity project. This script sends a predefined number of rays[63] in a circle around the radar. The rays travel a predetermined distance, and if they hit an object closer than that distance they report it back to the LiDAR. In the end we generate an array of measurements where each entry contains the angle at which the measurement was taken and the distance of the object that was detected; if no object was detected, there is no entry in the measurement array. Finally, we perturb each entry with values sampled from a uniform random distribution. We can freely tweak the maximum scan distance of the LiDAR, the number of measurements taken per scan and the strength of the added random noise.
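The scan loop can be sketched as follows; this is an illustration of the described approach, not the actual LidarScan.cs, and the parameter values are placeholders:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of the raycast-based LiDAR/radar scan described above.
public class LidarScanSketch : MonoBehaviour
{
    public int raysPerScan = 360;        // number of measurements per full rotation
    public float maxDistance = 10f;      // maximum scan range in meters
    public float noiseAmplitude = 0.05f; // strength of the uniform noise

    // Returns (angle in degrees, measured distance) for every ray that hit something.
    public List<Vector2> Scan()
    {
        var measurements = new List<Vector2>();
        for (int i = 0; i < raysPerScan; i++)
        {
            float angle = i * 360f / raysPerScan;
            Vector3 dir = Quaternion.Euler(0f, angle, 0f) * transform.forward;

            // Rays that hit nothing within range produce no entry in the array.
            if (Physics.Raycast(transform.position, dir, out RaycastHit hit, maxDistance))
            {
                // Uniform noise models the imperfect real-world sensor.
                float noisy = hit.distance + Random.Range(-noiseAmplitude, noiseAmplitude);
                measurements.Add(new Vector2(angle, noisy));
            }
        }
        return measurements;
    }
}
```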

Camera

Fire particles in a burning room

In the real world, the robot would use an infrared camera for two main reasons: first, it can detect the temperature of solid surfaces, and second, it can see through smoke, which occurs quite often in a house fire scenario. However, we chose not to simulate the infrared camera in Unity. None of the current algorithms depend on infrared camera input, only on the heat sensor and the LiDAR. Additionally, implementing the infrared camera would greatly increase the computational complexity of the simulation, since we would need FDS to precalculate (in great detail) the surface temperature of the solids in the room and would also need to render these temperatures on the camera. Since the frame rate of the simulation was not very high (30 FPS), we decided against simulating a full-blown infrared camera. Instead, to still represent what the robot operator can see while controlling the robot, we attached an object with a Camera component[64].

To visualize the heat distribution during the simulation for demonstration purposes, we use the Unity particle system. For each point (x, z) generated by FDS we instantiate an object at the corresponding real world coordinates. The object emits red semi-transparent particles at a rate proportional to the heat data point at the given (x, z) coordinates and time. This is not accurate to a real world fire, but serves as a visual aid when working with and testing the simulation.
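A sketch of this visualization is shown below; the prefab, scaling factor and update entry point are assumptions, not the project's actual code:

```csharp
using System;
using UnityEngine;

// Sketch of the particle-based heat visualization: one emitter per FDS grid
// point, with the emission rate scaled by the local temperature.
public class HeatParticlesSketch : MonoBehaviour
{
    public ParticleSystem emitterPrefab;     // red, semi-transparent particle prefab
    public float particlesPerDegree = 0.1f;  // emission rate per degree (assumed)

    ParticleSystem[,] emitters;

    // snapshot: temperatures for the current time step; gridToWorld maps
    // grid indices to Unity world coordinates.
    public void UpdateEmitters(float[,] snapshot, Func<int, int, Vector3> gridToWorld)
    {
        int nx = snapshot.GetLength(0), nz = snapshot.GetLength(1);

        // Lazily instantiate one emitter per (x, z) data point.
        if (emitters == null)
        {
            emitters = new ParticleSystem[nx, nz];
            for (int i = 0; i < nx; i++)
                for (int j = 0; j < nz; j++)
                    emitters[i, j] = Instantiate(emitterPrefab, gridToWorld(i, j),
                                                 Quaternion.identity);
        }

        for (int i = 0; i < nx; i++)
            for (int j = 0; j < nz; j++)
            {
                // EmissionModule is a struct handle into the particle system,
                // so assigning rateOverTime takes effect immediately.
                var emission = emitters[i, j].emission;
                emission.rateOverTime = snapshot[i, j] * particlesPerDegree;
            }
    }
}
```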

Simulated environments

The house simulation environment
The school simulation environment

One of the biggest advantages of simulating the fires was that multiple environments could be built to test the robot's performance under different circumstances. It was chosen to build three different environments for the project that could each challenge the robot in different ways. A house, a school and an office building were chosen because these seemed like very plausible locations for an actual fire.

House

The house was the first and smallest environment that was constructed. It has a small and simplistic layout and was chosen as the first environment to test the robot in the early stages of its development and to learn how to work with Unity. It consists of a living room/kitchen, another smaller living room, a bathroom and two bedrooms. It was a useful environment for testing, especially early on, and its relatively small size also meant that computation times were a lot shorter. It was also with this environment that it was concluded that 3D fire was not practical or necessary for this project, because computation time was too long even for this smaller environment. A house is also a very realistic place for the robot to be used, because firefighters are less likely to have building plans for houses than for larger buildings, meaning that a mapping robot would be ideal.

School

The school was the second simulation environment that we made. The idea of this environment was to expand on the simplistic environment of the house and add more complexity. This complexity stems from the larger scale and more complex layout of the school. Firstly, long narrow hallways could challenge the capabilities of the mmWave radar and of the mapping algorithms. Secondly, the layout was made so that some rooms lead to other rooms, making the resulting map a lot more complex than its predecessor. We were interested to see if our mapping algorithm would be able to handle these more complicated layouts. The environment consists of 7 classrooms, the main hallway/cafeteria, 3 toilets, the teachers' room, 2 storage rooms and 2 smaller hallways.

Office

The office simulation environment

The office is the third and last simulation environment that was made for the project. An office building was chosen because the environments until now consisted of smaller rooms connected to each other: the house with its small rooms, and the school with its relatively small classrooms and narrow, well-defined hallways. The goal of the office building was therefore to challenge the robot with larger open spaces, which could be difficult to deal with because of the limited range of the mmWave radar. It was also of interest because we wanted to see how differently a fire would spread through larger open areas instead of the relatively closed spaces of the other environments. In this environment the cafeteria is connected all the way to the work spaces, making one long main space with smaller meeting rooms connected to it. Larger environments were possible in this part of the project because the choice of 2D fire had already been made, which cut computation time down considerably and meant that the scale could be increased by a lot. The environment consists of the large cafeteria connected to the working spaces, 6 toilets, 3 separate meeting rooms, 1 kitchen and 2 storage rooms.

Future environments

More simulation environments could have been built, but for the purposes of this project the current ones were deemed sufficient. It should be noted that with the foundation laid in this project it would be fairly easy to add more environments and simulate their fires if someone were to continue this project.

Mapping algorithms

SLAM

The map generated by the SLAM algorithm along with the heat data measured by the robot.

SLAM[65] (simultaneous localization and mapping) is a technique for constructing a map of the robot's environment (i.e., where the obstacles and empty space around the robot are) while, at the same time, determining the location of the robot inside that environment.

In our simulation, data from an emulated LiDAR-like sensor provides the input to the SLAM algorithm: a set of observations, each consisting of the distance of an obstacle relative to the sensor and the angle at which that distance was obtained. This is provided to the algorithm at regular time steps. This information is combined with a series of controls (i.e., movement instructions) given to the robot and odometry information (i.e., how far the robot has moved based on its own sensors). Using this information, the SLAM algorithm updates both the state of the robot (i.e., its location) and the map of the environment using probabilistic techniques.

Two SLAM algorithms were ported to Unity: CoreSLAM[66], an efficient and fast SLAM algorithm more suitable for systems with limited resources, and a C# port of the HectorSLAM[67] algorithm, which is part of the functionality of ROS.

The starting point for our implementation was a public code base[68], which was adapted to compile and optimized for the .NET version used by Unity. Several adaptations were needed to use the SLAM code in Unity, since the original CoreSLAM and HectorSLAM implementations were made for a different version of .NET. To do this we ported the source code into our Unity project (the SLAM code can be found in the Assets/SLAM folder). We then adapted the SLAM algorithms to accept the input from our LiDAR sensor. We did this for both SLAM implementations, in the BaseSlam.cs[1] and HectorSlam.cs[1] scripts respectively. In these scripts we translate the output of the LiDAR sensor and feed it into the SLAM algorithms, which generate an obstacle map of a given resolution and size; both can be controlled as parameters before starting the simulation.

The map generated by the SLAM algorithms was translated from raw byte data (the likelihood of there being an obstacle) to a dynamic image shown on the screen. Initially this was done by calling the SetPixel[69] method of a texture[70] for an image[71] in the simulation UI. However, this method of rendering the map was extremely slow for larger map resolutions, since each pixel had to be drawn by the CPU, and as a result the frame rate of the simulation was very low (5-10 FPS). This was fixed by using a ComputeShader[72], which uses the GPU to draw each pixel of the texture in parallel based on the likelihood predicted by the SLAM algorithm. The code for the shader is in LidarDrawShader.compute[1] in the Assets folder of our Unity project. This method drastically increased our frame rate to around 70 FPS.
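The C# side of this GPU approach can be sketched as follows. The Unity compute shader API calls are standard, but the kernel and buffer names here are illustrative; the project's actual shader is LidarDrawShader.compute:

```csharp
using UnityEngine;

// Sketch of the GPU map rendering: upload the SLAM occupancy data to a buffer
// and let a compute shader color one pixel per cell, all in parallel.
public class MapRendererSketch : MonoBehaviour
{
    public ComputeShader drawShader;  // e.g. compiled from LidarDrawShader.compute
    public int mapSize = 512;         // map resolution in cells (= pixels)

    RenderTexture mapTexture;
    ComputeBuffer occupancyBuffer;
    int kernel;

    void Start()
    {
        mapTexture = new RenderTexture(mapSize, mapSize, 0);
        mapTexture.enableRandomWrite = true;  // required for compute shader writes
        mapTexture.Create();

        occupancyBuffer = new ComputeBuffer(mapSize * mapSize, sizeof(int));
        kernel = drawShader.FindKernel("CSMain");            // kernel name assumed
        drawShader.SetTexture(kernel, "Result", mapTexture); // shader bindings assumed
        drawShader.SetBuffer(kernel, "Occupancy", occupancyBuffer);
    }

    // occupancy: per-cell obstacle likelihoods produced by the SLAM algorithm.
    public void Redraw(int[] occupancy)
    {
        occupancyBuffer.SetData(occupancy);
        // One thread per pixel in 8x8 groups: the whole map is drawn in parallel.
        drawShader.Dispatch(kernel, mapSize / 8, mapSize / 8, 1);
    }

    void OnDestroy()
    {
        occupancyBuffer.Release();
    }
}
```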

Besides showing the obstacle data, we also need to visualize the temperature measurements on our mini-map. This was done in two ways. First, we display the current temperature measurement as text on our user interface, so the operator can clearly see if the robot is in danger of overheating. Second, we draw a "heat path" of the robot displaying the temperatures measured during the exploration of the environment. Older measurements slowly fade out as the transparency of their colors is increased. To implement this we again used the SetPixels[69] method for a 2D texture in Unity. The code for displaying the heat data can be found in LidarDrawer.cs[1] (not the best name). The final result of the mini-map can be seen in the image on the left.
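A sketch of the fading heat path is given below, assuming a CPU-side texture as described; names and the fade rate are illustrative, not taken from LidarDrawer.cs:

```csharp
using UnityEngine;

// Sketch of the fading "heat path": every update the alpha of all existing
// pixels is reduced slightly, then the robot's current map cell is stamped
// with a color derived from the measured temperature.
public class HeatPathSketch : MonoBehaviour
{
    public Texture2D heatTexture;       // overlaid on the mini-map UI image
    public float fadePerStep = 0.005f;  // fraction of alpha removed per update (assumed)
    public float maxTemperature = 400f; // temperature mapped to full red (assumed)

    public void DrawStep(int px, int pz, float temperature)
    {
        // Fade older measurements by increasing their transparency.
        Color32[] pixels = heatTexture.GetPixels32();
        for (int i = 0; i < pixels.Length; i++)
            pixels[i].a = (byte)Mathf.Max(0f, pixels[i].a - 255f * fadePerStep);
        heatTexture.SetPixels32(pixels);

        // Stamp the current measurement: blue (cool) through red (hot).
        Color c = Color.Lerp(Color.blue, Color.red, temperature / maxTemperature);
        heatTexture.SetPixel(px, pz, c);
        heatTexture.Apply();  // upload the modified pixels to the GPU
    }
}
```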

Further work

There are quite a few things that we initially wanted to do but couldn't due to time constraints. Below are some features and research directions that could continue our project. Firstly, the parts of the robot we ignored for this project can be looked into, such as the movement of the robot and, with that, its path-finding, as well as the physical aspect of the robot: currently the robot in the simulation is invincible and doesn't influence its environment. In a real firefighting situation this will not be the case, so to get closer to the design of an actual scouting robot for firefighters, research needs to be done into the materials and physical capabilities of such a robot.

Secondly, there are a lot of features that could be added to the robot that didn't fit in the scope of this course. The biggest feature we did not include in our simulation is victim detection; in our user analysis this feature was pointed out as one of the most important next to mapping. Adding a victim detection algorithm to the simulation would be one step closer to realizing an actual robot. Another possibility we encountered near the end of the course was to include the heat sensor data in the SLAM algorithm. We didn't have time to add this to the simulation at that point, but using the heat data could improve the mapping and thus further improve the functionality of the robot. If all of this is realized, one can look at what is ultimately the end goal of the research: actually building a fire reconnaissance robot to assist firefighters in dangerous indoor fires and to help preserve the lives of victims and firefighters.

Appendix

Appendix 1; Logbook

Logbook
Week Name Hours spent Total hours
1 Dimitrios Adaos Introductory Lecture (2h), Meeting (1h), Brainstorm (0.5h), Find papers (2h), Read and summarize papers (8h), Wrote Introduction (6h) 19.5h
Wiliam Dokov Introductory Lecture (2h), Meeting (1h), Brainstorm (0.5h), Find papers (3h), Summary (7h) 13.5h
Kwan Wa Lam Introductory Lecture (2h), Meeting (1h), Brainstorm (0.5h), Find papers(1h), Read and summarize papers (7h) 11.5h
Kamiel Muller Introductory Lecture (2h), Meeting (1h), Brainstorm (0.5h), Find papers(1h) 4.5h
Georgi Nihrizov Introductory Lecture (2h), Meeting (1h), Brainstorm (0.5h), Find papers (2h), Read and summarize papers (8h) 13.5h
Twan Verhagen Introductory Lecture (2h), Meeting (1h), Brainstorm (0.5h), Find papers (1h) 4.5h
2 Dimitrios Adaos Weekly evaluation (0.5h), Meeting (2h), Meeting (1h), Interview with firefighter (2h), Processing interview questions (2h) 7.5h
Wiliam Dokov Weekly evaluation (0.5h), Meeting (2h), Meeting (1h), Interview with firefighter (2h), Researching simulation options (1h) 6.5h
Kwan Wa Lam Meeting (1h), Work on Wiki page (2h), Literature Research (3h), User Analysis (1.5h), Reviewing Wiki(1h) 8.5h
Kamiel Muller Weekly evaluation (0.5h), Meeting (2h), Correspondence firefighting station (0.5h), Meeting (1h), Work on Wiki page (2h) 6h
Georgi Nihrizov Weekly evaluation (0.5h), Meeting (2h), Meeting (1h), Research Simulation Environments (8h) 11.5h
Twan Verhagen Weekly evaluation (0.5h), Meeting (2h), Meeting (1h), Reviewing Wiki (1h), Researching Literature (3h) 7.5h
3 Dimitrios Adaos Weekly evaluation (0.5h), Meeting (2h), Meeting (1.5h), Experimenting with Unreal Engine 5 (2h), Installing FDS and converting the data from it to a usable format (6h), Installing ROS and RViz (Windows) (0.5h) 12.5h
Wiliam Dokov Weekly evaluation (0.5h), Meeting (2h), Meeting (1.5h), Installing Ros (On windows) (1h), Researching viable Ros + Gazebo simulation methods for windows (2h), Trying to implement Gazebo simulation on Windows (2h), Doing ROS basic tutorial (2h), Doing Gazeebo basic tutorial and figuring out how plugins work (3h) 14h
Kwan Wa Lam Weekly evaluation (0.5h), Meeting (2h), Meeting (1.5h), Researching Literature(6h), Adding Literature to Wiki(4h) 14h
Kamiel Muller Weekly evaluation (0.5h), Meeting (2h), Meeting (1.5h), Researching Literature(5h), Adding Literature to Wiki and updating segments introduction of Wiki (4h) 13h
Georgi Nihrizov Weekly evaluation (0.5h), Meeting (2h), Meeting (1.5h), FDS research (8h) 12h
Twan Verhagen Weekly evaluation (0.5h), Meeting (2h), Meeting (1.5h), adding labels to wiki(2h), research into sensors (3h) 9h
4 Dimitrios Adaos Weekly evaluation (0.5h),Discussion (1h), Thursday Meeting (1h), Familiarization (Watching tutorials etc.) with Unity (4h), Python programming for translation of FDS data (2h), Programming in Unity (4h) 12.5h
Wiliam Dokov Weekly evaluation (0.5h), Discussion (1h), Thursday Meeting (1h), Setting up basic scene in Unity (1h), Setting up basic robot (3h), Setting up controls (3h), Setting up Lidar (6h) 15.5h
Kwan Wa Lam Weekly evaluation (0.5h), Discussion (1h), Thursday Meeting (1h), Setting up and learning Unity (5h), Finding a building plan and making virtual environment (4h) 11.5h
Kamiel Muller Weekly evaluation (0.5h), Discussion (1h), Thursday Meeting (1h), Setting up Unity and getting acquainted with it (5h), Making a start on making robot movement (3h) 10.5h
Georgi Nihrizov Weekly evaluation (0.5h), Discussion (1h), Heat Sensor Research (4h), Heat Sensor simulation (6h) 11.5h
Twan Verhagen Weekly evaluation (0.5h), Discussion (1h), Thursday meeting (1h), Setting up the correct version of Unity and exploring it (5h), Exploring minimaps and ways to show results on them in Unity (4h) 11.5h
5 Dimitrios Adaos Weekly evaluation (0.5h), Discussion (2h), Sunday Meeting (1h), Explore alternate possibilities for fire simulation (3h), experimenting with FDS abstraction level (3h), FDS model creation/description (8h) 17.5h
Wiliam Dokov Weekly evaluation (0.5h), Discussion (2h), Sunday Meeting (1h), Learning how lidars work (3h), Setting up lidar sensor in Unity (3h), Learning how differential drive odometry works (3h), Trying to port motor encoder sensors to Unity (unsuccessfully) (5h) 17.5h
Kwan Wa Lam Weekly evaluation (0.5h), Discussion (2h), Sunday Meeting (1h), Finding and modifying building plan (1h), Building environment in Unity and adding furniture (6h), Trying to retrieve progress (1h) 11.5h
Kamiel Muller Weekly evaluation (0.5h), Discussion (2h), Sunday Meeting (1h), doing further research into Unity environments(4h), getting started on making a school based environment (3h) 10.5h
Georgi Nihrizov Weekly evaluation (0.5h), Discussion (2h), Sunday Meeting (1h), Combining all modules and adding finished scenes (8h), Improving code quality (3h) 14.5h
Twan Verhagen Weekly evaluation (0.5h), Discussion (2h), Research into creating a map and testing in Unity (6h), Adding reasoning to the wiki (2h) 10.5h
6 Dimitrios Adaos Weekly evaluation (0.5h), Discussion (2h), Sunday Meeting (1h), FDS model creation/description (2.5h), Studying SLAM (4h), Investigating SLAM implementations (4h), Porting SLAM implementation to Unity (6h) 19h
Wiliam Dokov Weekly evaluation (0.5h), Discussion (2h), Sunday Meeting (1h), Working more on motor encoders in Unity (3h), Learning about SLAM (4h), Cleaning up Unity project code so we can test different algorithms (3h) 13.5h
Kwan Wa Lam Weekly evaluation (0.5h), Discussion (2h), Sunday Meeting (1h), Finding and modifying building plan for office building (1h), Installing and getting acquainted with GitHub (1h), Building the office environment (6h) 11.5h
Kamiel Muller Weekly evaluation (0.5h), Discussion (2h), figuring out how to import assets into Unity (2h), fully creating a school based environment (7h) 11.5h
Georgi Nihrizov Weekly evaluation (0.5h), Discussion (2h), Basic minimap using lidar (6h), Adding new scenes (4h), Combining heat sensor with slam (3h) 15.5h
Twan Verhagen Weekly evaluation (0.5h), Discussion (2h), Meeting about the minimapping (1h), Research into using a canvas for a minimap (3h), Research into 2 methods to create minimap (5h) 11.5h
7 Dimitrios Adaos Weekly evaluation (0.5h), Discussion (2h), Fixing porting issues with SLAM (6h), Rework FDS models (1h) 9.5h
Wiliam Dokov Weekly evaluation (0.5h), Discussion (2h), Looking how to do a SLAM minimap in Unity (3h), Porting working Base SLAM into Unity (5h), Porting working Hector SLAM into Unity (3h) 13.5h
Kwan Wa Lam Weekly evaluation (0.5h), Discussion (2h), Fix second environment (1h), Workout question and summary (3h), Make PowerPoint (1h), Add to questions (2h), Work on wiki (3h) 12.5h
Kamiel Muller Weekly evaluation (0.5h), Discussion (2h), Finish third environment (2h), Workout question and summary (3h), work on wiki (3h) 10.5h
Georgi Nihrizov Weekly evaluation (0.5h), Discussion (2h), Standardizing and improving heat sensor(5h), Improving robot controller (2h), Cleaning up (4h) 13.5h
Twan Verhagen Weekly evaluation (0.5h), Discussion (2h), Edit the wiki (5h), Prepare for presentation (3h) 10.5h
8 Dimitrios Adaos Discussion (2.5h), Meeting(1.5h), Preparing script for presentation (2h), Preparing slides for presentation (2h), Practicing for presentation (1h), Presentation(2h), Finalizing wiki (8h) 19h
Wiliam Dokov Discussion (2.5h), Meeting(1.5h), Presentation(2h), Learning about compute shaders in order to improve frame rate of simulation (3h), Combining heatmap and slam map data into one (3h), Working on simulation UI (2h), Fixing various bugs before presentation (3.5h), Finalizing wiki page (10h) 27.5h
Kwan Wa Lam Discussion (2.5h), Meeting(1.5h), Presentation(2h), Doing peer review (0.5h), Rewriting introduction (3h), Changing wiki layout (2h), Adding pictures + fixing references + adding descriptions (2h), Writing simulated environments House + School + Office (3h) + Finalizing wiki (4h) 20.5h
Kamiel Muller Discussion (2.5h), Meeting(1.5h), Presentation(2h), Finalizing wiki (5h) 11h
Georgi Nihrizov Discussion (2.5h), Meeting(1.5h), Presentation(2h), Finalizing wiki (6h), Finalizing simulation (3h), Adding particles for visualization (6h) 21h
Twan Verhagen Discussion (2.5h), Meeting(1.5h), Presentation(2h), Finalizing wiki (5h) 11h

Appendix 2; Project Summary

Week 1

In the first week every team member came up with multiple ideas for the project. After that there was a team meeting where the most interesting one was selected, which happened to be firefighting robots. It was also decided that a simulation was the most practical way to reach our set goals for the robot (mainly making an accurate map without prior knowledge and adding a heat map on top). After making these first big decisions, the team started looking into literature on firefighting robots: what sensors they use, path planning algorithms, mapping algorithms, etc. These papers were summarized and noted down in a way that they could easily be used later in the project.

Week 2

The most important thing that happened in the second week was the interview with the firefighters. This helped a lot in building our vision of what a robot should be able to do to actually be useful for the people that work in this profession. Next to that, a large part of the wiki and its structure was made during this week. Some more research was done on the general subject, and a good start was made on researching what simulation environments exist and which ones would be optimal for our project.

Week 3

No definitive choice had been made yet on what simulation environment to use. The choices had been narrowed down a lot, and we experimented with the remaining options to see which one would be best for this project. The main focus of research this week was on: Unreal Engine, FDS, ROS, Gazebo, sensors, and state-of-the-art firefighting robots. The wiki was only updated with some of this research.

Week 4

In this week it was finally decided to go with Unity and FDS for the simulation. This week and the coming weeks were mostly defined by actually writing the code for all the things the robot needed to do and the environments it needed to be placed in; more effort was put into this than into noting the progress down in the wiki. Everyone who wasn't familiar with Unity yet did at least a basic tutorial and got familiar with it. Further work was also done on FDS, setting up the mmWave radar in Unity, building the first environment in Unity, looking into the robot's movement, setting up a heat sensor in Unity, and setting up mini-maps in Unity.

Week 5

In this week the first environment was finished in Unity and the first version of the mmWave radar was successfully tested in it, although at this point the mmWave radar still had perfect accuracy and the minimap did not yet provide lasting data. Some testing was done regarding how much accuracy was needed for the FDS data, as with the near-perfect accuracy used at that point it would take far too long to actually run the simulation. Research was also done into how to import the fire data from FDS into Unity. The main focus for the following weeks was to create fully functioning minimaps, combine the data from the mmWave radar and heat sensor, integrate a SLAM algorithm and build more environments in Unity.

Week 6

In this week the second Unity environment and an FDS model for the first Unity environment were created. A final decision regarding the accuracy of the FDS data was also made: only a 2D slice of the fire data would be taken, in order to save computational power. While this limits the accuracy of the fire data, our project mainly focuses on the robot detecting the fire, so having perfect fire data is less relevant. Additionally, working minimaps for both the mmWave radar and the heatmap were made, and a start was made on the research into SLAM.

Week 7

This week marks the point where the sub-projects started in previous weeks or this week were finalized, with the exception of SLAM. These include: creation of the third and final Unity environment, FDS models with their respective fire data for both the second and third Unity environments, and a fusion of the data provided by the mmWave radar (and its minimap) and the heatmap. The only things still to be done were to fully combine all the separate parts and models into a final product and to finish the research into SLAM.

Week 8

In the final week, the data from the FDS models was successfully transferred to the Unity environments. In addition, the SLAM algorithm was finished. The rest of this week was mostly focused on preparing for the final presentation and actually giving said presentation. The only thing that still needed to be done after the presentation was cleaning up the wiki and adding all of the source code and documentation to the wiki.

Appendix 3; Firefighter Interview

Question: What does a typical firefighting mission look like? How do you gather information? Do you work in a team? What are your objectives and priorities (Search for people, extinguish fire)?

Answer: We usually walk around the building to find the location of the fire and make sure that the fire does not spread. We also make sure that all doors and windows are closed to prevent any air currents from causing a back-draft that leads to an explosion. We then determine whether we can take an aggressive approach to extinguishing the fire, by entering the building, or a passive/defensive approach, by trying to extinguish it from outside. When we do enter the building, our first priority is looking for people, then extinguishing; we do this to avoid the risk of people inhaling the smoke produced by extinguishing the flames. The most important things to know are the characteristics of the building in which the fire is located, and to find out this information we usually ask the people in the area. We also generally need to know how much water the fire needs: generally 1 couch needs 1 hose of water to extinguish.

Question: What usually goes wrong during these missions?

Answer: We use portable phones for communication inside the buildings and so communication can be an issue. It is also difficult to find out which way we need to go due to smoke/visibility issues.

Question: What are the main causes of failures during a firefighting mission?

Answer: Flammable materials that can cause explosions are the most dangerous, and we need to make sure we are nowhere near them when we discover them. There is also the danger of running out of water, at which point we have to give up on extinguishing the fire. Also, metal and concrete buildings that contract due to high temperatures are likely to collapse after about an hour of being aflame.

Question: Do you have any ideas on how to prevent these issues?

Answer: When we encounter containers of flammable materials we try to remove them from the building while cooling them with water. We then place them behind walls to keep ourselves and others safe. If we know that a metal or concrete building has been burning for a while then we simply do not enter and try to passively/defensively extinguish the fire.

Question: What are the current firefighting tools at your disposal? Do you think that you need something more?

Answer: We have an infrared camera, a CO2 meter to detect dangerous substances, oxygen masks and some tools for opening doors safely.

Question: In the ideal case what functions would you want the robot to perform?

Answer: Most important would be information about heat/a heatmap and information about where people are located inside burning buildings. We would then need information about the structure of the building, how big the fire is and if possible the location of any obstacles inside the building.

Question: What level of autonomy would be best? With autonomy defined as what kinds of permissions the robot has to do without human intervention.

Answer: For us the robot should be able to work on its own, but have a simple interface so that our chief/director can direct it from a tablet in our firetruck if needed.

Question: What information are you missing and would like to know when there is a fire in a building?

Answer: In order of priority:

  1. Information about people in the building
  2. Heatmap
  3. Smoke map
  4. Basic obstacle map

Question: Suppose there is a hard-to-reach or inaccessible area, how influential would it be if a robot would be able to reach it fairly easily?

Answer: This really depends on the area: how many people are inside, whether there is a fire in that location, or whether there are any dangerous/explosive substances.

Question: Have you used robots during your work. If yes what was your experience with them? Have you had any issues with them?

Answer: I have not used any robots in my firefighting career.

Question: Rank the following features based on importance (omitted here since they are present in the answer)

Answer: In order of priority (with an addendum for some features at the end of the answer):

  1. Heat resistance
  2. Ability to find people
  3. Clarity of sensory picture
  4. Mobility
  • Level of autonomy: needs to be autonomous, but with the capacity to direct it if needed
  • Speed: depends on the size of the building; most buildings are not that large
  • Accuracy: relevant at larger scale (we need to distinguish 200°C from 400°C), but high accuracy is not important at smaller scale (160°C vs 180°C)

Question: Do you have something more to add on this topic?

Answer: Finding an entrance point for the robot could be dangerous, we can't really open any doors or windows easily to let the robot go in due to the risk of a back-draft.

Question: How long does a fire take to extinguish for an average house?

Answer: For normal houses the entire process of extinguishing a fire takes about 10-15 minutes.

Appendix 4: Questions

Below is a list of questions (and answers) that was mostly made in preparation for the final presentation, but it can still provide useful insight into our decision-making process where a topic is not covered in any other section.

  • Why a simulation (instead of building a testing environment in real life)?
    • Because a simulation is both better suited to the composition of our group and makes it a lot easier to run repeated tests at short intervals (safely setting an environment on fire in a 'real' setting takes quite a bit of work). It also creates more opportunities for testing fires originating in different locations, and building multiple large-scale environments becomes possible, which would have been extremely hard and time-consuming otherwise. Finally, no real fire is needed: for the purposes of our project the fire does not have to be hyper-realistic, and a simulated fire, even a somewhat simplified one, is accurate enough as long as a certain level of realism is kept.
  • Why this amount of different environments?
    • We wanted to test the robot in a varying set of environments and this number was the most feasible within the timeframe while still offering a good amount of variety in the environments.
  • Why these specific environments?
    • We wanted to test the robot in a varying set of environments, and these three are some of the most common types. The chosen environments consist of a house, a school and an office. These varying environments are useful for testing how the robot handles different kinds of spaces: from the smaller-scale house, to the school with a somewhat more complicated layout but still fairly restricted rooms, to the office with larger open areas.
  • Why are there no stairs in the environment?
    • Right now the robot does not have stair-climbing capabilities, meaning that in the current simulations stairs would not add any value. Stairs can be seen with the robot's infrared camera and will be avoided by the operator.
  • Why is the robot grounded?
    • A flying robot could potentially cause backdrafts and thus cause the fire to spread faster and in a more unpredictable fashion. One of the first things firefighters do (source: interview) is close any window they can see or find.
  • What would happen if the floor isn’t completely flat?
    • The robot has an infrared camera mounted on the front so that, in addition to the map, the operator can actually see the environment. For now it is assumed that the floor is flat, as this holds for almost every building; if there were holes below the level that the mmWave Radar can detect, it would be up to the operator to spot and avoid them.
  • How does the robot deal with obstacles lower or higher than what the mmWave Radar detects?
    • Obstacles at a higher elevation than the robot would not really be a problem, as the Radar is already the highest point of the robot, and lower obstacles are not that prevalent. The robot is also still remote controlled, so the operator would know if something went wrong and can still see the environment through the infrared camera.
  • Why do you assume all windows are closed/ where are the windows?
    • One of the first things firefighters do (see interview) is to close any window they can see/find. So this is a fairly safe assumption to make.
  • Why does the mmWave Radar only take a slice (2d instead of 3d)?
    • We want to provide a readable map to the firefighters, and a 2D map is the easiest way to visualize all the important information (the heatmap and the layout of the building) in a readable manner, so detecting in 3D is not really necessary. (A sketch of such a 2D radar sweep is given after this list.)
  • Why a slice at that specific height?
    • Measuring at the top of the robot ensures that any obstacle blocking the robot at its highest point is detected. A slice at this height may miss smaller, insignificant obstacles, but it does map obstacles that are higher and therefore harder for a firefighter to traverse.
  • Why only simulate fire 2d?
    • It saves us a lot of computational power to simulate the fire in 2D instead of 3D. The fire still spreads in a realistic way that is not overcomplicated, and because we mainly focus on fire detection, mapping and heat mapping by the robot, a perfectly accurate fire simulation is not required. As long as the spread of the fire is realistic (so that the scenario is realistic), this level of simulation is enough to test the heat-mapping capabilities. Moreover, because our map is 2D, 3D fire physics would be irrelevant. (A sketch of a cheap 2D spread model is given after this list.)
  • Why not simulate smoke?
    • We are working with an mmWave Radar that sees through smoke, a temperature sensor that is not affected by smoke, and a thermal sensor that also sees through smoke. This means that none of our sensors are influenced by smoke, so adding it to the simulation would not add any value while requiring even more computational power.
  • Why remote controlled and not autonomous?
    • Making the robot autonomous would have been a very nice addition; however, it was not feasible within the current time frame, so it was not realized for now.
  • Why no automated function to protect the robot from extreme heat?
    • Adding this feature would have been nice; however, it was not possible to realize it within the current time frame. We argue that, for now, the robot will be built to withstand extreme temperatures, so this feature was not needed yet. Also keep in mind that the operator has access to the heatmap built up so far and to an infrared camera, meaning that even if the robot encountered heat levels it cannot handle, the operator could simply steer it around that area.
  • Why no victim detection?
    • Victim detection would have been a very nice addition to the robot; however, it was not feasible within the current time frame, so it was not realized for now. There are known sensors that can detect victims even through smoke (see our research literature with the tag Victim detection).
  • Why use mmWave Radar and not another sensor?
    • An mmWave Radar has a similar function to a lidar but can also operate through smoke, which makes it essentially the best option for general mapping purposes.
  • Why a temperature sensor and not for example infrared camera for heat mapping?
    • An infrared camera can only measure temperature at surfaces, in which case the heatmap data would not be fully accurate. (See the heatmap sketch after this list.)
  • Why this form of robot (three wheeled)?
    • It is currently simply a placeholder for this simulation; as long as the height of the robot does not change, any shape should still work.
  • In what way is making the heatmap useful for a firefighter?
    • For many smaller-scale buildings a general floor plan is simply not available, so both the mapping feature of the robot and its ability to detect heat provide a lot of valuable information. In the interview conducted with the fire station on campus, it turned out that a heatmap is something that is generally missing and quite important to a firefighter.
  • In what way is this innovative and/or contributes something new?
    • We researched the most prominent state-of-the-art firefighting and rescue robots currently on the market, and after thorough research we did not find any robot in use today that builds a map and heatmap in these extreme conditions without any prior information.
  • Why this size of robot?
    • This size was chosen because a small robot like this can easily go around obstacles. Making the robot very large would not add any extra value for its purpose.
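
To make the answers about the 2D radar slice more concrete, below is a minimal sketch of how such a horizontal sweep can be virtualized in a Unity simulation with raycasts (the same Physics.Raycast API listed under Sources). The class name, ray count and maximum range are illustrative assumptions, not the values used in our simulation; the actual sensor virtualization is in the simulation code repository listed under Sources.

```csharp
// Sketch of virtualizing the mmWave Radar's horizontal 2D slice with Unity
// raycasts. Ray count, range and names are illustrative assumptions.
using UnityEngine;

public class RadarSlice2D : MonoBehaviour
{
    public int raysPerScan = 360;   // angular resolution of one sweep
    public float maxRange = 20f;    // metres; beyond this a ray reports no hit

    // One distance per angle, measured in the horizontal plane at the height
    // of this GameObject (mounted at the top of the robot).
    public float[] Scan()
    {
        var distances = new float[raysPerScan];
        for (int i = 0; i < raysPerScan; i++)
        {
            float angle = i * 360f / raysPerScan;
            Vector3 dir = Quaternion.Euler(0f, angle, 0f) * transform.forward;
            distances[i] = Physics.Raycast(transform.position, dir, out RaycastHit hit, maxRange)
                ? hit.distance
                : float.PositiveInfinity; // nothing within range at this angle
        }
        return distances;
    }
}
```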
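
For the question on simulating fire in 2D, the sketch below illustrates how cheap a grid-based 2D spread model is compared to full 3D fire physics. It is purely illustrative: the fire data in our project comes from the FDS models, and the ignition probability here is a made-up parameter.

```csharp
// Illustrative 2D fire spread on a grid: each step, a burning cell may
// ignite its four neighbours with some probability. Not the project's fire
// model (that comes from FDS); this only shows why 2D spread is cheap.
using System;

public class FireGrid2D
{
    private bool[,] burning;
    private readonly Random rng = new Random();
    public double IgnitionChance = 0.25; // per neighbour per step (made-up value)

    public FireGrid2D(int width, int height, int startX, int startY)
    {
        burning = new bool[width, height];
        burning[startX, startY] = true; // the fire's point of origin
    }

    public void Step()
    {
        var next = (bool[,])burning.Clone();
        int w = burning.GetLength(0), h = burning.GetLength(1);
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
            {
                if (!burning[x, y]) continue;
                // Try to ignite the four direct neighbours.
                foreach (var (nx, ny) in new[] { (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1) })
                    if (nx >= 0 && ny >= 0 && nx < w && ny < h && rng.NextDouble() < IgnitionChance)
                        next[nx, ny] = true;
            }
        burning = next;
    }

    public bool IsBurning(int x, int y) => burning[x, y];
}
```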
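
Finally, for the heatmap-related questions, this sketch shows how point temperature readings can be painted into a Unity Texture2D with SetPixel (the API listed under Sources). The colour gradient and the 20–400 °C display range are illustrative assumptions; in practice one would batch pixel updates rather than call Apply per sample.

```csharp
// Sketch of drawing point temperature samples into a heatmap texture.
// Texture size, colour scale and temperature range are illustrative.
using UnityEngine;

public class HeatmapPainter : MonoBehaviour
{
    public Texture2D heatmap;

    void Awake()
    {
        heatmap = new Texture2D(256, 256);
    }

    // Map a temperature in °C onto a blue (cool) -> red (hot) gradient.
    static Color TemperatureToColor(float celsius)
    {
        float t = Mathf.InverseLerp(20f, 400f, celsius); // assumed display range
        return Color.Lerp(Color.blue, Color.red, t);
    }

    // Called whenever the virtual temperature sensor produces a reading at
    // the robot's current position (already converted to pixel coordinates).
    public void Paint(int px, int py, float celsius)
    {
        heatmap.SetPixel(px, py, TemperatureToColor(celsius));
        heatmap.Apply(); // upload the changed pixels to the GPU
    }
}
```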

Sources

  1. Simulation Code - https://github.com/viliamDokov/0LAUK0-FireRescueSim
  2. Digitalhallway. (2007, July 22). Inside house fire. iStock. https://www.istockphoto.com/nl/foto/inside-house-fire-gm172338474-3821332
  3. Schalij, N. (2022, September 20). Fire department will not leave TU/e campus any time soon. Cursor. https://www.cursor.tue.nl/en/nieuws/2022/september/week-3/fire-department-will-not-leave-tu-e-campus-any-time-soon/
  4. Jaradat, F. B., & Valles, D. (2020). A Victims Detection Approach for Burning Building Sites Using Convolutional Neural Networks. 2020 10th Annual Computing and Communication Workshop and Conference (CCWC). https://doi.org/10.1109/ccwc47524.2020.9031275
  5. Jaradat, F., & Valles, D. (2018). Early Warning Embedded System of Dangerous Temperature Using Single Exponential Smoothing for Firefighters' Safety. ResearchGate. https://www.researchgate.net/publication/327209779_Early_Warning_Embedded_System_of_Dangerous_Temperature_Using_Single_exponential_smoothing_for_Firefighters_Safety
  6. Lin, Z., & Tsai, P. (2024). A method to accelerate the rescue of fire-stricken victims. Expert Systems With Applications, 238, 122186. https://doi.org/10.1016/j.eswa.2023.122186
  7. Bogue, R. (2021). The role of robots in firefighting. Industrial Robot: An International Journal, 48(2), 174–178. https://doi.org/10.1108/ir-10-2020-0222
  8. Hong, Y. (2022). SLAM for Firefighting Robots: A Review of Potential Solutions to Environmental Issues. 2022 5th World Conference on Mechanical Engineering and Intelligent Manufacturing (WCMEIM). https://doi.org/10.1109/wcmeim56910.2022.10021457
  9. Li, S., Feng, C., Niu, Y., Shi, L., Wu, Z., & Song, H. W. (2019). A Fire Reconnaissance Robot Based on SLAM Position, Thermal Imaging Technologies, and AR Display. Sensors, 19(22), 5036. https://doi.org/10.3390/s19225036
  10. Zhang, S., Yao, J., Wang, R., Liu, Z., Ma, C., Wang, Y., & Zhao, Y. (2022). Design of intelligent fire-fighting robot based on multi-sensor fusion and experimental study on fire scene patrol. Robotics and Autonomous Systems, 154, 104122. https://doi.org/10.1016/j.robot.2022.104122
  11. Dhiman, A., Shah, N., Adhikari, P., Kumbhar, S., Dhanjal, I. S., & Mehendale, N. (2021). Firefighting robot with deep learning and machine vision. Neural Computing and Applications, 34(4), 2831–2839. https://doi.org/10.1007/s00521-021-06537-y
  12. Hassanein, A., Elhawary, M., Jaber, N., & El-Abd, M. (2015). An autonomous firefighting robot. IEEE. https://doi.org/10.1109/icar.2015.7251507
  13. Gelfert, S. (2023). Real Time Victim Detection in Smoky Environments with Mobile Robot and Multi-sensor Unit Using Deep Learning. In Lecture Notes in Networks and Systems (pp. 351–364). https://doi.org/10.1007/978-3-031-26889-2_32
  14. Ulloa, C. C., Orbea, D., Del Cerro, J., & Barrientos, A. (2024). Thermal, Multispectral, and RGB Vision Systems Analysis for Victim Detection in SAR Robotics. Applied Sciences, 14(2), 766. https://doi.org/10.3390/app14020766
  15. Kim, J., Keller, B., & Lattimer, B. Y. (2013). Sensor fusion based seek-and-find fire algorithm for intelligent firefighting robot. IEEE. https://doi.org/10.1109/aim.2013.6584304
  16. Ramasubramanian, S., & Muthukumaraswamy, S. A. (2021). On the Enhancement of Firefighting Robots using Path-Planning Algorithms. SN Computer Science, 2(3). https://doi.org/10.1007/s42979-021-00578-9
  17. Li, S., Yun, J., Feng, C., Gao, Y., Yang, J., Sun, G., & Zhang, D. (2023). An Indoor Autonomous Inspection and Firefighting Robot Based on SLAM and Flame Image Recognition. Fire, 6(3), 93. https://doi.org/10.3390/fire6030093
  18. Bandala, A. A., Sybingco, E., Maningo, J. M. Z., Dadios, E. P., Isidro, G. I., Jurilla, R. D., & Lai, C. (2020). Human Presence Detection using Ultra Wide Band Signal for Fire Extinguishing Robot. IEEE. https://doi.org/10.1109/tencon50793.2020.9293893
  19. Kim, J., Starr, J. W., & Lattimer, B. Y. (2014). Firefighting Robot Stereo Infrared Vision and Radar Sensor Fusion for Imaging through Smoke. Fire Technology, 51(4), 823–845. https://doi.org/10.1007/s10694-014-0413-6
  20. Tong, T., Guo, F., Wu, X., Dong, H., Liu, O., & Yu, L. (2021). Global Path Planning for Fire-Fighting Robot Based on Advanced Bi-RRT Algorithm. IEEE. https://doi.org/10.1109/iciea51954.2021.9516153
  21. Rein, G., Torero, J. L., Jahn, W., Stern-Gottfried, J., Ryder, N. L., Desanghere, S., Lázaro, M., Mowrer, F. W., Coles, A., Joyeux, D., Alvear, D., Capote, J., Jowsey, A., Abecassis-Empis, C., & Reszka, P. (2009). Round-robin study of a priori modelling predictions of the Dalmarnock Fire Test One. Fire Safety Journal, 44(4), 590–602. https://doi.org/10.1016/j.firesaf.2008.12.008
  22. Hard, D. L., Marsh, S. M., Merinar, T. R., Bowyer, M. E., Miles, S. T., Loflin, M. E., & Moore, P. W. (2019). Summary of recommendations from the National Institute for Occupational Safety and Health Fire Fighter Fatality Investigation and Prevention Program, 2006–2014. Journal of Safety Research, 68, 21–25. https://doi.org/10.1016/j.jsr.2018.10.013
  23. Delmerico, J. A., Mintchev, S., Giusti, A., Gromov, B., Melo, K., Havaš, L., Cadena, C., Hutter, M., Ijspeert, A. J., Floreano, D., Gambardella, L. M., Siegwart, R., & Scaramuzza, D. (2019). The current state and future outlook of rescue robotics. Journal of Field Robotics, 36(7), 1171–1191. https://doi.org/10.1002/rob.21887
  24. Ma, Y., Feng, X., Jiao, J., Peng, Z., Qian, S., Xue, H., & Li, H. (2020). Smart Fire Alarm System with Person Detection and Thermal Camera. In Lecture Notes in Computer Science (pp. 353–366). https://doi.org/10.1007/978-3-030-50436-6_26
  25. Lu, C. X., La Rosa, S., Zhao, P., Wang, B., Chen, C., Stankovic, J. A., Trigoni, N., & Markham, A. (2019). See Through Smoke: Robust Indoor Mapping with Low-cost mmWave Radar. arXiv (Cornell University). https://doi.org/10.48550/arxiv.1911.00398
  26. Naghsh, A. M., Gancet, J., Tanoto, A., & Roast, C. (2008). Analysis and design of human-robot swarm interaction in firefighting. IEEE. https://doi.org/10.1109/roman.2008.4600675
  27. Min, B., Matson, E. T., Smith, A., & Dietz, J. E. (2014). Using directional antennas as sensors to assist fire-fighting robots in large scale fires. IEEE. https://doi.org/10.1109/sas.2014.6798976
  28. Reddy, M. S. (2021). Design and implementation of autonomous fire fighting robot. Turkish Journal of Computer and Mathematics Education (TURCOMAT), 12(12), 2437–2441. https://turcomat.org/index.php/turkbilmat/article/view/7836
  29. Hong, J. H., Min, B., Taylor, J. M., Raskin, V., & Matson, E. T. (2012). NL-based communication with firefighting robots. IEEE. https://doi.org/10.1109/icsmc.2012.6377941
  30. Vigne, G., Węgrzyński, W., Cantizano, A., Ayala, P., Rein, G., & Gutiérrez-Montes, C. (2020). Experimental and computational study of smoke dynamics from multiple fire sources inside a large-volume building. Building Simulation, 14(4), 1147–1161. https://doi.org/10.1007/s12273-020-0715-1
  31. Salamonowicz, Z., Majder-Łopatka, M., Dmochowska, A., Piechota-Polańczyk, A., & Polańczyk, A. (2021). Numerical Analysis of Smoke Spreading in a Medium-High Building under Different Ventilation Conditions. Atmosphere, 12(6), 705. https://doi.org/10.3390/atmos12060705
  32. Gündüz, M. Z., Işıkdağ, Ü., & Başaraner, M. (2016). A review of recent research in indoor modelling & mapping. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLI-B4, 289–294. https://doi.org/10.5194/isprs-archives-xli-b4-289-2016
  33. Łobodecki, J., & Gotlib, D. (2022). Developing a simulator of a mobile indoor navigation application as a tool for cartographic research. Polish Cartographical Review, 54(1), 108–122. https://doi.org/10.2478/pcr-2022-0008
  34. Maschek, M. (2010). Real Time Simulation of Fire Extinguishing Scenarios [Technical University Wien]. https://www.cg.tuwien.ac.at/research/publications/2010/maschek-2010-rts/maschek-2010-rts-Paper.pdf
  35. Tan, C. F., Liew, S., Alkahari, M. R., Ranjit, S., Said, Chen, W., Rauterberg, G., & Sivakumar, D. (2013). Fire Fighting Mobile Robot: State of the Art and Recent Development. Australian Journal of Basic and Applied Sciences, 7(10), 220–230. http://www.idemployee.id.tue.nl/g.w.m.rauterberg/publications/AJBAS2013journal-a.pdf
  36. LUF 60 - Wireless remote control fire fighting machine. (n.d.). Steemit. https://steemit.com/steemhunt/@memesdaily/luf-60-wireless-remote-control-fire-fighting-machine
  37. LUF 60 - LUF GmbH. (n.d.). https://www.luf60.at/en/extinguishing-support/fire-fighting-robot-luf-60/
  38. Extinguishing Support - LUF GmbH. (n.d.). https://www.luf60.at/en/extinguishing-support/
  39. EHang Announced Completion of EH216F's Technical Examination by NFFE. (n.d.). EHang. https://www.ehang.com/news/798.html
  40. EHang EH216-F (production model). (n.d.). https://evtol.news/ehang-eh216-f
  41. THERMITE®. (n.d.). https://www.howeandhowe.com/civil/thermite
  42. LAFD Debuts the RS3: First Robotic Firefighting Vehicle in the United States | Los Angeles Fire Department. (n.d.). https://www.lafd.org/news/lafd-debuts-rs3-first-robotic-firefighting-vehicle-united-states
  43. THERMITE®. (n.d.-b). https://www.howeandhowe.com/civil/thermite
  44. DPG Media Privacy Gate. (n.d.). https://www.ad.nl/binnenland/robot-colossus-bleef-de-notre-dame-koelen-van-binnenuit~aee4510a2/
  45. Colossus advanced firefighting robot | Shark Robotics. (n.d.). https://www.shark-robotics.com/robots/Colossus-firefighting-robot
  46. Shark Robotics - Leader in safety robotics. (n.d.). https://www.shark-robotics.com/
  47. Uncover Myriad Uses of Robotics across varied industries - DEEP Robotics. (n.d.). https://www.deeprobotics.cn/en/index/industry.html
  48. X20 Hazard Detection & Rescue Solution. (n.d.). DEEP Robotics. https://deep-website.oss-cn-hangzhou.aliyuncs.com/file/X20%20Hazard%20Detection%20%26%20Rescue%20Solution.pdf
  49. DEEP Robotics - Global Quadruped Robot Leader. (n.d.). https://www.deeprobotics.cn/en/index.html
  50. X20: The Ultimate Quadruped Bot series for Industrial Use - DEEP Robotics. (n.d.). https://www.deeprobotics.cn/en/index/product.html
  51. SkyRanger® R70 | Teledyne FLIR. (n.d.). https://www.flir.eu/products/skyranger-r70/?vertical=uas&segment=uis
  52. Support for SkyRanger R70 | Teledyne FLIR. (n.d.). https://www.flir.com/support/products/skyranger-r70/?vertical=uas&segment=uis#Documents
  53. Thermal Imaging, Night Vision and Infrared Camera Systems | Teledyne FLIR. (n.d.). https://www.flir.eu/
  54. NetLogo Models Library: GenEvo 2 Genetic Drift. (n.d.). https://ccl.northwestern.edu/netlogo/models/GenEvo2GeneticDrift
  55. Repast Suite Documentation. (n.d.). https://repast.github.io/quick_start.html
  56. Fig. 7: RVIZ node panel for human-robot visual interface on ROS ecosystem. (n.d.). ResearchGate. https://www.researchgate.net/figure/RVIZ-node-panel-for-human-robot-visual-interface-on-ROS-ecosystem_fig7_305730015
  57. What is FDS? - FDS Tutorial. (2021, February 25). FDS Tutorial. https://fdstutorial.com/what-is-fds/
  58. Unity Technologies. (n.d.). Unity - Manual: The Scene view. https://docs.unity3d.com/Manual/UsingTheSceneView.html
  59. Unity - Rigidbody. https://docs.unity3d.com/ScriptReference/Rigidbody.html
  60. Unity - Hinge Joint. https://docs.unity3d.com/Manual/class-HingeJoint.html
  61. Pololu - Ball Wheel. https://www.pololu.com/product/950
  62. Unity - ConfigurableJoint. https://docs.unity3d.com/Manual/class-ConfigurableJoint.html
  63. Unity - Raycast. https://docs.unity3d.com/ScriptReference/Physics.Raycast.html
  64. Unity - Camera component. https://docs.unity3d.com/Manual/class-Camera.html
  65. Riisgaard, S., & Blas, M. R. (2005). SLAM for Dummies. https://dspace.mit.edu/bitstream/handle/1721.1/119149/16-412j-spring-2005/contents/projects/1aslam_blas_repo.pdf
  66. Steux, B., & El Hamzaoui, O. CoreSLAM: a SLAM Algorithm in less than 200 lines of C code. https://www.researchgate.net/publication/228374722_CoreSLAM_a_SLAM_Algorithm_in_less_than_200_lines_of_C_code
  67. hector_slam - ROS Wiki. https://wiki.ros.org/hector_slam
  68. Mikkleini. slam.net: Simultaneous localization and mapping libraries for C#. GitHub. https://github.com/mikkleini/slam.net
  69. Unity - SetPixel. https://docs.unity3d.com/ScriptReference/Texture2D.SetPixel.html
  70. Unity - Texture2D. https://docs.unity3d.com/ScriptReference/Texture2D.html
  71. Unity - RawImage. https://docs.unity3d.com/2018.2/Documentation/ScriptReference/UI.RawImage.html
  72. Unity - ComputeShader. https://docs.unity3d.com/ScriptReference/ComputeShader.html