PRE2017 3 Groep12
- Anne Kolmans
- Dylan ter Veen
- Jarno Brils
- Renée van Hijfte
- Thomas Wiepking
Globally there are about 285 million visually impaired people, of whom 39 million are totally blind. At the same time, there is a shortage of guide dogs to support all visually impaired persons: in Korea alone, there are only about 65 guide dogs in total for roughly 45,000 visually impaired people. Besides the shortage, guide dogs also have limitations. Some people are allergic to certain dog breeds or have a social aversion to dogs, which makes them unsuited to being guided by one. Guide dogs also impose extra tasks on the user: they need to be fed, walked, and cared for, which can be particularly difficult for visually impaired people who also suffer from a form of dementia. Lastly, training guide dogs is very demanding, and even then only about 70% of trained dogs eventually qualify to guide the visually impaired. Because of this shortage of support tools and guide dogs, there is a need for innovative ways to support the visually impaired. We propose a guiding robot to help address this shortage and to overcome some of the limitations that come with guide dogs.
We want the robot to navigate itself through an environment that may contain both moving and stationary obstacles. This process should be fully autonomous, i.e. without any user input during navigation. This goal is important because visually impaired people will not be able to guide the robot through its navigation process; if that were necessary, the whole purpose of the robot would be defeated. Therefore, the only actor involved in this process is the robot itself. The robot must be able to navigate itself and the user over paved areas, the kind of terrain an average person walks over most.
The scanning setup will consist of one or more cameras; the robot will not have advanced radar equipment. The robot will also not be able to travel on paths with large height differences, such as stairways. The robot must be able to guide exactly one person autonomously through an environment matching our environmental constraints. This task is accomplished when a person gets from point A to point B safely, i.e. without hitting obstacles on the path. This goal can be achieved by implementing obstacle-recognition software for the attached cameras. Together with an avoidance algorithm, the robot will then be able to navigate around obstacles on its path. With constraints such as no advanced radar equipment and a restricted area type, this goal is realizable.
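To illustrate how camera-based obstacle recognition and an avoidance algorithm could work together, below is a minimal steering sketch under assumed inputs: the camera pipeline is assumed to output each obstacle as a (bearing, distance) pair, and `avoidance_heading` is a hypothetical name. It blends attraction towards the goal with repulsion from nearby obstacles, a simplified potential-field approach, not a definitive design.

```python
import math

def avoidance_heading(goal_bearing, obstacles, safe_dist=1.5):
    """Blend attraction to the goal with repulsion from detected obstacles.

    goal_bearing: bearing to the goal in radians (0 = straight ahead).
    obstacles: list of (bearing, distance) pairs, assumed to come from
        the camera pipeline (hypothetical interface).
    Returns the steering bearing in radians.
    """
    # Attractive component: a unit vector towards the goal.
    ax, ay = math.cos(goal_bearing), math.sin(goal_bearing)
    for bearing, dist in obstacles:
        if dist < safe_dist:
            # Repulsion grows linearly as the obstacle gets closer.
            weight = (safe_dist - dist) / safe_dist
            ax -= weight * math.cos(bearing)
            ay -= weight * math.sin(bearing)
    return math.atan2(ay, ax)
```

With a clear path the robot heads straight for the goal; an obstacle to the left pushes the heading to the right. A walking-pace guiding robot could get away with such a simple reactive layer on top of a global route planner.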
This goal should be realized within 6 weeks. If the robot is not completely autonomous by then, there will be no time left to make changes to the software and/or hardware of the robot.
Easy to use
The guiding robot must be easy to use. This goal is important since the user's ability to see the robot is limited, and any confusion about the functioning of the robot must be prevented at all times, since it could lead to dangerous situations for the user. This goal involves both the robot and its user.
The interface of the robot can consist of voice recognition or physical buttons. At all times, the robot's feedback must be clear to the user. Since the user is visually impaired, this feedback cannot consist of visual elements. It must always be clear to the user what the robot is doing and what its status is (e.g. battery level, or any kind of error). By keeping the user interface simple, it will be realistic to implement. If we choose voice recognition, we can use existing voice-control software packages.
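A sketch of what such a simple, non-visual interface could look like, assuming a separate speech-recognition package already turns audio into text and a text-to-speech component speaks the returned strings; the command set and the function name `handle_utterance` are illustrative assumptions, not a fixed design.

```python
# Hypothetical command vocabulary; kept deliberately small so that
# feedback stays unambiguous for a visually impaired user.
COMMANDS = {
    "start": "begin guiding",
    "stop": "halt immediately",
    "status": "report battery and error state",
}

def handle_utterance(text, battery_pct=100, error=None):
    """Map a recognized phrase to spoken (non-visual) feedback."""
    word = text.strip().lower()
    if word not in COMMANDS:
        return "Sorry, I did not understand. Say start, stop or status."
    if word == "status":
        if error:
            return f"Warning: {error}. Battery at {battery_pct} percent."
        return f"All systems normal. Battery at {battery_pct} percent."
    return f"Okay: {COMMANDS[word]}."
```

Every path returns a spoken sentence, so the robot never leaves the user guessing about what it heard or what it is doing.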
The user interface must be defined in week 3 and implemented in week 7. Once the type of user interface is defined, we can start looking for methods to implement it on our robot.
At all times, we want the user interacting with the robot to be safe. The purpose of the robot is to prevent accidents involving its user; if the robot is not programmed so that all risky situations are prevented, it has no purpose. This goal involves both the user and the robot itself. In every situation the user must be in a position that the robot can take into account in its safety decisions. For example, the user can stand behind the robot holding a handgrip, so that the user is never anywhere other than behind the robot. The robot must also stop and give a signal when the user releases the handgrip, so that dangerous situations are prevented.
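The stop-on-release behaviour described above can be sketched as a small monitor that runs every control cycle. The class name and the `grip_sensor`, `drive`, and `beeper` interfaces are hypothetical stand-ins for whatever sensor and motor APIs the final robot provides.

```python
class HandgripMonitor:
    """Stop the robot and sound a signal when the handgrip is released."""

    def __init__(self, grip_sensor, drive, beeper):
        self.grip_sensor = grip_sensor  # callable: True while grip is held
        self.drive = drive              # motor interface with a stop() method
        self.beeper = beeper            # audio interface with a signal() method
        self.stopped = False

    def tick(self):
        """Call once per control cycle; returns True while stopped."""
        if not self.grip_sensor():
            if not self.stopped:
                # Released: stop once and warn once, not every cycle.
                self.drive.stop()
                self.beeper.signal("handgrip released")
                self.stopped = True
        else:
            self.stopped = False
        return self.stopped
```

Keeping the check in a single periodic tick means the safety behaviour cannot be bypassed by any other part of the navigation software.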
This goal can be measured by simulating several scenarios that contain a dangerous situation and checking whether the robot prevents the user from coming to any harm. When the robot passes all scenarios, the goal is reached.
Of course, not all possibly dangerous situations can be modelled and tested. We therefore limit this goal to the prevention of a list of dangerous situations, drawn up by us, that covers the most common scenarios in our restricted area. This limitation is required to make the goal realizable within this course. The list of dangerous scenarios and how to tackle them must be finished in week 3 and implemented in week 7.
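Such a scenario list lends itself to an automated pass/fail check. Below is a sketch of how the goal could be measured; the scenario names and required responses are placeholder examples, not our final list, and `choose_action` stands in for whatever decision function the robot software exposes.

```python
# Illustrative scenario catalogue: each entry names a dangerous
# situation and the response the robot is required to give.
SCENARIOS = [
    {"name": "car approaching at crossing", "required": "stop"},
    {"name": "obstacle blocking pavement", "required": "avoid"},
    {"name": "handgrip released", "required": "stop"},
]

def passes_all(choose_action, scenarios=SCENARIOS):
    """Run every scenario through the robot's decision function.

    Returns (ok, failures): the safety goal is met only when the
    list of failing scenario names is empty.
    """
    failures = [s["name"] for s in scenarios
                if choose_action(s["name"]) != s["required"]]
    return (not failures, failures)
```

Running this harness after every software change would show immediately which dangerous scenarios the current build still handles incorrectly.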
The robot that we want to develop, and that we describe in this wiki, is a robot that will replace the guide dog. It will be used by the visually impaired in the same way they now use guide dogs: to walk indoors and outdoors and to find their way to a store, bus stop, etc. The robot will therefore not only affect the visually impaired person using it directly; it will also affect the people in that person's direct surroundings. For example, when a visually impaired person using the guiding robot wants to cross a street, drivers must be able to rely on the robot not to cross when this is unsafe. This gives us the visually impaired as primary users, the people in the surroundings of a guided visually impaired person as secondary users, and the developers as tertiary users.
The users and their different needs regarding the guiding robot
Visually impaired (primary users)
- Making it safe to walk outside
- Making it easier to take a walk outside
- Navigating them to different destinations
The surrounding (secondary users)
- The guiding robot detects cars, bicycles and pedestrians
- The guiding robot walks around obstacles
- The guiding robot walks only where it is allowed and safe to walk
Developers (tertiary users)
- The guiding robot is better and more reliable than guide dogs
- The software is easy to adapt
- The guiding robot needs as little maintenance as possible
State of the Art
- Which robot we will use to implement the software.
- Perceiving the environment and recognizing obstacles.
- Obstacle avoidance.
- Adapting to the environment (tension on the rope).
- GPS navigation.
- Voice recognition.
| Task | Who | Time | Week |
|------|-----|------|------|
| Discuss initial project | All | 1h | Week 1 |
| Subject (wiki) | Anne | 2h | Week 1 |
| Users (wiki) | Renée | 2h | Week 1 |
| SMART objectives (wiki) | Thomas | 3h | Week 1 |
| Approach (wiki) | Dylan | 1h | Week 1 |
| Deliverables (wiki) | Dylan | 1h | Week 1 |
| Milestones (wiki) | Jarno | 1h | Week 1 |
| Planning (wiki) | Jarno | 1h | Week 1 |
| Discuss week 1 tasks | All | 2h | Week 1 |
| State of the art (wiki) | | | |
| - Perceiving the environment | Dylan | 2h | Week 1 |
| - Obstacle avoidance | Renée | 2h | Week 1 |
| - GPS navigation and voice recognition | Thomas | 2h | Week 1 |
| - Robotic design | Jarno | 2h | Week 1 |
| - Guiding dogs | Anne | 2h | Week 1 |
| Determine specific deliverables | All | 4h | Week 2 |
| Add details to planning | All | 3h | Week 2 |
| Meeting preparation | Thomas | 1h | Week 2 |
| Meeting preparation | | 1h | Week 3 |
| Meeting preparation | | 1h | Week 4 |
| Meeting preparation | | 1h | Week 5 |
| Meeting preparation | | 1h | Week 6 |
| Meeting preparation | | 1h | Week 7 |
| Presentation preparation | All | 20h | Week 8 |
During this project, the following milestones have been determined. They may be expanded once we have a better understanding of how we are going to tackle the project. Especially the decision whether to use an existing robot or to build one ourselves will heavily influence these milestones and their deadlines. Note that the planning also lacks details, which will be filled in in week 2.
| Milestone | Week |
|-----------|------|
| Research is complete | Week 1 |
| Hardware is available (either full robot or necessary parts) | Week 3 |
| Robot can scan the environment | Week 5 |
| Robot can keep the user in mind | Week 6 |
| Robot is fully autonomous | Week 6 |
| Robot can guide the user in a restricted area | Week 8 |
- Cho, K. B., & Lee, B. H. (2012). Intelligent lead: A novel HRI sensor for guide robots. Sensors (Switzerland), 12(6), 8301–8318. https://doi.org/10.3390/s120608301
- Bray, E. E., Sammel, M. D., Seyfarth, R. M., Serpell, J. A., & Cheney, D. L. (2017). Temperament and problem solving in a population of adolescent guide dogs. Animal Cognition, 20(5), 923–939. https://doi.org/10.1007/s10071-017-1112-8