Group 12: State of the Art
The use of dogs to guide the visually impaired comes with some limitations, which researchers have tried to overcome by developing artificial guidance systems (Kang, Kim, Lee & Bien, 2001). There are two types of guidance system: a wearable type (e.g. a cane) and a mobile robot type, which mimics the behaviour of a guide dog. The second type has its own mobility and is separate from the user, although it can be attached to the user in some way. This mobility enables active guidance.
Kang et al. (2001) proposed an active guidance method for a mobile guide robot. When the robot is used as a guidance tool, it should follow a pattern of behaviours that enables the user to follow it easily. Using a fuzzy grid-type local map to estimate the intentions of surrounding objects, combined with multiobjective decision making, helps the robot accomplish its task. However, the current fuzzy grid-type map is not robust enough and should be developed further.
According to Hersh and Johnson (2010a), most robotic guides for the visually impaired work on the principle that the robot changes direction when an obstacle is detected in its path; this change is communicated haptically, as the robot has enough mass for the user to feel the movement through the handle. Current guides are all wheeled, since wheeled robots are easier to design and more stable than legged robots. Legged robots, however, can move up and down stairs and walk on uneven terrain. The appearance of the robot is another important characteristic, because user acceptance depends strongly on it. Hersh and Johnson investigated which functions users would like to see in a robotic guide and found that users wanted all of the proposed functions: obstacle avoidance, location, navigation, location of goods and reading street names. Regarding appearance, it was suggested that the robot should be as inconspicuous as possible and not attract attention, while still being robust, small, lightweight and elegant.
Among travel aids, the guide dog is a popular aid for obstacle avoidance; however, most robotic travel aids have not yet gone beyond the prototype stage (Hersh & Johnson, 2010b). Localisation is important to determine the robot's pose (heading direction and coordinates). Three problems need to be solved: updating the robot's position from an initially known pose, localisation from an initially unknown pose, and relocalisation after the robot is moved to a random pose. The two main approaches for modelling indoor environments are grid-based and topological. Most robotic guides use a combination of haptics and speech to communicate with end-users; through haptics they communicate their path and velocity.
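The grid-based modelling approach mentioned above can be illustrated with a minimal occupancy grid: the indoor environment is discretised into cells that are either free or occupied. The map layout and function names below are illustrative assumptions, not taken from any cited system.

```python
# Minimal occupancy-grid sketch: each cell is free (0) or occupied (1).
# Layout and names are illustrative, not from a cited system.

def make_grid(width, height, obstacles):
    """Build an occupancy grid; obstacles is a set of (x, y) cells."""
    return [[1 if (x, y) in obstacles else 0 for x in range(width)]
            for y in range(height)]

def is_free(grid, x, y):
    """A cell is traversable if it lies inside the map and is unoccupied."""
    return 0 <= y < len(grid) and 0 <= x < len(grid[0]) and grid[y][x] == 0

grid = make_grid(5, 4, {(2, 1), (2, 2)})  # a short wall in the middle
print(is_free(grid, 0, 0))  # True: start cell is free
print(is_free(grid, 2, 1))  # False: occupied by the wall
```

A topological map would instead store only landmark nodes (doors, corridors) and the connections between them, trading metric precision for compactness.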
Hersh and Johnson (2010b) also surveyed respondents about desired specifications. They found that battery life should be at least 16 hours, with several days preferred. Furthermore, the robot should be rechargeable without the use of vision, and maintenance should be minimal. The robot should be easy to use and require little training. It should be robust and reliable, able to cope with different types of weather, water, knocks and uneven terrain. The interface should be accessible and the appearance customisable. A long handle is required for the user to be able to feel the movement of the robot.
Using a guide robot in an assistive mode, visually impaired users were able to find an obstacle-free moving direction, detect stairs and steps, and obtain information about the environment (Capi, Kitani & Ueki, 2013). In guiding mode, the robot can navigate through non-stationary and dynamic indoor environments using neural controllers. Navigation in urban environments poses more challenges than indoor navigation, because pathway characteristics vary from narrow to wide pedestrian walkways, squares, junctions, etc. To guide the visually impaired, the robot has to move at a moderate constant speed (e.g. 0.6 m/s). Furthermore, the motion should be smooth, without sudden changes in robot speed.
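The smooth-motion requirement can be sketched as acceleration-limited speed control: the robot moves toward the target speed without ever exceeding an acceleration limit. The 0.6 m/s target comes from the text; the acceleration limit and control period are illustrative assumptions.

```python
# Sketch of acceleration-limited speed control for smooth guiding motion.
# TARGET_SPEED is from the text; MAX_ACCEL and DT are assumed values.

TARGET_SPEED = 0.6   # m/s, moderate constant guiding speed
MAX_ACCEL = 0.3      # m/s^2, assumed comfort limit
DT = 0.1             # s, control loop period

def next_speed(current, target=TARGET_SPEED):
    """Move toward the target speed without exceeding the acceleration limit."""
    step = max(-MAX_ACCEL * DT, min(MAX_ACCEL * DT, target - current))
    return current + step

speed = 0.0
for _ in range(25):            # ramp up from standstill
    speed = next_speed(speed)
print(round(speed, 2))  # 0.6: the target speed is reached smoothly
```

The same clamping also prevents abrupt braking: when the target drops, the speed decreases gradually rather than instantly.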
Different robots have been studied as guiding tools for the visually impaired, e.g. GuideCane, RoJi and ROVI (Cho & Lee, 2012). Force transmitted through a stick is used by the user to avoid obstacles, but this means the stick also transmits every bump directly to the user. There has also been a study in which the user follows the robot on a dog leash; the flexibility of the leash then absorbs the shocks, but the robot can no longer acquire information about the relative locations of the user and itself. What is needed is a robot that maintains a cooperative relationship with the user rather than merely following commands.
Using a sound localisation system is one possible way to sense the environment. This technique uses multiple microphones to determine the angle at which a sound source is located. It can even be used to detect whether an object lies between the sensors and the sound source.
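The angle estimation described above can be sketched with the simplest case: two microphones and the time difference of arrival (TDOA) between them, which gives the source bearing via angle = arcsin(c·Δt/d). The microphone spacing and delays below are illustrative assumptions.

```python
import math

# Two-microphone TDOA bearing sketch. SPEED_OF_SOUND is a physical
# constant; MIC_SPACING and the example delays are assumed values.

SPEED_OF_SOUND = 343.0  # m/s at room temperature
MIC_SPACING = 0.2       # m between the two microphones (assumed)

def bearing_from_tdoa(dt):
    """Angle of the sound source (radians) off the array's broadside axis."""
    ratio = SPEED_OF_SOUND * dt / MIC_SPACING
    ratio = max(-1.0, min(1.0, ratio))  # clamp against measurement noise
    return math.asin(ratio)

# A source directly in front arrives at both microphones simultaneously:
print(round(math.degrees(bearing_from_tdoa(0.0)), 1))       # 0.0 degrees
# A delay of ~0.29 ms corresponds to roughly 30 degrees off-axis:
print(round(math.degrees(bearing_from_tdoa(0.0002915)), 1))  # 30.0 degrees
```

With more than two microphones, several pairwise bearings can be intersected to estimate the source position rather than only its direction.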
There are several techniques to determine a model of the environment using different sensors, each with its own pros and cons. Most of the focus is put on static objects, like roads and traffic lights, since these remain stationary. In the case of traffic lights, however, a light can emit many different colours, which makes detection harder. The techniques described achieve at most an 83% precision rate for traffic light detection.
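One reason colour variation hurts precision can be illustrated with a naive hue-threshold classifier: it only recognises lights whose colour falls inside hand-tuned bands, and real signals often fall outside them. All thresholds and names below are assumptions for illustration, not from the cited techniques.

```python
import colorsys

# Naive hue-band classifier for a single lit pixel (normalised RGB).
# All thresholds are illustrative assumptions.

def classify_light(r, g, b):
    """Classify a pixel as 'red', 'amber', 'green' or 'unknown'."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    if s < 0.4 or v < 0.4:
        return "unknown"          # too dull to be a lit signal
    degrees = h * 360.0
    if degrees < 20 or degrees > 340:
        return "red"
    if 20 <= degrees < 70:
        return "amber"
    if 90 <= degrees < 160:
        return "green"
    return "unknown"

print(classify_light(1.0, 0.1, 0.1))  # red
print(classify_light(0.2, 0.9, 0.3))  # green
```

A bluish-green LED signal or a washed-out red in bright sunlight lands outside these bands and is misclassified, which is exactly the colour-variation problem the text describes.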
A dynamic environment cannot be learned as a single fixed map, precisely because it is dynamic. What is possible is to determine the possible configurations of the dynamic objects themselves, producing maps of possible configurations of the environment. This is mainly useful for low-dynamic environments, since the number of objects contributing to the configurations must stay small enough that the state space does not explode. Furthermore, the state of the environment must be observed at different times to determine which objects are dynamic and how the configuration maps should be built.
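The idea of observing the environment at different times can be sketched by comparing occupancy snapshots: cells occupied in some observations but not all belong to dynamic objects. The grids and cell coordinates below are illustrative assumptions.

```python
# Sketch of separating static from dynamic occupancy by comparing
# snapshots taken at different times. Cell coordinates are illustrative.

def dynamic_cells(snapshots):
    """Cells occupied in some snapshots but not all are dynamic objects."""
    always = set.intersection(*snapshots)   # static obstacles (e.g. walls)
    ever = set.union(*snapshots)            # everything ever seen occupied
    return ever - always

# Three observations of the same room: the wall stays, a door cell toggles.
morning = {(0, 0), (0, 1), (3, 2)}   # (3, 2): door closed
noon    = {(0, 0), (0, 1)}           # door open
evening = {(0, 0), (0, 1), (3, 2)}

print(sorted(dynamic_cells([morning, noon, evening])))  # [(3, 2)]
```

The distinct occupancy patterns of such dynamic cells (door open vs. closed) are exactly the configurations that the configuration maps enumerate.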
Once all obstacles have been detected and defined, the surroundings of the guide robot are known. We can then determine the path the robot should take from the initial position to the goal position of the visually impaired user. Finding this path is known as the motion planning problem in robotics: an object with a starting position and a goal position moves through a workspace containing obstacles. In our case, the visually impaired user together with the guide robot is the object with its starting and goal positions, and the surroundings form the workspace with its obstacles.

One approach to the motion planning problem is to divide it into two sub-problems: the 'Findspace' problem and the 'Findpath' problem. The 'Findspace' problem was already covered under environment perception, which leaves the 'Findpath' problem: finding a continuous path between the obstacles from the starting position to the goal position. Genetic algorithms (GAs) have gained popularity for finding such a path. A GA first generates random paths, then produces new candidate paths by crossing over and mutating the old ones; this is repeated until a path satisfies all conditions, and that path is then executed. Since this approach is largely based on random search, we will instead try classical path-finding algorithms in our own project. These are also used regularly to find paths, although applying, for example, a plain shortest-path algorithm is harder in a real-world situation.
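A minimal sketch of the classical path-finding approach we propose, using breadth-first search on an occupancy grid (which finds a shortest path in steps). The grid, start and goal are illustrative assumptions; 1 marks an obstacle cell.

```python
from collections import deque

# Breadth-first 'Findpath' sketch on an occupancy grid.
# The grid, start and goal below are illustrative assumptions.

def find_path(grid, start, goal):
    """Return a list of (x, y) cells from start to goal, or None if blocked."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # walk back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        x, y = cell
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < cols and 0 <= ny < rows
                    and grid[ny][nx] == 0 and (nx, ny) not in came_from):
                came_from[(nx, ny)] = (x, y)
                frontier.append((nx, ny))
    return None                              # goal unreachable

grid = [[0, 0, 0],
        [0, 1, 0],    # one obstacle in the middle
        [0, 0, 0]]
print(find_path(grid, (0, 0), (2, 2)))  # a shortest route around the obstacle
```

Unlike the random search of a GA, this search is systematic and guarantees a shortest path on the grid; the real-world difficulty the text mentions comes from turning continuous, changing surroundings into such a grid in the first place.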
- Janglová, D. (2004). Neural Networks in Mobile Robot Motion. International Journal of Advanced Robotic Systems, 15-22. http://journals.sagepub.com.dianus.libr.tue.nl/doi/abs/10.5772/5615
- Kang, B.-Y., Xu, M., Lee, J., & Kim, D.-W. (2014). ROBIL: Robot Path Planning Based on PBIL Algorithm. International Journal of Advanced Robotic Systems, 1. http://journals.sagepub.com/doi/abs/10.5772/58872