PRE2017 1 Groep3

From Control Systems Technology Group
Members of group 3
  • Karlijn van Rijen (0956798)
  • Gijs Derks (0940505)
  • Tjacco Koskamp (0905569)
  • Luka Smeets (0934530)
  • Jeroen Hagman (0917201)


Introduction

Robotics is a rapidly evolving technology that could bring many improvements to the modern world as we know it today. The challenge, however, is to invest in the kind of robotics that will make its investments worthwhile, instead of in research that will never be able to pay its investments back. This report investigates a robotics technology with the goal of solving the initial problem statement. This chapter describes the problem that was chosen, the objective of our project and the approach that shows how the solution will take its form.

Problem Definition & Approach

When you travel by train on a regular basis, you might have noticed that it goes rather slowly when people in a wheelchair need to enter or exit the train. Before they can get on or off, train personnel first have to fetch some sort of ramp to let them board or alight, and the train might even be delayed because of this. As we know, trains in the Netherlands tend to run late sometimes, so every obstacle that gets in the way of the schedule should be taken care of; boarding wheelchair users is definitely one of those obstacles, because it tends to cause delays. The perspective of the disabled person is also important. For them, the feeling of constantly being dependent on others is the worst part of living with a handicap. This dependence raises the threshold for these people to travel by train. The disabled in general lose a part of their long-distance mobility when they stop using the train. This might have an impact on their social well-being (Oishi, 2010); it might be a cause for loneliness or depression, as the disabled are not able to sustain distant relations (Steptoe et al., 2013). In a survey conducted by the SP, 154 handicapped persons shared their complaints. Laurens Ivens and Agnes Kant translated these complaints into thirteen recommendations, among which they state that the height difference between train and platform should be reduced or bridged more easily, that there should be a travel tracking system for the handicapped, and that accessibility has to be increased (Ivens and Kant, 2004). This project will research improvements for disabled people in wheelchairs travelling by train in the Netherlands.

This project will first determine the problems wheelchair-bound people face when travelling by train. Then, we look at the different stakeholders and possible solutions. After that, questionnaires will be held among stakeholders to determine their needs. Finally, a final design for our helping robot will be made, and a prototype will demonstrate some of the working principles that need to be proven in order to give credibility to the final design.

USE-analysis

To get a better view of the design criteria the design should comply with, the USE aspect of the problem statement and the objective will be considered in this section. Moreover, the topic will be discussed from the perspective of several other stakeholders, such as society and train companies (i.e. the Nederlandse Spoorwegen).

Who are the users?

The first step is determining who the users are. The main users are, of course, the disabled people who travel by train and will actually use the robot. This project will focus on the user needs by means of a questionnaire. To avoid a technology push, it is very important to get a thorough view of the users' perspective on the current situation and of their needs. The questionnaire focuses on the way they are helped right now and what the advantages and disadvantages of the current state are. Their initial feelings about being helped by a robot in the future, and in what way they would prefer to be helped, are important parts as well. Questions like these are incorporated in the questionnaire.

What is the NS' perspective?

The NS is a very important stakeholder in this project, for they are the ones that will eventually need to pay for the research and manufacturing of the robot. Moreover, the NS staff is also the current user; in the current situation, they help disabled people board the train by means of a ramp. It is therefore very important to get a better view of what the operating NS staff thinks of our idea. For this purpose, a questionnaire for the NS personnel was made. In this questionnaire the train personnel are asked for their view on the current assistance and what they think could or should be improved. Their view on the idea of a robot helping disabled people is important as well.

The questionnaires

Summarizing the above, there are multiple reasons for which the questionnaires are designed:

  • To gain insight into the current situation with regard to traveling by train when being disabled. How does the current system work? How do people experience the current system?
  • In order to fine-tune our RPC's for the robot, the aim is to gain insight into the wants and needs of the actual users: NS staff and disabled people. What do they believe is necessary for the system to work efficiently? What do they miss in the current situation?
  • To improve the system for all its users, not only disabled people. We are therefore also curious to know how the operating NS staff experience the system. What can be improved for them, to increase their work efficiency and pleasure?

To find participants for this study, several steps were taken: a call was posted on Facebook for the target groups, personal networks were contacted and NS staff at Eindhoven station were approached in person. The questionnaires could be filled in online or on paper. We aimed for 5 participants in the disabled target group and 2 to 3 from the NS staff. These numbers are based on what is reasonable for the scope of this project; due to time constraints it is not an option to find large groups of participants. The fact that it is hard to find wheelchair-bound people who actually use the train was further confirmed by the low response rate to our call for questionnaires. The questionnaires were written and answered in Dutch.

Link to paper questionnaire for the disabled: Media: Enquete mindervaliden.pdf

URL to online questionnaire for the disabled: https://www.survio.com/survey/d/G1O9M1J3Y2Q3T3L0A

Link to paper questionnaire for NS staff: Media: enquete staff.pdf

URL to online questionnaire for NS staff: https://www.survio.com/survey/d/I8K8O0G2F7V4E9U6O

The questionnaire for the disabled

In this section all aspects of the designed questionnaire will be discussed. The questionnaire was designed to get insight into several aspects of the project:

The first two questions explore the current situation:

  • 1. How often do you travel by train?
  • 2. How much time does it take you to plan your train journey?

After this, several questions test the subjective experience of the current situation.

  • 3. How do you experience the current NS travel assistance service?
  • 4. What do you think could be improved in the current situation?
  • 5. Are you capable of getting on to the ramp without help?
  • 6. Do you experience difficulties in planning your train journey with regard to the NS travel assistance service?
  • 7. How much time do you generally need when changing trains?
  • 8. How would you rate travelling by train from 1 to 10, with 1 not pleasant and 10 very pleasant?
  • 9. Can you clarify your answer for question 8?

After this set of questions, the new concept is introduced and tested:

  • 10. For this project, we aim to develop an automated system that functions as a ramp. By pressing a button on the platform, the robot will drive towards the train entrance and fold out to form a ramp. What would you think of being aided by a robot or automated system when entering the train?
  • 11. What are important aspects of good service for you?
  • 12. What type of help with boarding the train would you appreciate most? (For example: ramp, lift, etc.)

In the final question we leave space for the participant to write down remarks or tips:

  • 13. Do you have any tips or remarks with regard to the current or new system?

The questionnaire for NS staff

This questionnaire focuses more on the specific experience of NS staff working with the system and how it could be improved in their view.

The first questions explore the current situation:

  • 1. What is your specific function at the NS?
  • 2. How often do you help disabled people boarding or leaving the train? (1x per week, 1x per month, 1x per year, etc.)
  • 3. In what way do you help disabled people boarding and leaving the train?

The next set of questions explores the subjective experience with the current system.

  • 4. What are the advantages of the current system?
  • 5. What are the disadvantages of the current system?
  • 6. How would you rate the system with regard to NS travel assistance from 1 to 10, with 1 very negative and 10 very positive?
  • 7. What could be improved in the current situation, to make your working experience more pleasant?

The next questions introduce the new concept.

  • 8. For this project, we aim to develop an automated system that functions as a ramp. By pressing a button on the platform, the robot will drive towards the train entrance and fold out to form a ramp. What is your first reaction to a system like this?
  • 9. How do you experience the current time needed to help a disabled person board or leave the train? Too long/too short?

The final question leaves room for the participants to write down any thoughts on the topic:

  • 10. Do you have any tips or remarks with regard to the current or new system?

Thematic analysis of the results of the questionnaires

In the thematic analysis, we gather all codes from both questionnaires, and by combining the codes we will create themes.

An overview of the codes used can be found here: Media: Overview of codes used.pdf

An overview of the filled-in questionnaires and their coding can be found here:

Questionnaires for disabled people: Media: Enquete ingevuld disabled.pdf

Questionnaires for NS staff: Media: Enquete ingevuld staff.pdf

Method

As described above, participants for this study were found by employing social media, contacting our personal networks and approaching people at train stations.

After the questionnaires were filled in, they were coded. These codes were subsequently evaluated to create themes.
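The coding step can be sketched as a simple aggregation: each coded answer is grouped under a broader theme. Note that the code labels and the code-to-theme mapping below are purely illustrative; the actual codes are in the linked "Overview of codes used" PDF.

```python
from collections import defaultdict

# Hypothetical (respondent, code) pairs extracted from the questionnaires.
coded_answers = [
    ("disabled_1", "faster transfers"),
    ("disabled_2", "little room in train"),
    ("staff_1", "fails during disruptions"),
    ("staff_1", "short reporting time"),
]

# Assumed mapping from individual codes to broader themes.
code_to_theme = {
    "faster transfers": "Improvements",
    "little room in train": "Disadvantages of current system",
    "fails during disruptions": "Disadvantages of current system",
    "short reporting time": "Advantages of current system",
}

# Group the coded answers per theme.
themes = defaultdict(list)
for respondent, code in coded_answers:
    themes[code_to_theme[code]].append((respondent, code))

for theme, entries in themes.items():
    print(theme, "->", [code for _, code in entries])
```

In the actual analysis this grouping was of course done by hand; the sketch only illustrates the direction of the mapping (many codes, few themes).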

Results

Improvements

Combining the codes from the questionnaires related to improvements for the current system yielded the following: first of all, one of the disabled people wanted it to be possible to change trains faster. Moreover, better accessibility of all stations, at every time of the day, is desirable. The staff wanted better operability of the current bridge and better communication with the taxi company.

(In the current situation, NS travel assistance is only available at the larger, manned stations. If a disabled person wants to travel to a smaller station, he has to contact an NS-connected taxi company: those taxi drivers have access to the bridge and are appointed to help disabled people get off the train. Unfortunately, especially in case of disruptions in the train schedule, clear communication with the taxi company is lacking.)

Current situation

This theme encompasses all codes related to the current situation. From the questionnaires we learned that multiple parties are involved: the disabled person, NS service staff, NS conductors and, as mentioned above, appointed taxi companies. From the codes we have established that the service staff escorts the disabled person at the bigger stations, taxi companies do so at the smaller stations, and conductors are mainly involved in maintaining safety at all times. The conductor mentioned helping a disabled person about 3 times a day, whereas the service staff helps over 20 times a day.

Advantages and disadvantages of current system

In this theme we merge all codes related to advantages and disadvantages of the current system. Staff reported the following advantages: the time needed for reporting is short, the time needed for helping is short, and the location of the disabled person in the train is clearly communicated. A disabled person reported arriving on time at the desired station as an advantage.

The disabled people reported having little room in the train as a disadvantage. Staff reported that the system fails in case of train disruptions and that it takes too much time.

New concept

By aggregating all codes, in this theme we can look at the opinions on the new concept and the important aspects of the system. Disabled people reacted positively, whereas NS staff were generally negative towards the concept. The staff made clear they feared losing their job to the robot, and they considered it impossible for an automated system to work because of crowdedness. The disabled people mentioned an extending shelf and a lift as possible ideas. Aspects the system should have according to them are: it should be fast, it should give the disabled person influence on the situation, and it should be suitable for different users with different disabilities. Of course, the above could be combined with the improvements for the current system.

Conclusion

The above themes can be aggregated and examined to discover new relations between themes. The main result of the questionnaires is a better view on the current situation, and the identification of user requirements for the new concept.

Current situation

In the theme Current Situation we have established how exactly the current situation works. New information for us is the involvement of taxi companies.

User requirements

Multiple themes above can be used to identify user requirements. The new concept should build on the advantages of the current system (at the very least it should not take away those advantages), it should avoid the disadvantages of the current system, it should incorporate the improvements mentioned, and it should take into account the aspects mentioned under New concept. Combining these, we can make a list of all user requirements:

  • The new concept should make changing trains as a disabled person faster
  • The new concept should grant accessibility to all train stations and all platforms at all times
  • The operability of the new system should be good for staff
  • The system should communicate with appointed taxi companies, depending on what their role is (in the ideal situation taxi drivers no longer need to operate the bridge)
  • The new system should not have a longer reporting time than in the current situation
  • The new system should not take longer in helping the disabled person than in the current situation
  • It should be clear where in the train the disabled person is located
  • The new system should have enough room for the disabled person to sit within the train
  • The system should work at all times, also in case of train disruptions
  • The system should work even in very crowded situations
  • The new system should work as fast as possible
  • The new system should grant the disabled person influence on the situation
  • The system should be suitable to all types of disabled people with different disabilities

Other stakeholders

It is important to consider all other stakeholders in this project while designing. Other, non-wheelchair-bound train passengers are also stakeholders; they should not be disadvantaged by the new wheelchair assistant. Therefore, it is undesirable that the new robotic assistant causes any (additional) train delay. Moreover, it is important that there is still room for the other passengers to board and stay on the train. The boarding and exiting of these passengers also needs to be taken into account, to see whether the new design has an impact on them.

The government is also a stakeholder, as it is the institution responsible for making society as accessible as possible for the handicapped. It may therefore be involved in partial funding of the project. More specifically, the Ministry of Infrastructure and Environment is involved as a stakeholder.

Conclusions of the questionnaires

From the questionnaires we identified a specific user need: a respondent mentioned wanting more influence on the process. Since we are designing in an iterative manner, our concept was updated after the questionnaire results were known: it was decided to incorporate the concept of shared control into our autonomous robot. To close the loop, in an ideal situation we would want to test this interpretation of the user need with the user. However, due to time constraints and a lack of interest from disabled people in answering questions on the topic, we were unable to check this interpretation with them. A literature study can, however, be performed to find out more about disabled people, shared control and a lack of influence. This can be found below in this wiki.

Current Situation

The current model

In the following section you will be guided through the current process of boarding a train when you are disabled.

  • First, you have to contact the NS to apply for NS travel assistance. This can be done in two ways: by telephone or online.
    • In case you want to do it online, you have to register with the NS once, after which you plan your trip. Then, you can ask for travel assistance.
    • By telephone you simply pass on your planned trip, after which the NS can provide travel assistance.
  • In both cases the disabled person has to contact the NS at least one hour before travelling. Moreover, a disabled person cannot travel everywhere; travel assistance is only available at about 100 of the 400 train stations in the Netherlands.
  • After applying, you have to be at a pre-set meeting point at least 15 minutes before your train departs.
  • The travel assistant will then take you to the right platform and help you enter the train.
  • This happens as follows: the travel assistant, often together with another NS employee, takes the ramp (which is on wheels) and rolls it towards the desired train entrance.
  • Then, they fold out the ramp and align it with the train's height. This happens after all other passengers have entered the train, ideally at a train entrance where there is plenty of room for the disabled person to stay during the trip.
  • After docking the ramp, the disabled person either drives up the ramp himself or the NS travel assistant helps.
  • As soon as the person is inside the train, the NS staff begins to fold up the ramp again and they bring it back to the original position on the platform.
  • The NS then contacts other staff at the destination station and passes on in which train compartment the disabled person is.
  • Then, at the destination station NS staff can take the ramp again and simply have to wait for the person to arrive.
  • As soon as the train arrives, they help the person leave the train in a similar way as they help them with boarding.
  • Often, the NS travel assistant helps the person during the entire process, which means he only stops helping as soon as the person is off the platform and ready to continue their journey in the city of destination.

Other Countries

A modern wheelchair lift

Most railway companies in other European countries are bound by law to accommodate disabled people on their trains. Trains like the Eurostar have dedicated spaces in the first-class cars and allow an additional passenger to accompany the wheelchair-bound customer. Most railway companies work like the NS system: you have to plan your trip ahead of time (online or through customer service) so the railway employees can help you along your trip. However, not all trips are possible, because railway companies like Deutsche Bahn require a minimum transfer time between trains; some passengers therefore have to wait for the next train, because a 10-minute transfer is not feasible.

Either ramps or mobile wheelchair lifts are used. These are stored on the platform, chained to a pole or wall, and the railway employee puts the ramp in place for you. When it is connected to the train door, the railway employee pushes you on board or places you on the mobile wheelchair lift. When you are on the lift, both sides are closed and the employee presses a button to align the height with the train door. Once the lifting is done, the front ramp goes down and you can ride into the train on your own. It is also possible for trains to have a ramp inside the train floor that extends when a button is pressed. Companies that use the wheelchair lift include VIA Rail (Canada), TGV (France), SBB (Switzerland) and Trenitalia (Italy).

Starting in 2013, a test was done in Den Bosch with LED lights on the platform to show where the train will stop and where the doors are located, including which wagons are full and which are empty. The results of this experiment have been included in the NS app, which now shows how long trains are and how busy certain trains are.

Robotic Solution Specifications

RPC's

In this section the requirements, preferences and constraints of the complete solution are stated. The complete solution is only described and visualized within the scope of this wiki page; hence most of the RPC's are satisfied in theory, not in practice.

Requirements

  • Completely safe to use for the disabled person, but also completely safe for other passengers on the train.
  • Able to be used continuously; if not, it will cause a delay for the train or the person misses the train.
  • Easy to use: disabled or elderly people have to be able to operate it.
  • Completely autonomous, meaning that the disabled person can enter and exit the train all by themselves.
  • The solution should not be the main cause of train delay.
  • The solution should enable faster boarding and deboarding than the current approach.
  • The solution must be resistant to weather conditions and aging.

Preferences

  • Let the person board and deboard as fast as possible.
  • A solution that is as cheap as possible in terms of both research costs and manufacturing costs.
  • As comfortable as possible for the user.

Constraints

  • The solution has to fit every different train; consider the width of the doors and the height of the entrance.
  • The solution has to fit on the platform.
  • The solution needs a power source.
  • The time available to board the train equals 4 minutes.
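The 4-minute constraint can be expressed as a simple time-budget check over the steps of one assistance cycle. The step durations below are illustrative assumptions, not measured values:

```python
# Illustrative step durations in seconds (assumptions, not measurements).
steps = {
    "drive_to_door": 40,
    "dock_and_fold_out": 30,
    "board": 60,
    "fold_up_and_return": 45,
}

BOARDING_WINDOW_S = 4 * 60  # constraint: at most 4 minutes to board

total = sum(steps.values())
within_budget = total <= BOARDING_WINDOW_S
print(f"total {total}s, within budget: {within_budget}")
```

A check like this makes explicit which step dominates the budget, which is useful when trading off driving speed against safety margins.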

Idealized solution

The idealized solution has to fulfill every requirement, preference and constraint. The biggest goal is that disabled people are able to travel all by themselves. This means that they can reach the platform and use the automated assistance system to exit and enter the train without any staff being involved.

[Images: Chairliftdown.JPG, Wheelchairfinal.png, Chairliftup.JPG]

  • Before using the robot, the disabled person has to use the app (see information below) to enter his trip and to reserve the robot at the platform of departure and arrival.
  • When someone arrives at a train station the first thing they need to do is to get to the right platform with the use of the elevators that are already present at every station.
  • The disabled person needs to check in like every other person that uses the train. People who need assistance entering or exiting the train have a special OV-card, which can be used at the robot's touchscreen to activate the automated assistance. Since the user has entered his trip in the app, the robot knows which side of the platform it may have to drive to, in case it should drive autonomously.
  • The disabled person enters the robot with his/her wheelchair. He/she can select on a touchscreen whether or not to drive themselves, or let the robot drive.
    • If you want to drive yourself, you can use the joystick to navigate towards the train doors. Where one should position oneself with the robot before docking is depicted in the figure below. The picture is explained in detail in the next chapter.

Picture 1

With the use of shared control, he/she can drive towards the train, and in case the person navigates too close to an obstacle, the system will take over and redirect the robot, passing the obstacle. The person can choose whether to pass the obstacle on the left or the right side. When the train ultimately arrives, the robot will dock autonomously.
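This shared-control behaviour could take a form like the following sketch: the user's joystick command is passed through unchanged until an obstacle comes within a safety radius, at which point the system caps the speed and steers around the obstacle on the side the user selected. The threshold, speeds and simple geometry here are all assumptions, not a worked-out controller:

```python
import math

SAFETY_RADIUS_M = 1.0  # assumed distance at which the system takes over

def shared_control(user_cmd, obstacles, robot_pos, pass_side="left"):
    """Return (forward_speed, turn_rate), overriding the user near obstacles.

    user_cmd: (forward_speed, turn_rate) from the joystick.
    obstacles: list of (x, y) obstacle positions; robot_pos: (x, y).
    pass_side: the user's choice of which side to pass an obstacle on.
    """
    nearest = min(
        (math.dist(robot_pos, o) for o in obstacles),
        default=math.inf,
    )
    if nearest > SAFETY_RADIUS_M:
        return user_cmd  # user keeps full control
    # Override: slow down and steer around the obstacle on the chosen side.
    turn = 0.5 if pass_side == "left" else -0.5
    return (min(user_cmd[0], 0.2), turn)

# Far from obstacles, the user command passes through unchanged:
print(shared_control((0.8, 0.0), [(5.0, 5.0)], (0.0, 0.0)))  # (0.8, 0.0)
```

The key property is that the user's input is respected whenever it is safe, which matches the "influence on the situation" need identified in the questionnaires.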

    • If the disabled person does not want to drive, the robot will autonomously drive towards its docking position. While moving, the robot should always pick the shortest but also the safest path to the door. By safest is meant that the robot should never hit any obstacles or passengers. To realize this, the robot needs to be aware of its own position and of the positions of obstacles and people. The robot should therefore be able to constantly adapt its driving path to avoid moving obstacles as efficiently as possible. This is a very important aspect, which will be elaborated on later in this wiki.
  • After docking, the person can enter the train.
  • In the meantime, at the desired destination, the robot is already stationed at the door where the person wants to exit the train (this information is transmitted through the OV pole).
  • When the person then arrives, he can leave the train immediately.
  • There may be modifications in the train's timetable, causing the train to arrive at a different platform. Since the disabled person's trip is registered with the NS, they know where the person is heading and that a robot at the 'old' platform was reserved. In case of such a platform change, the system can automatically reschedule, making sure a robot is available at the new platform. As there are two robots on every platform, the robot can never already be taken there: if the train can enter that platform, no other train is present, which guarantees at least one free robot. There may be a disabled person on the other side of the platform using a robot, but there are two robots, and only one disabled person can travel per train per time frame.
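The "shortest but safest path" requirement from the autonomous-driving step above is essentially a path-planning problem. A minimal sketch on an assumed grid model of the platform (breadth-first search over free cells; in the real system this would be re-run whenever moving obstacles such as passengers change the map):

```python
from collections import deque

def plan_path(grid, start, goal):
    """Shortest obstacle-free path on a grid (0 = free, 1 = obstacle).

    Returns a list of (row, col) cells from start to goal, or None
    if the goal is unreachable (e.g. the platform is too crowded).
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the parent links back to start, then reverse.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

# Toy platform map: one obstacle row with a gap on the right.
platform = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
print(plan_path(platform, (0, 0), (2, 0)))
```

Breadth-first search guarantees the shortest path on a uniform grid; a production planner would add clearance margins around people and cost terms for safety, but the replanning loop stays the same.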

Safety regulations & Patent Check

The concept needs to comply with two sets of safety regulations.

Safety regulations for autonomous driving vehicles:

For autonomous driving vehicles there are, to this day, no universal laws or regulations. A congress[1] near the end of this year should shed some light on this issue. For now we think it suffices to make the vehicle as safe as possible, so that the risk of a hazard while driving is minimal. The people around the vehicle should be aware that it is driving and have to move out of the way; this can be achieved with an alarm and floodlights.

Safety regulations for lifting people:

There are many rules and regulations for lifting a person, but most of them are simple, like: the lift should be designed to minimize the risk of ending up in a hazardous situation. The full list of rules and regulations can be found here: [2.0], [2.1], [2.2]. The important things we should keep in mind regarding these safety regulations are that in no case may the vehicle flip over while lifting a person, and that the person must not be able to drive off the lift while it is going up or down.

[1]: http://www.autonomousregulationscongress.com/

[2.0]: http://www.hse.gov.uk/work-equipment-machinery/lift-persons.htm

[2.1]: http://www.hse.gov.uk/work-equipment-machinery/machinery-directive-essential-requirements.htm

[2.2]: http://www.legislation.gov.uk/uksi/2008/1597/schedule/2/part/1/made

There are no existing patents regarding the basic idea of autonomous train assistance. This patent check was done by entering the search terms 'wheelchair, train' and 'wheelchair, lifting' into the US Patent & Trademark Office search tool, which includes international trademarks. [1] Thus, the robotic solution does not need to take patent law into account regarding the robotic wheelchair lift concept.

Comparing Current and New Solution

In this section, we compare the current and new solution and check whether the new solution complies with the RPC's.

  • Completely safe to use for the disabled person but also completely safe to other passengers on the train.

For the current solution this requirement is guaranteed: since train staff is involved, it is completely safe to use.

  • Able to use continuously, if not it will cause delay for the train or the person misses the train.

Since only one ramp is available per platform, it could cause a delay when two trains arrive at the same platform at the same time. This requirement is therefore not met by the current solution.

  • Easy to use, disabled or elderly people have to be able to operate it.

Currently the ramp is not operated by the disabled people but by the train staff, so this requirement cannot be compared.

  • Completely autonomous, this means that the disabled person can enter and exit the train all by their self.

As said before, the ramp is operated by the train staff, and therefore the current solution does not comply with this requirement.

  • The solution should not cause delay for other people who want to board the train.

The current solution in most cases does not cause a delay for other people, since the staff wait until everyone else has entered the train before they help the disabled person with boarding.

Conceptual designs

In order to arrive at a good solution for our problem statement, conceptual designs need to be made. Five different conceptual designs were formed, and on the basis of the RPC's the best conceptual design will be chosen. To come to a preliminary design, the best conceptual design is adapted to fulfill the requirements and preferences of the users even further. In this section the five conceptual designs are given, followed by the preliminary design.

Design 1

Design 1 involves an autonomous vehicle which can automatically drive to a certain location on the platform. The vehicle only drives in a straight line parallel to the railway, and therefore one robotic vehicle is needed per platform. The robot has wheels and an extendable shelf that can be attached to the train when the doors are open. When someone wants to use the robot to board a train, one simply walks up to the robot and pushes a button. The robot is positioned at the rear or front end of the platform, depending on where the nearest elevator is located. When the train has arrived, the robot moves to the door nearest to its location: either at the rear of the train or at the front (depending on which direction the train travels).

The robot is positioned using sensors in the doors that let it know where the doors are located exactly. When the doors are opened, the robot unfolds its ramp and the person can board the train. Through a pressure sensor in the shelf, the robot knows whether the person has entered the train. After the person has entered the train, the robot lifts the shelf up again and drives back to its original position. When the person inside the train wants to exit at a certain station, a (not yet existing) extension of the NS app can be used. The app shares this information with the robot, so the robot knows in advance that someone wants to exit the train. The robot can move into place when the train arrives (it can start moving as soon as the door sensor is within its reach).

When the doors open, the shelf is put in place again and the person can exit the train. When the person has left the shelf and is on the platform, the robot again lifts the shelf and returns to its original position. To make sure the robot has enough power, there will be a power station at the robot's home position. The robot can attach to the power station and charge its batteries (in the same way as a robotic lawnmower).
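The boarding cycle of Design 1 can be summarised as a small state machine driven by its sensors (button, door sensor, pressure sensor). The state and event names below are illustrative assumptions, not a specification:

```python
# Assumed states and sensor events for the Design 1 boarding cycle.
TRANSITIONS = {
    ("idle", "button_pressed"): "driving_to_door",
    ("driving_to_door", "door_reached"): "ramp_down",
    # The pressure sensor in the shelf signals the person has crossed it.
    ("ramp_down", "pressure_released"): "ramp_up",
    ("ramp_up", "ramp_stowed"): "returning_home",
    ("returning_home", "home_reached"): "idle",  # dock and recharge
}

def step(state, event):
    """Advance the state machine; unexpected events are ignored."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["button_pressed", "door_reached",
              "pressure_released", "ramp_stowed", "home_reached"]:
    state = step(state, event)
print(state)  # "idle": one full cycle, back at the charging station
```

Ignoring unexpected events (rather than faulting) is a deliberate choice here: a spurious sensor reading should not leave the ramp half-deployed.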

Design 2

Design 2 uses a crane to lift wheelchairs on and off the train. With this design there is no need for a vehicle on the platform. There are designated doors for people in wheelchairs, where the crane is mounted on the train. The crane has a lifting cable with four universal clamps that can be locked onto the wheels of the wheelchair. The advantage of this concept is that it needs nothing on the platform that could obstruct other people. Getting off the train is just as easy as getting on: you do not have to worry whether the crane is on the right platform, at the right door and at the right time, because the crane travels with you on the train. The disadvantage of this design is that you need to attach the clamps to the wheelchair yourself.

It does not work autonomously: if you are incapable of operating it yourself, you still need someone to help you. The second disadvantage is that all trains need to be modified, which takes a lot of time and will probably cost a lot of money.

Design 3

Design 3 is in many ways similar to the ramp currently used at NS stations. It consists of two ramps that are folded upwards; when someone wants to use it, both sides flip down and level with the desired heights. On one side the ramp matches the height of the train entrance, and on the other side the height of the platform. A person in a wheelchair can thus simply drive up or down to enter or leave the train. When the ramp is folded upward, a simple user interface could be installed: a screen that allows interaction between user and platform, where the user can enter an 'order' after which the robot performs its duty. The robot is driven by two large wheels, one on each side, which allow for easy rotation within the platform environment, through which it navigates autonomously. The robot is stationed at a single spot per platform, where it can recharge itself after serving. The ramp has raised edges to prevent anyone from falling off.

Design 4

This design focuses on the docking problem for an autonomous robot. The wheelchair boarding system, as mentioned, has three main stages: the alert, dock and board stage. In this design the vehicle uses a wireless network and signal latency to triangulate its position. Beacons are placed in the platform; they could also be placed in the docking station, but that is probably less accurate. The robot has two sender/receiver modules, one at the front and one at the back. They send signals to the beacons, which resend them. From the latency information the robot can triangulate its position and orientation. At the same time there is a sender/receiver module under the step of the train, which also pings the beacons. The beacons then triangulate its position and send this information to the robot. At this point the robot knows both where it is and where its goal is. In order to move to the goal, at least three things are required:

Solutions

  • The motion has to be planned within the kinematic constraints of the robot. A quintic polynomial could be used to control the start and end values of position, velocity and acceleration. The problem is that the robot is constrained in its movement, so orientation matters. We could describe the path as a series of robotic links, with constraints between the links such that the robot can always move from one to the next.
  • The motion should be tracked while suppressing disturbances. This could be done using the kinematic equations of motion represented in state space.
  • It has to move around obstacles, human and non-human. This could be done by planning a path around them: proximity sensors build a map of the nearest obstacles. One possible approach is a controller that tracks the path but starts deviating from it as the sensors pick up obstacles; instead of striving for zero tracking error, the allowed error could grow with sensor input. Human obstacles are mobile, which means they could move aside if urged to.
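The quintic-polynomial idea in the first point can be sketched as follows. The boundary values and the 10-second horizon below are made-up example numbers, not part of the design; a quintic has exactly six coefficients, so it can match position, velocity and acceleration at both ends of the motion:

```python
import numpy as np

def quintic_coeffs(x0, v0, a0, xf, vf, af, T):
    """Solve for the six coefficients of x(t) = c0 + c1*t + ... + c5*t^5
    matching position, velocity and acceleration at t = 0 and t = T."""
    A = np.array([
        [1, 0, 0,    0,      0,       0],        # x(0)  = x0
        [0, 1, 0,    0,      0,       0],        # x'(0) = v0
        [0, 0, 2,    0,      0,       0],        # x''(0) = a0
        [1, T, T**2, T**3,   T**4,    T**5],     # x(T)  = xf
        [0, 1, 2*T,  3*T**2, 4*T**3,  5*T**4],   # x'(T) = vf
        [0, 0, 2,    6*T,    12*T**2, 20*T**3],  # x''(T) = af
    ], dtype=float)
    b = np.array([x0, v0, a0, xf, vf, af], dtype=float)
    return np.linalg.solve(A, b)

# example: move 5 m along the platform in 10 s, starting and ending at rest
c = quintic_coeffs(0, 0, 0, 5, 0, 0, 10)
x = lambda t: sum(ci * t**i for i, ci in enumerate(c))
```

With rest-to-rest boundary conditions this reduces to the familiar minimum-jerk profile, so the motion is smooth enough not to jolt the wheelchair user.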

The solutions posed above are based on our current knowledge. In order to find smarter solutions, we might look into "truck docking", which is investigated by truck companies and poses a similar problem.

Design 5

Design 5 is an autonomous mobile lifting robot on four wheels that helps the disabled person board the train without any assistance from railway employees. The robot is stationed at a charging hub on each platform and is activated through the NS app plus physical interaction with the OV-chipcard. Once the trip is planned and you "log in" on the robot with your chipcard, the robot moves itself to where the train will stop, ideally already aligned with a train door. When the train arrives, the robot autonomously aligns itself with the door and opens its back gate so the disabled person can ride onto the lifting platform.

Then the back gate closes for safety and the person is lifted. At the right height, the front gate is lowered so the wheelchair can move over it into the train. When the person has left the robot, it should detect this, return to its original position and then move back to the charging hub as soon as possible so other passengers can use the door. This design is connected to wifi to obtain trip information from the NS app and accurate train arrival times. In order to be ready to dock when the train arrives, the disabled person has to activate the robot to move to the correct position 5-10 minutes before the train's arrival. It has to avoid passengers and bags on the ground on its own; ideally this problem is limited either by introducing a "wheelchair robot path" on the ground, so people know where not to place bags, or by front sensors that enable it to manoeuvre around these objects.

Because it has accurate trip information through the NS app, the robot knows on which arriving trains a wheelchair user is present, so it can be ready to help this person get off the train completely on its own, without any physical "log in" with the NS app.

Preliminary design

The preliminary design is essentially a combination of designs 1 and 5. It is an autonomously driving vehicle that can be placed at each platform. The vehicle has four wheels and a horizontal plate that can be lifted up and down to reach the right height to enter the train. It can only drive in a straight line parallel to the rails. It is placed at one end of the platform, which we will call its homing position.

At its homing position a power station is placed; the robot always returns to the homing position and attaches itself to the power station. The robot has to be equipped with different kinds of sensors. For example, it should be able to sense obstacles in its drive path: when it senses something in its way, it should stop and give some kind of signal to let its surroundings know that something is blocking it. Another design challenge is how the robot can locate the door where the person will enter or exit the train. A first idea is to equip the very first and last door of every train with a sensor; these doors are then used as the entrance for disabled people. An advantage of this solution is that the robot can always choose the door nearest to its homing position, so fewer people will walk in its driveway and the time to reach the door will be short.

Proof of concept

As described in the chapter on current solutions, the idea of lifting a plate in order to enter the train is already used in some countries. However, there this principle is used in the same way the ramp is used in the Netherlands: the plate still has to be operated by the train staff, so disabled people still cannot enter and exit a train by themselves. To prove that our idea could work in real life, a prototype is going to be built. Since it would be too extensive to build the entire robot, the focus will be on determining how to reach a certain location. The lifting of the plate is already proven to work, since it is already in use. The part of our idea that still needs to be proven is how the robot is activated and how it knows which position it has to reach. The same applies when someone wants to exit the train: how does the robot locate the position of the person who wants to get off? To solve these problems, literature research will be carried out together with brainstorming. To prove that the robot can actually reach a certain position, the prototype has a set of requirements, preferences and constraints.

Robotic Solution Concept Explained

NS App integration

The disabled person uses the app to enter his or her trip; this can easily be implemented in the existing app. The first picture shows the app screen where one plans a trip. In the second picture one selects the time frame in which to travel; a wheelchair logo indicates which trips are possible, i.e. whether the robot is still available (not already reserved) in that time frame. The third picture shows the button at the top with which the robot can subsequently be reserved for the trip.

[Pictures 1-3: app screenshots of the trip planner, time-frame selection and reservation button]

Interface on robot and check-in

At the height of an average person in a wheelchair, special armrests are mounted which the person can use to interact with the system. The picture below shows the right-hand armrest, which holds the joystick. The person can control the wheelchair robot with this joystick.

[Pictures: armrest with joystick; OV check-in touchscreen]

On the left side of the wheelchair, an integrated touchscreen is visible to the user. This touchscreen acts as a panel to activate the robot with the OV-chipcard (picture 1) and as a navigation tool. The user can toggle between joystick mode and autonomous mode with this control. In autonomous mode, navigation options are shown when objects are encountered.

Robot-surrounding interaction

Personal distance

When approaching and passing other people at the train station, the robot should take into consideration the concept of personal space. Moreover, based on past research on the matter, we should devise an ideal way of approaching other people. Research by Brandl et al. (2016) has looked at the design of the phase when a personal-service robot approaches a human being. Although in our case the robot merely passes other people, multiple important insights from this research should be incorporated in our design:

  • In human-human interaction, Hall (1966) roughly distinguished five zones, which were later described by Walters et al. (2008).
  • [Figure: Hall's personal space zones]
  • Research by Koay, Syrdal et al. (2007) found that a mechanoid robot is allowed to come closer to humans than a humanoid robot. Our robot is mechanoid, which decreases the amount of personal space required.
  • Butler and Agah (2001) found that a fast approach by a robot (1 m/s) made participants feel less comfortable than a slow approach (0.25 or 0.38 m/s). We should therefore be aware that we cannot increase the robot's speed without limit to speed up the process; apparently a slower approach is more human-friendly.
  • Zlotowski et al. (2012) studied the approach direction of walking humans, which is also highly relevant for this project. They found that humans prefer to be approached from a front-left or front-right direction rather than head-on. However, as our robot will be dealing with many people in a highly dynamic environment, the extent to which this ideal angle can be achieved is limited.
  • Brandl et al. (2016) studied the approach distances people accepted while standing, sitting and lying, at three approach speeds: v1 = 0.25 m/s, v2 = 0.5 m/s and v3 = 0.75 m/s.
  • [Graph: accepted approach distances at the three speeds]
  • As the graph shows, at a speed of 0.5 m/s the mean accepted distance is about 1 meter, which implies our robot should not come closer to people than 1 meter. However, as we are dealing with highly crowded and dynamic situations, this is not a realistic option. We should therefore employ other techniques to decrease the amount of personal space desired if we want to set this bar lower than recommended in this study.
  • A study by Koay et al. (2014) researched whether using LED display colours to signal movements would decrease the amount of personal space needed. This hypothesis was not supported, suggesting it would not make a difference.

We can draw several conclusions from the above information:

  • The ideal minimal distance for our robot is 1 meter at a speed of 0.5 m/s, which will be the robot's average speed. At a lower speed of 0.25 m/s, the distance drops to about 0.8 meter. A full meter is not feasible in a busy train station environment; we will lower it, but 1 meter remains our anchor and we must be careful not to lower the distance too much. A reasonable distance may be 0.6 meter: according to Hall's research (1966) this falls within the personal zone without intruding on the intimate zone.
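The distance-speed trade-off above can be turned into a simple speed policy. The comfort points (0.8 m at 0.25 m/s, 1.0 m at 0.5 m/s) come from the studies cited above; the linear interpolation between them and the hard stop below our chosen 0.6 m minimum are our own assumptions:

```python
def speed_limit(clearance_m):
    """Map the free clearance to surrounding people (in metres) to a speed
    cap (in m/s), interpolating between the reported comfort points
    (0.8 m -> 0.25 m/s, 1.0 m -> 0.5 m/s). Below our chosen 0.6 m minimum
    the robot stops entirely."""
    if clearance_m < 0.6:
        return 0.0               # inside our minimum distance: stop
    if clearance_m >= 1.0:
        return 0.5               # full comfort distance: average speed
    if clearance_m <= 0.8:
        return 0.25              # tight but allowed: creep at the slow study speed
    # linear interpolation between (0.8 m, 0.25 m/s) and (1.0 m, 0.5 m/s)
    return 0.25 + (clearance_m - 0.8) * (0.5 - 0.25) / (1.0 - 0.8)
```

The exact shape of this curve would need validation with users, but it encodes the principle that the robot slows down as it gets closer to people.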
  • Since our robot is mechanoid, it is allowed to come closer to humans than humanoid robots.

Traversing the platform

While traversing the platform, we should keep in mind two requirements:

  • If the robot is driving (autonomously), its direction should be clear at all times and communicated to the other people at the train station, to avoid collisions. The robot should make clear what its future actions are. To illustrate: if the robot will turn left in less than 3 seconds, arrows should already point to the left, giving about 3 seconds of advance notice.
  • If the robot is driving, other people at the station should not be walking in front of it or closely behind it.

To find out the best way of fulfilling these requirements, we first take a look at the state-of-the-art, which we may draw inspiration from.

Vodafone Smart Jacket

The Vodafone smart jacket is intended to increase the visibility of cyclists in traffic in the dark and to improve traffic safety. The jacket is connected to your smartphone, and before cycling you plan your trip on your phone. By actively tracking your location during the trip, the jacket indicates your upcoming direction with an illuminated red arrow on the back; see the figure below. So if the cyclist intends to turn right in the near future (about 30 meters ahead), other road users know his intentions. This is still a prototype and has not been deployed in society, so we cannot draw many conclusions about the effectiveness of the concept. However, the arrow indicating the cyclist's direction serves as an inspiration for our robot.

[Figure: Vodafone smart jacket with illuminated direction arrow]

Nao robot

The well-known Nao robot usually indicates its direction by looking toward it. Perception of the robot's gaze direction is crucial for this to work. A study by Torta (2014) indicated that a '3D head is needed for mimicking gaze direction' and that 'head orientation is sufficient to elicit eye contact'. Our robotic system has no 3D head, so we cannot draw inspiration from this. Moreover, Nao generally walks slowly, which limits the risk of collision; that technique is not very effective for our robotic system, as we aim to transport the disabled person as fast as possible.


Cars

A major example of an electronic system indicating a vehicle's direction is of course the car, by far the most familiar vehicle to people. A car's headlights are white or yellow, while its rear lights are red. Since cars are extremely common in everyday life, people are likely to associate white light with the front of a vehicle and red light with the back. This can be incorporated in our project.

Our design

Summarizing the above, we can design the following system to indicate the robot's direction to bystanders at the train station: the robot will have four lights indicating its direction. Two LED arrows are projected on the floor, at the back and at the front; two other LED arrows are displayed on the robot itself. Since many people do not watch the floor while walking, those additional arrows at eye level increase the robot's visibility. The arrows at the front and on the floor at the back illuminate the direction of travel. This system indicates direction in multiple ways:

  • The arrows point in the direction the robot is moving.
  • If the robot moves backward, the arrows flip and point backward too.
  • The colors on the ground attract attention, so people will see the robot moving.
  • The colors on the ground prevent people from walking in the illuminated space.
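A minimal sketch of how the displayed arrow could be derived from the planned path follows. The 3-second lead time matches the requirement stated earlier; the angle thresholds (an eighth of a turn either way) are assumptions for illustration:

```python
import math

def arrow_direction(path, t_now, lookahead=3.0):
    """path: a function t -> (x, y) giving the planned position at time t.
    Return 'left', 'right', 'forward' or 'backward' for the LED arrows,
    based on where the robot will be `lookahead` seconds from now,
    relative to its current heading."""
    x0, y0 = path(t_now)
    x1, y1 = path(t_now + 0.1)          # small step ahead gives current heading
    x2, y2 = path(t_now + lookahead)    # position ~3 s ahead
    heading = math.atan2(y1 - y0, x1 - x0)
    target = math.atan2(y2 - y0, x2 - x0)
    # signed angle difference, wrapped into (-pi, pi]
    diff = (target - heading + math.pi) % (2 * math.pi) - math.pi
    if abs(diff) < math.pi / 8:
        return 'forward'
    if abs(diff) > 7 * math.pi / 8:
        return 'backward'
    return 'left' if diff > 0 else 'right'
```

In the real system the same decision would drive both the floor projection and the eye-level arrows.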

The pictures below show two concepts the final design could use; more research is presented further on in this report. The black band shows one hypothetical option.

[Pictures 1-2: two concepts for the direction-indication lights]

Making bystanders aware of the robot

In this part we search for ways to make people move aside. The robot will use sound and warning lights to indicate its movement toward them. If the flow of people is in front of the robot, they have to evade as the robot moves forward. If the arc in which people evade the robot becomes bigger, the flow will probably go around the robot, but at a certain point flip to the other side of it. In addition, a green light shining on the ground at the back of the robot indicates a positive route, assisting the redirection of the flow to the back of the robot. Together this could make it possible for the robot to move toward the train.

Light

The colors of the lights were initially chosen to be red and green, but red might indicate danger rather than awareness; orange might therefore be more suitable. Safety is the main concern here: the safety of the surrounding people, but also of the disabled person. The green light on the back might not be that useful, as disabled people, especially the elderly, might not be able to move through a mass of people and could simply follow the robot. The green light might instead be an indication for other people to move behind the robot. The robot should also be safe for the flow of people to bump into.

Alarm

The alarm must not cause panic on the platform, so it should not sound similar to the sirens of police, ambulance, fire brigade or other emergency services. In fact, it might even be a good idea to play some familiar music, perhaps even piano music. This is already done in Taiwan for garbage collection: the garbage trucks play 'Für Elise' by Beethoven. People recognize it and go out to bring their garbage to the truck. It draws attention without being particularly agitating, and it might even make the platform slightly more friendly. The robot should indicate its approach with a sound. For the scope of this project it is not a priority to find the actual sound; however, we can identify its requirements:

  • It should be loud enough to be heard by every person on the station, including people who are hard of hearing or wearing headphones. It should however not be too loud: it should not cause hearing damage, annoyance, or startle people.
  • The sound should not be very 'alarm-like', as this may startle people and, more importantly, make them think of an emergency. The robot passing is obviously not an emergency, and the sound should therefore announce it in a serene but audible manner. Another option would be a robot voice signalling its passing, e.g. 'Please move!'.
  • To enhance pleasure in use, we could choose a song to announce the robot's passing. As the sound is meant to make people aware of the robot coming, we could for example use 'Go Your Own Way' by Fleetwood Mac.

Docking the train

As the robot closes the distance to the train door, it will most probably encounter a group of people waiting to board. It somehow has to make them move aside in order to reach the door. There is also a difference between the approach for boarding and for leaving the train: when boarding, the robot could position itself to the side of the door, so the way is free for leaving passengers; when someone is leaving the train, the robot could approach through the center so the wheelchair can exit before others board.



As mentioned, the train has information about the position of the disabled person in the train. When the disabled person scans his OV-chipcard inside the train, a beacon is activated; this could also switch on a red light on the door indicating that a disabled person will leave the train. The same could happen when the disabled person scans his OV-chipcard at the dock: the robot connects to the beacon at the train door, and the established connection triggers a red light on the outside of the door. This could make the final phase of docking less troublesome, because it will hopefully deter people from crowding the train door where the robot will dock.


Autonomous Driving

Localization and Orientation

The algorithm that lets the robot find its goal needs positions and orientations. This section elaborates on the triangulation of position and orientation. To triangulate the position, three beacons [math]\displaystyle{ (A, B, C) }[/math] are required. These beacons have positions [math]\displaystyle{ [A, B, C] = [(0,0), (0,B), (C,0)] }[/math], in which [math]\displaystyle{ B }[/math] and [math]\displaystyle{ C }[/math] are constant values, since the beacons do not move. Next we calculate the distance from each sender/receiver on the robot to all the beacons, using the timestamp [math]\displaystyle{ t_{X,i} }[/math] and the speed of the signal, here assumed to be the speed of light [math]\displaystyle{ c }[/math]. The index [math]\displaystyle{ i }[/math] refers to the sender/receiver node to which the distance applies, i.e. to the corresponding coordinate pair.

[math]\displaystyle{ r_{X,i} = t_{X,i} \cdot c }[/math] with [math]\displaystyle{ X \in [A, B, C] }[/math]

These distances and the law of cosines are used to calculate the x and y position:

[math]\displaystyle{ Y_i = \frac{B^2 + r_{A,i}^2 - r_{B,i}^2}{2B} }[/math]

[math]\displaystyle{ X_i = \frac{C^2 + r_{A,i}^2 - r_{C,i}^2}{2C} }[/math]

The angle is calculated using the two receiver positions on the robot:

[math]\displaystyle{ \theta = \arctan\left(\frac{Y_2 - Y_1}{X_2 - X_1}\right) }[/math]
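The position and orientation formulas can be checked with a short numeric sketch. The beacon coordinates and robot node positions below are arbitrary example values; we simulate the measured distances directly instead of timing a real signal:

```python
import math

# beacon positions: A = (0, 0), B = (0, B_Y), C = (C_X, 0)  (example values)
B_Y, C_X = 10.0, 8.0

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def triangulate(ra, rb, rc):
    """Recover (x, y) from the three beacon distances using the
    formulas derived above."""
    y = (B_Y**2 + ra**2 - rb**2) / (2 * B_Y)
    x = (C_X**2 + ra**2 - rc**2) / (2 * C_X)
    return x, y

def locate(p):
    """Simulate the measured distances for a node at p, then triangulate."""
    ra = dist(p, (0, 0))
    rb = dist(p, (0, B_Y))
    rc = dist(p, (C_X, 0))
    return triangulate(ra, rb, rc)

# two sender/receiver nodes on the robot give position and orientation
x1, y1 = locate((3.0, 4.0))
x2, y2 = locate((4.0, 5.0))
theta = math.atan2(y2 - y1, x2 - x1)
```

Using `atan2` instead of a plain arctangent avoids the quadrant ambiguity when the front node is behind or to the left of the rear node.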

One of the main remaining uncertainties is the determination of the distances to the beacons; that still needs further research.

Marvelmind beacons

After some research we found out that the drones team uses a Marvelmind Robotics beacon system. The system can be used with an Arduino. The company provides, among other things, the following information:

"Marvelmind Indoor Navigation System is off-the-shelf indoor navigation system designed for providing precise (+-2cm) location data to autonomous robots, vehicles (AGV) and copters.

The navigation system is based on stationary ultrasonic beacons united by radio interface in license-free band. Location of a mobile beacon installed on a robot (vehicle, copter, human, VR) is calculated based on the propagation delay of ultrasonic signal (Time-Of-Flight or TOF) to a set of stationary ultrasonic beacons using trilateration. Stationary beacons form the map automatically. No manual entering of coordinates or distance measurement is required. If stationary beacons are not moved, the map is built only once and then the system is ready to function after 7-10 seconds after the modem is powered

The system needs an unobstructed sight by a mobile beacon of two stationary or more stationary beacons simultaneously – for 2D (X,Y) tracking. The distance between beacons cannot exceed 30 m."

So in order to use this, we have to connect the modem to the computer and run their software. Meanwhile the Arduino can obtain position information from the mobile beacon using the UART & SPI protocols. Example scripts using the extra ports and beacons are available and are probably sufficient to connect the positioning system. The problem is that it requires a lot of programming knowledge to understand what is happening and how to adapt the scripts to our wishes. We know we should use the UART & SPI protocols to communicate, but we do not know how these are implemented on the Arduino or even how the protocols work. In the proof of concept, localization is therefore not implemented, but the Marvelmind hardware or something similar should be used in the final design of the robot.
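To illustrate what the missing glue code would have to do, the sketch below parses a hypothetical ASCII "x_mm,y_mm" position line as it might arrive over UART. This line format is our own invention purely for illustration; the real Marvelmind stream uses a binary packet format, for which the company's example scripts should be followed:

```python
def parse_position(line):
    """Parse a hypothetical ASCII position report "x_mm,y_mm" as it might
    arrive over UART from the mobile beacon, returning metres.
    NOTE: the real Marvelmind protocol is binary; this format is assumed
    only to show how serial input feeds the control loop."""
    x_mm, y_mm = (int(v) for v in line.strip().split(","))
    return x_mm / 1000.0, y_mm / 1000.0

# example: one line read from the serial port
x, y = parse_position("1250,340\n")
```

The point is simply that, once decoding works, the rest of the robot only ever sees a clean (x, y) pair in metres.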

Motion Planning

This part was planned for the proof of concept, but it was scrapped; we only discuss possible solutions here. When we looked at the wiki of another group, we noticed they already had a nice summary of the available options, which is why we show the same here:

In order to plan a path for the mobile robot from its station to the train door, some investigation is needed. We started looking for papers about trajectory planning for mobile robots with kinematic constraints, using several search terms: 'path planning', 'kinematic constraints', 'mobile robot' and 'real-time path evaluation'.

1. Bruce, J. and Veloso, M. (2002). Real-time randomized path planning for robot navigation. IEEE/RSJ International Conference on Intelligent Robots and Systems.
2. Munoz, V., Ollero, A. and Prado, M. (1994). Mobile robot trajectory planning with dynamic and kinematic constraints. IEEE International Conference on Robotics and Automation.
3. Ge, S. S. and Cui, Y. J. (2000). New potential functions for mobile robot path planning. IEEE Transactions on Robotics and Automation.
4. Laumond, J. P., Sekhavat, S. and Lamiraux, F. (1998). Guidelines in nonholonomic motion planning for mobile robots. In: Robot Motion Planning and Control, Springer.
5. Khatib, M., Jaouni, H. and Chatila, R. (1997). Dynamic path modification for car-like nonholonomic mobile robots. IEEE International Conference on Robotics and Automation.

So searching only gives us many more leads. Christoph Sprunk states in "Planning Motion Trajectories for Mobile Robots Using Splines" that "Motion planning for wheeled mobile robots (WMR) in controlled environments is considered a solved problem. Typical solutions are path planning on a 2D grid and reactive collision avoidance." While his thesis goes into higher levels of planning, it also spans 110 pages. In general, we feel overwhelmed by the amount and complexity of the information. Furthermore, we are convinced that the most complex solution is generally not the best solution for us.

So instead of searching for "the answer" in the literature, we started using our current knowledge to construct a basic solution. This way we can deliver intermediate solutions and enhance them where necessary. It also allows for more specific problem statements.

It is difficult to plan a path without a world view, because the environment changes continuously (e.g. walking people). Building and programming a working intelligent prototype for this problem requires advanced programming skills, and the framework to sense and construct a world view is beyond the scope of this project. To still achieve our goal, we used our basic knowledge of programming to construct a state-based algorithm. The algorithm's main objective is to reach its goal; when the robot's sensors sense an obstacle, it switches state to avoid collision and maneuver around it.
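The state-based idea can be sketched as a tiny state machine. The states and transition conditions below are a simplified reading of the behaviour described above, not the exact prototype code:

```python
from enum import Enum, auto

class State(Enum):
    DRIVE_TO_GOAL = auto()   # default: drive straight toward the goal
    AVOID = auto()           # sensors reported an obstacle; maneuver around it
    DONE = auto()            # goal reached

def step(state, at_goal, obstacle_ahead, obstacle_cleared):
    """One tick of the state-based navigation algorithm: drive toward the
    goal, switch to AVOID when the sensors report an obstacle, and return
    to goal-seeking once the obstacle is cleared."""
    if at_goal:
        return State.DONE
    if state is State.DRIVE_TO_GOAL and obstacle_ahead:
        return State.AVOID
    if state is State.AVOID and obstacle_cleared:
        return State.DRIVE_TO_GOAL
    return state
```

In each state the motor commands differ (drive forward vs. steer around the sensed obstacle); the function above only decides which behaviour is active.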

Obstacle Avoidance

In order to make it more suitable for the train platform case, we would like to make use of the dynamics that govern the people moving on the platform. To achieve this, one should first distinguish between static and dynamic objects; otherwise it is not possible to determine which actions to take. To distinguish them, we want to use a coarse 2D grid with tiles of 1 m². All benches and other static objects are hard-coded in the grid. The sensors then detect objects and indicate the tile in which each object is detected. The robot evaluates the map and decides whether it knows the object. If it does, it can follow the path that was already planned around the object; otherwise it has to find a way past it. In that case it could be a static-but-movable object, like a suitcase, or a dynamic object that can move by itself.
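The grid lookup described above can be sketched as follows; the grid size and the bench position are arbitrary example values:

```python
# coarse 2D grid with 1 m^2 tiles; True marks a hard-coded static object
GRID_ROWS, GRID_COLS = 5, 10
static_map = [[False] * GRID_COLS for _ in range(GRID_ROWS)]
static_map[2][4] = True  # example: a bench occupies the tile at row 2, col 4

def classify_detection(x, y):
    """A sensor reports an object at metric position (x, y). Decide whether
    it matches a known static object (the pre-planned path already goes
    around it) or is unknown and possibly dynamic (the robot must replan
    or ask it to move)."""
    row, col = int(y), int(x)  # 1 m tiles map directly to grid indices
    in_grid = 0 <= row < GRID_ROWS and 0 <= col < GRID_COLS
    if in_grid and static_map[row][col]:
        return "known-static"
    return "unknown"
```

A detection inside the bench tile is ignored by the planner, while anything else triggers the obstacle-avoidance state.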

Docking the train

The robot will position itself in the right spot using local light-sensitive sensors and reflectors beneath the train.

Shared Control

Dallaway and Tollyfield (1990) acknowledge the importance of control for disabled people in their article Task-specific control of a robotic aid for disabled people: “The psychological importance of access to and control of one’s surroundings is obvious to the disabled and is increasingly recognized by those working in the social services. Help in these areas is commonly being provided by a combination of human assistants and a limited range of environmental control systems. Robotic aids, while only providing restricted integration with the surroundings, give a degree of versatility not possible with other forms of environmental control.” In the context of this journal article, control is referring to control of one’s surroundings. This confirms our interpretation of wanting more control; since we are designing shared control we are literally giving the disabled person more control of his surroundings.

Another interesting paper, by Petry et al. (2010), states the following: "Shared control initiatives take advantage of the user's intelligence and assist the driver in the navigation process when dangerous situations are detected, extending and complementing user capabilities." Moreover, an important aspect of shared control, also mentioned in this article, is that it can "reduce the navigation complexity", which is obviously highly beneficial when designing such a robotic system. The article studied intelligent wheelchairs that share control with the user to avoid risky situations. After tests with volunteers, a questionnaire measured their perception of safety with and without the assistance of shared control. When the user manually controlled the wheelchair, safety perception decreased and collisions increased. The results are shown in figure 8: the safety perception with shared control is clearly much better. This is an important finding for our project; although handing the disabled person manual control may give them more influence in the situation, shared control works better and increases the perception of safety.


Another research project that serves as an inspiration for this work was performed by Connell and Viola (1990). They made a striking comparison between riding a horse and driving a car. A horse will not crash at high speed, and "if you fall asleep in the saddle, a horse will continue to follow the path it is on." This illustrates the added value of shared control very well. In this article, the robot works as follows: the operator (the disabled person) is free to drive the robot in any direction, but the robot will refuse to continue its path if it detects an obstacle. This is similar to the way we are designing our robot. Shared control is beneficial in two cases: if the robot is too cautious (for example in a very busy environment), the disabled person can take complete control to increase efficiency. On the other hand, when the person is either unable to drive or tired, he can fully hand over all power to the robot.
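This obstacle-veto behaviour can be sketched as a simple filter between the joystick command and the motors. The function below is our own illustration, not Connell and Viola's implementation; the command format and the stopping distance are assumptions.

```python
def shared_control(user_cmd, front_distance_m, stop_distance_m=0.5):
    """Veto filter in the spirit of Connell and Viola's robot: the user
    drives freely, but forward motion is refused while an obstacle is
    closer than stop_distance_m (the threshold is an assumption).
    user_cmd is a (speed, steering) tuple from the joystick."""
    speed, steering = user_cmd
    if speed > 0 and front_distance_m < stop_distance_m:
        speed = 0.0  # the robot refuses to continue its path
    return speed, steering
```

Note that reversing away from the obstacle is still allowed, which matches the idea that the robot only vetoes the dangerous direction while the operator keeps all other freedom.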

There is no information available on shared control in wheelchair lifting devices specifically, so we turn to related domains. When the system encounters an object or the end of the platform, it needs the user to take over the driving mechanism. But what is the best way to indicate to the user that he needs to act? Obviously, the system needs to inform the user about what is happening and about when it wants the user to take over control and move the machine himself. "Shared control between human and machine: using a haptic steering wheel to aid in land vehicle guidance" (Steele et al., 2001) concludes that incorporating haptic feedback into the control device (in our case the joystick) improves the alertness of users. Haptic feedback is, for example, a vibration signal sent to the user when the machine encounters a problem and needs to hand over control; this alerts the user to take control immediately. "Haptic shared control: smoothly shifting control authority?" (Abbink et al., 2011) concludes that haptic shared control can lead to short-term performance benefits (faster and more accurate vehicle control, lower levels of control effort). Thus, it would be wise to incorporate force feedback (haptic control) into the feedback system. Much like the Autopilot of Tesla (https://www.tesla.com/nl_NL/autopilot), which requires the driver to place his hands on or near the steering wheel and gives haptic feedback when it needs the user to act, our machine could require the user to place his hand on the joystick.

When the wheelchair lifting device encounters an obstacle, the end of the platform, or an error, it needs to signal to the user what is needed of him. Because a screen is already incorporated in the device for checking in with the OV-chipcard, it makes sense to also use it for signalling when the user needs to take control. The user should additionally have the option to take control himself, without the machine having encountered a problem. This shared control can be displayed on the screen, which can be made a touchscreen so the user can press a button to take control of the machine. This does introduce a problem: according to "Visual-haptic feedback interaction in automotive touchscreens" (Pitts et al., 2012), touchscreens in the automotive industry reduce the user's awareness of the surroundings (because they add a visual workload). However, that study also concludes that incorporating haptic feedback counters this and improves overall situational awareness. This suggests it is a good idea to show information on the screen alerting the user that he needs to take control (by pressing a button on the touchscreen), while also alerting him with force feedback (vibration) that an action is required.


Another problem is that the user has limited visibility directly in front of the machine, because a ramp is attached to the front. Since our lifting device knows its own location and that of the end of the platform, the feedback device (the touchscreen) could indicate how far away the end of the platform is, or how far away obstacles directly in front of the machine are.

The results of the questionnaire, combined with the above research, have several implications for our design:

  • Although we are unable to test our interpretation of the questionnaires, the literature research above nevertheless confirms that shared control adds value to this robotic system.
  • The way of implementing shared control will be similar to the ‘Mister Ed’ robot by Connell and Viola (1990): the operator is free to drive in any direction, but the robot will refuse to continue its path when it detects an obstacle. The robot will then pass the obstacle on the left or right side (this can be chosen by the user). The robot is effectively looking over the disabled person’s shoulder to remain safe at all times.
  • Besides implementing the principle of shared control, the concept itself already gives the disabled person more influence on the process, as he or she now does not have to contact the NS long beforehand, is independent of NS travel assistants and can use the robot without any help.

Interface design

The disabled person interacts with the system in multiple ways. In the following section, we will explore what the best design is for every part of the process. Moreover, we will take a look at the system’s interaction with other people at the train station.


What problem does the prototype solve?

Because the lifting system is already a proven concept in other countries, we will not try to prove it with our prototype. However, in those countries you still need to make reservations and need help from railway personnel. What our prototype therefore has to prove is that it can help the disabled customer autonomously: from the start of the trip to the end, no external help from railway personnel is needed. This starts with the planning of the trip; disabled people will be able to plan their trip in the NS app and "reserve" the robot to make sure it is available to them at a specific time. The NS app is linked to their OV-chipcard, so once the trip is planned, all the robot has to do is scan the chipcard. Of course, this does not need to happen an hour before departure; it can be done when you arrive on the platform.

The prototype will try to prove that no railway personnel is needed, which means it needs to move autonomously from start to finish. When "activated", the prototype moves to the right position (close to the edge of the platform, in front of where a train door will be). All movement includes the avoidance of people and objects in its path. Our prototype will observe obstacles with ultrasonic proximity sensors, which enable it to detect objects in front of it and the edge of the platform. In real life, the disabled person in the wheelchair will simply follow the robot. Then, when the train has arrived, the prototype positions itself exactly in front of the door. Because the boarding mechanism itself is already a proven concept, we exclude it from our prototype. When it is done, the prototype returns to its initial position and goes back to idle mode.

Our prototype will observe the world in front of it and will be able to distinguish between a moving object (a person or a moving bag) and a stationary object (a bag or a person standing still). (To be added: detail on how exactly we will do this, e.g. with red and blue blocks or not.) When the prototype observes a stationary object, it will wait a certain amount of time to make sure it is in fact stationary. Then, if the object still is not moving, the prototype can sound an alarm to signal that it needs to move. If this does not happen, the prototype needs to drive around the object. (To be added: how exactly we navigate around objects in our code.) Then, when the train arrives, the prototype needs to let people exit the train first before continuing the boarding process.
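The wait-and-confirm step described above could be sketched as follows; the sampling scheme and the tolerance value are our assumptions, not tested parameters.

```python
def classify_object(distance_samples, tolerance_m=0.05):
    """Classify an observed object from ultrasonic distance readings
    collected during the waiting period. If all readings stay within
    tolerance_m of each other, the object is assumed stationary (the
    tolerance is a placeholder value)."""
    if max(distance_samples) - min(distance_samples) > tolerance_m:
        return "moving"
    return "stationary"
```

A "stationary" result would trigger the alarm first and, if the object still does not move, the avoidance manoeuvre.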

Determining location of the door

For the robot to help a person exit the train, the location of that person in the train is needed. Trains in the Netherlands do not stop at exactly the same place on a platform every time, which is a problem when a robot is being used. In Den Bosch they are experimenting with real-time updated LED lights that show people where a door will be located when the train stops. This information can be useful for determining the location of the person on board of the train. When the location of the door for handicapped persons is known, the system can send this information to the robot; the robot then knows where the person is located and can drive to that location.

[Image: Ledlight.PNG — LED lights indicating where the train doors will stop]

Role of disabled person in the solution

The solution could also give the disabled person an active role, for example by letting the person control the vehicle. When brainstorming about the concepts, another approach was to let the disabled person be the one who controls the vehicle/ramp. This way the person can access the train without any help from the staff and without the vehicle needing to be autonomous. One idea was to let the person board the vehicle at its homing position and then control it with a joystick, placing the vehicle in position themselves. The problem is, however, that the person who needs assistance is not always able to control the vehicle; think of people who cannot use their arms or people suffering from blindness. Another option was to let the robot follow the person who needs assistance. This could be very useful when people want to enter the train, but when someone wants to exit the train there is no one to control the robot, a problem that also applies to the first idea. Because of this, it was decided that the person will not be part of the control loop: the robot should be able to drive to the right location all by itself.

Unusual situations

As we all know, trains do not always run as they should. To deal with delayed and cancelled trains, the robot needs to be aware of them. Imagine that the platform where the train arrives changes; the robot then needs to know this in order to get to the right door. To solve these problems the NS app will be used: the app sends real-time updated information to the robot to let it know when trains are delayed or moved to a different platform. With this information the robot is always able to get to the right location in time. The people who need the robot can activate it with this same app, to let the robot know when it has to drive to a certain location. Comment: this is also described elsewhere in the wiki.

RPCs of the prototype

Comment: check whether this is still correct.

Requirements

  • Drive in a straight line and be able to correct its movement.
  • Drive to a desired location at the press of a button; the robot needs to find the right location and be able to reach it.
  • Avoid hitting obstacles; the robot therefore needs to be able to sense objects in its surroundings within at least 1 meter of its own position. When an obstacle is too close, the robot has to stop and let its surroundings know that something is blocking its path.

Preferences

  • Not only drive in a straight line but determine a path and be able to make turns.
  • Reach the desired location as fast as possible, without risking hitting an obstacle.
  • Be able to sense an object in its surroundings and, in case the object is in its path, be able to alter its path to navigate around it.
  • The prototype should be as cheap as possible.

Constraints

  • Drive alongside the railway without falling off the platform.
  • The prototype cannot cost more than €150,-
  • The prototype should have dimensions of around 30×30 cm

Prototype research

Prototype schematic.jpeg

Motion Planning

Literature

Solution

It is difficult to plan a path without a world view, because the environment changes continuously (e.g. walking people). Building and programming a fully intelligent prototype for this problem would require advanced programming skills, and a framework to sense and construct a world view is beyond the scope of this project. To still achieve our goal, we used our basic knowledge of programming to construct a state-based algorithm. The algorithm's main objective is to reach its goal; when the robot's sensors sense an obstacle, it switches state to avoid collision and manoeuvre around it. The code is shown below:


% This is a function that finds the goal using position and orientation.
% Note: ask_angle_robot() and ask_th_goal() are placeholders for the
% actual sensor queries.
while pos ~= goal_pos % loop until we reach the goal position
   % The first loop makes the robot move towards the goal as long as there
   % is no obstacle in its way
   while S_F == empty
       th_R = ask_angle_robot(); % ask angular position of the robot
       th_G = ask_th_goal();     % ask goal angle
       
       e  = th_G - th_R;
       T1 = -e*C1;
       T2 =  e*C1;
       
       if e <= pi/32
           T1 = T1 + C2;
           T2 = T2 + C2;
       end
   end
   S_G = S_F; % the sensor that is looking at the goal is the front sensor
   % Then the robot turns until the front is clear and drives on until the
   % way towards the goal is clear.
   dth = 0;
   while S_G ~= empty
       % update the sensor that looks towards the goal
       if dth <= pi/4 || dth > 7*pi/4
           S_G = S_F;
       elseif dth <= 3*pi/4 && dth > pi/4
           S_G = S_L;
       elseif dth <= 5*pi/4 && dth > 3*pi/4
           S_G = S_B;
       elseif dth <= 7*pi/4 && dth > 5*pi/4
           S_G = S_R;
       end
       
       dth = 0;
       th_R = ask_angle_robot(); % ask angular position of the robot
       % Turn until the front of the robot has clearance
       if S_F ~= empty
           dth = dth - pi/32;
           pause
           e  = th_G - th_R + dth;
           T1 = -e*C1;
           T2 =  e*C1;
       else % move forwards if the front has clearance
           e  = th_G - th_R + dth;
           T1 = -e*C1;
           T2 =  e*C1;
           if e <= pi/32 % only drive forward when the heading is correct;
                         % otherwise keep turning, which keeps the vehicle safer
               T1 = T1 + C2;
               T2 = T2 + C2;
           end
       end
   end
   % This loops until the goal is reached, so it will go back to the top
   % and start moving towards the goal
end
% Upon reaching the goal only the rotation matters and it is always the
% same. So the goal orientation gets redefined and the robot starts
% turning using a feedback loop.
th_G = pi/2;
e = pi; % ensure at least one iteration of the feedback loop
while e > pi/180 % 1 degree equals 1.6 cm
    th_R = ask_angle_robot(); % ask angular position of the robot
    e  = th_G - th_R;
    T1 = -e*C1;
    T2 =  e*C1;
end


The code represents a possible start of a simulated implementation in Matlab. The robot will use an Arduino, so the algorithm will change form, and it is interesting to already mention some Arduino constructs that could be used. Instead of all these while loops, the Arduino will run several states: a switch inside a loop chooses the state for each iteration, and inside the states certain conditions change the state variable, which the switch will detect. Furthermore, a lot of repetitive code is currently used, especially for going forward and turning; ideally these will become separate functions that are called from within the states. We are planning to write more comprehensive code in order to make this clearer.
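As a rough illustration of the switch-within-a-loop idea, the transition logic could look like the sketch below (written in Python for readability; the state names and conditions are our assumptions, not the final Arduino code):

```python
# Illustrative state machine: the states mirror the pseudocode above,
# but the names and transition conditions are our own assumptions.
GOAL_SEEK, AVOID, DOCKED = "goal_seek", "avoid", "docked"

def step(state, front_clear, at_goal):
    """One iteration of the control loop: return the next state."""
    if state == GOAL_SEEK:
        if at_goal:
            return DOCKED
        if not front_clear:
            return AVOID       # obstacle detected: switch to avoidance
        return GOAL_SEEK
    if state == AVOID:
        if front_clear:
            return GOAL_SEEK   # path clear again: resume goal seeking
        return AVOID
    return DOCKED              # terminal state: only rotation remains
```

On the Arduino, the same transitions would sit in a switch statement inside loop(), with the motor and sensor calls factored out into separate functions.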

Additions could be:

  • A 2D grid map. The sensors could update states of tiles. This way it could update its world map as it drives towards the train. We could use probabilities and let sensor input stack before deciding on the actual observation and switching states of tiles. Also simple search algorithms (A*) could be used to make the robot move more efficiently over the platform.
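A minimal version of the A* search mentioned above, over a 2D occupancy grid, could look like this (the grid representation, 4-connected moves and the Manhattan heuristic are our choices for the sketch, not a decided design):

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* over a 2D occupancy grid (True = blocked tile).
    4-connected moves, Manhattan heuristic. Returns a list of cells
    from start to goal, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]  # (f, g, cell, parent)
    came_from, cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue                 # already expanded with a better cost
        came_from[cur] = parent
        if cur == goal:              # reconstruct the path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                ng = g + 1
                if ng < cost.get((nr, nc), float("inf")):
                    cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cur))
    return None
```

The sensor updates described in the bullet would simply flip tiles in `grid`, after which the path can be re-planned.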

Closing the gap

The new solution

Although the code is not quite finished, we believe it will be as soon as the Arduino and beacons are available; at the moment we have a hard time keeping it comprehensible and clear, even for ourselves. That is why we rewrote some of the previous code into a more descriptive, pseudocode-like language. The different approach to objects is implemented in the code, but the check of the 2D grid is not. The new code is shown below; next week we will start on the Arduino and things will become clearer.

%This is a function that finds the goal using position and orientation
while pos ~= goal_pos % We want to loop until we reach the goal position
   % The first loop makes the robot move towards the goal as long as there
   % is no obstacle in it's way
   while S_F == Empty
       th_R = %ask angular position Robot
       th_G = %ask th_goal
       Define angle
       Define error
       Control angle
       if angle_diff <= pi/32
           Forward
           pause 1
       end
   end
   S_G = S_F % the sensor that is looking at the goal is the sensor on front
   % Then the robot turns until the front is clear and drives on until the
   % way towards the goal is clear.
   while S_G ~= empty 
       % update the sensor that looks towards the goal
       Update Goal sensor     
       % while the object is not too close
       t = 0 
       while t < 1
           t = t + 1
       if S_G > 0.2 
           State = Approach{
           sound alarm
           lights on
           Update angle
           Define error angle
           Control angle
           Forward slow
           pause 1
           motors off}
       % Turn until the front of the robot has clearance
       elseif S_F ~= empty
           State = turn{
           Change angle
           pause 1
           Define error angle
           Control angle
           Motors off}
       else % move forwards if the front has clearance
           State = update angle
           Define error angle
           Control angle
           pause random(1-5)
           motors off
           if angle_diff <= pi/32 % keep the direction otherwise don't go forward. this is to keep the vehicle safer
               Forward
               pause 1
               motors off
           end
       end
       end
   end
   % This loops until the goal is reached so it will go back to the top
   % and start moving towards the goal
   motors off
end
% Upon reaching the goal only the rotation matters and it is always the
% same. So the goal orientation get's redefined and the robot starts
% turning using a feedback loop.
th_G = pi/2
while e > pi/180  %% 1 degree equals 1.6cm 
th_R = %ask angular position Robot
Define error angle
Control error
end

Position and Orientation Determination

Sensors for determining obstacles

For the sensors we did some research on the internet and came up with four possible options:

  • an infrared sensor to detect heat;
  • ultrasonic sensors;
  • lasers;
  • a camera.

For the camera you need sophisticated software to distinguish people from other, static objects. The infrared sensor would be perfect for distinguishing a person from a static object, but it can be inaccurate on warm days or when a person is holding a bag or something cold in front of them. We went to visit Jeroen Houtman with our problem. He told us we could best make use of an ultrasonic sensor, because lasers were too expensive for the scope of this project. While it is hard to distinguish dynamic objects from static objects with ultrasonic sensors, Jeroen suggested that we make a world map of definite static objects (e.g. benches, trash bins, advertisement boards). The ultrasonic sensor can then detect the unpredicted dynamic and static objects. When it detects an unpredicted object, the robot can set off an alarm and ask the object to move away. If it does not move away, it is assumed to be a static object and the robot will try to manoeuvre around it; instead of moving to the right until the path is free, the robot will turn back after 1 meter and again try to move the dynamic objects away with alarms. Jeroen then referred us to Ruud van den Bogaert for the sensors. Ruud van den Bogaert told us it is probably possible to find a suitable sensor for our problem, and that we should come back next week to discuss this.
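The decision logic suggested here — compare each ultrasonic reading with the distance predicted by the pre-programmed static map, alarm first, manoeuvre second — could be sketched as follows (the distances, tolerance and return values are our assumptions):

```python
def sensor_action(measured_m, map_expected_m, still_there_after_alarm,
                  tolerance_m=0.1):
    """Decide the robot's reaction to one ultrasonic reading.
    map_expected_m is the distance the pre-programmed world map predicts
    (bench, trash bin, ...); the tolerance is a placeholder value.
    Returns 'drive', 'alarm' or 'maneuver'."""
    if measured_m >= map_expected_m - tolerance_m:
        return "drive"     # reading matches a known static object or free space
    if not still_there_after_alarm:
        return "alarm"     # unpredicted object: warn it to move away first
    return "maneuver"      # it stayed: assume static and drive around it
```

In the real system, `still_there_after_alarm` would be determined by waiting a fixed time after sounding the alarm and measuring again.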

List of necessities and estimated costs

Comment: still needs to be adjusted.

  • 4 Wheels (2 powered 2 swivel wheels) (10,-)
  • 2 electromotors (20,-)
  • Arduino (70,-)
  • Plate as chassis (2,-)
  • Beacons (?)
  • Sensors (25,-)
  • Wires (0)
  • Batteries or powerbank (10,-)
  • Lights (green & red)
  • Alarm (less than 80 dB)

Collaboration process

In this section we will discuss the team process and how the team collaborates. Every week a short update is given on what was done during the week and what was discussed in meetings.

4 September

This day our team was formed. We immediately established each other’s strengths, depending on our background. We discussed some ideas and concluded our main idea would revolve around the train environment. Throughout the week, we communicated who would take on what role in terms of the presentation of 11-9. On Wednesday, part of the group met up again to further refine the main concept. It was then decided we would focus on the boarding of a train by disabled people. Karlijn started working on the presentation, and wrote about the subject, objectives, users and approach. Luka maintained the wiki, while Gijs created an elaborate planning by means of a Gantt chart. Tjacco defined the milestones and deliverables. Throughout the week, a new group member, Jeroen, joined. He started creating the questionnaires we are going to use further in this project. On Sunday, we defined the group roles for the coming few weeks: Luka will maintain the wiki in terms of design process and help with the prototype, Karlijn will do qualitative research on the user requirements by means of the questionnaires and maintain the wiki in terms of collaboration process, Jeroen will do literature research on the state-of-the-art in the field, and Tjacco and Gijs will work on the prototype.

11 September

This day we presented our idea. We received some substantive feedback which we immediately incorporated in the planning: this week we will clearly define the RPC’s, after which we will all create a concept. Moreover, we finish all questionnaires, which enables us to start distributing the questionnaires from Tuesday. On Wednesday we meet again to compare the concepts, and refine our idea. We also received feedback saying we should be clear about the scope of the boarding process we would focus on. Due to that, Gijs started working on a block diagram which would map the entire process from start to finish, to gain clarity. We decided we wanted to focus on every part of the process. Jeroen will in this week start doing literature research, to gain insight in the current situation at the NS. All team members are very involved in the process and all work is divided among the group. Clear deadlines are set and processed in the planning.

18 September

In the section above you can read about the plans that were made for week 2. In the following section, the results of that week are described and we pinpoint the next steps.

• In week 2 a list of general RPC’s was made; that is, a list that contains the RPC’s that the hypothetical system in real life should adhere to. We decided however that those RPC’s are not applicable to the prototype that we are aiming to build. This week, Luka will define an additional list of RPC’s specific to the prototype.

• On Wednesday, the team met up, and after discussing the concepts we concluded our idea would be largely based on Jeroen’s concept: a lift.

• In addition to the block diagram that Gijs made, the scope of this project will be further defined this week. Moreover, we have picked a specific part of the boarding process that we will focus on, namely, the docking of the robot near the train. This will be further described by Luka. Moreover, Luka will describe the ideal process. Karlijn will describe the process as is.

• Last week, Jeroen has indeed started doing literature research and found out that Canada and France already use the lift system that we have come up with. It does not however autonomously navigate. This Monday, we discussed this in a meeting with the teachers, and came to the conclusion that this is positive: since the lifting part of the system already exists, we can focus on the docking. Moreover, the existing systems can serve as an inspiration and can be a starting point from whereon to start developing our system.

• Karlijn and Tjacco finalized the questionnaires, after which Karlijn started handing out the questionnaires at NS stations. Moreover, questionnaires were distributed via personal networks and Facebook. This has however not yielded as many responses as we had hoped for. Because of this, we have decided to extend the deadline by one week, and we will start collecting all answers and interpreting the data in week 3.

• This week, Gijs and Tjacco will do further research on motion planning, motion tracking and obstacle avoidance.

• This week, Jeroen will perform more practical work; besides finishing his literature research he will inquire at the Innovation Space about the options with regard to materials and Arduinos, and he will update the planning.

• This Monday, when meeting with the teachers, multiple conclusions were drawn with regard to the team. We openly discussed the collaboration to this day. Karlijn, and other team members, missed a leading figure within the group that has an overview of what everybody is doing during the week and if everybody is meeting their deadlines. Therefore, from now on, every week another group member will be that week’s group leader. He or she will check during the week if all is going as planned, and if everybody can finish his or her work before Sunday afternoon. This week Karlijn is group leader. Moreover, we missed clear deadlines last week and a large part of the group only started working on Sunday. This week, the entire group is therefore expected to finish his/her part before Friday. Friday, we will meet to discuss our work and define some final tasks that can be finished in the weekend. This will allow wiki maintainer Luka to upload all sections before Sunday.

22 September

On the 22nd of September we had a group meeting again. We concluded that from that week on, we were going to meet every week on Monday and Friday, rather than Monday and Wednesday. This gives more room to finish all work during the week and gives the possibility to discuss one’s work with one another. Moreover, on Friday we can determine what should be done in the weekend.

We discussed the work everybody had done until that day, mainly Gijs and Tjacco's work on position determination using beacons. Moreover, we discussed the materials we think we are going to need, and we aim to have a list with all needed materials next week. Jeroen has checked with 4WBB0 whether there are any Arduinos left, which he will hear back about later. Gijs and Tjacco also started coding for the Arduino, which we aim to have checked next Monday in the panel. Jeroen will work on specifying the prototype RPC list with specific measurements and will elaborate on his patent check. Gijs will summarize all literature he has found so far, while Tjacco will do further research on the beacons. Karlijn will process the questionnaires before Wednesday and rewrite the USE part of the wiki and the process description.

29 September

This week, we had a meeting on Monday with the teachers in which we discussed our progress. We received a lot of feedback, which we dealt with this week.

  • We have contacted Michiel van Gorp of Engineering Design and can next week pick up an Arduino to use for this project.
  • Feedback we got involved the question: what does our prototype show with regard to the problem? We discussed this in the group and concluded that in the current situation we might not be solving the most important/interesting/challenging part of the problem. From now on we are therefore focusing on detection in a dynamic environment. This problem is not only related to this specific domain of trains; it is a more general problem that pertains to the entire social-robot domain. We are focusing on how to create a world model and, especially, how to detect static and dynamic objects in the environment and how to differentiate between the two. This week we have already done more research in this area, which yielded several topics:
    • Beacons: by using beacons we can real-time track where the robot is located. Gijs and Tjacco have visited mr. Duarte and from this it became clear we can use the beacons for this project. This means however we have to test the prototype specifically at Duarte.
    • World model: we intend to pre-program static objects in the train environment (e.g. benches). In this way, the robot knows what to avoid.
    • Comment: to be written by someone: detection of people (also say something about suitcases).
    • Comment: to be written by someone: what happens when a person walks in front of the robot?
  • We also received the following feedback: we should know what the requirements are for an autonomous system in order for it to fully replace humans. We have this week further specified our list of RPC’s, and the results of the questionnaires have yielded additional user requirements.
  • From the meeting with the teachers we received the info that at the train station in Den Bosch sensors indicate where the train is going to be when arriving, and where the doors are. This implies that there is an information system at the NS which knows where a train will stop exactly. This may have a slight margin. This is extremely useful for the project, as this means we can assume the robot can access that information system and use that info for where it should go. For the final centimeters, it can use the beacons located in the doors to dock.
  • We have discussed with our group what the role of the disabled person should be. We have unanimously decided not to give this person a guiding role in the robot, as we can never assume the skills of the person beforehand. What his role will actually be on the other hand, is specified in the wiki under ‘Idealized solution’.

A short overview of what everybody has done this week:

  • Tjacco was group leader, which implies he led the meeting on Friday, and discussed everybody’s progress during the week.
  • Tjacco, Gijs and Luka met up with Ruud van den Bogaert to discuss the possibilities with regard to sensors. He was initially enthusiastic and offered to lend a hand, but later this week he emailed us saying he did not have time to help us and shared a few websites which we could look at for more info on ultrasonic sensors.
  • Gijs and Tjacco, as mentioned, also visited mr. Duarte to discuss the use of beacons.
  • Karlijn collected all results of the questionnaires and did a thematic analysis on the results, which yielded several user requirements. Moreover, Karlijn updated the collaboration process in the wiki and finalized the process description and user and enterprise analysis.
  • Jeroen and Luka both updated the wiki with all the work that was done this week, and incorporated the feedback in the work on the wiki.
  • Jeroen has also checked the wiki for consistency.
  • Gijs and Tjacco have updated their code for the Arduino.

6 October

This Monday, we again had a meeting with the teachers. In this meeting, we have discussed the results of the questionnaires. An important result was the desired increase of influence on the process by disabled people. This can be interpreted in multiple ways, as is described in our User Analysis above. Together with the teachers we have come up with the concept of shared control. This is an interesting new approach to the robotic system, which takes into account the results of the questionnaires. It is important for a project like this that what we are demonstrating at the end is supported by the results of the questionnaires, which is why we are implementing shared control in our prototype. This has multiple implications for our project, as we need to redesign the concept and codes.

This week, everybody has done the following:

  • Luka has assembled the hardware of the prototype and mounted everything on it (wheels, motor, etc.).
  • Tjacco, Gijs and Luka have searched for parts for the prototype among the leftover parts from 4WBB0.
  • Gijs has looked at the Arduino and code with Tjacco while Jeroen has researched how to code shared control.
  • On Friday, the team tested Gijs's and Tjacco's code on the prototype, which failed: the motors did not work. This is something we need to look at next week.
  • Karlijn has looked at the user analysis and incorporated the results of the questionnaire into the new concept. A literature study on shared control was performed to check our interpretation.

13 October

This week, Gijs was group leader, which meant he made notes of every meeting and arranged a planning for this week. On Monday, we had a meeting with the teachers, in which we refined our project. Several questions were asked, which we all answered in the wiki this week. We have further refined our final prototype and concept and the requirements of the prototype. Moreover, we have discussed what the prototype will show with regard to this project's problem statement.

The tasks of everyone this week were as follows:

  • Tjacco has looked further into the connection between the Arduino and laptop. Moreover, he has refined the communication between the Arduino and the sensors.
  • Luka has finished building the hardware; he placed the sensors, LED lights, a power amp, the wheels and a transistor.
  • Gijs has looked into playing music from the Arduino, has coded the lights for the Arduino and has helped Tjacco with his tasks.
  • Jeroen did research on the communication between user and robot in case the robot takes over the steering.
  • Karlijn has performed research on how to communicate to bystanders the passing of the robot, in terms of light and music. Moreover, she has written about what happens in case the robot collides with anything, and what happens in case of modifications in the train timetable. Besides that, she has updated the group collaboration process.

23 October

The past two weeks, we have worked on finalizing the project. In week 7, we received the following feedback:

  • The concept of personal space should be considered in the wiki -> this was solved by Karlijn: she performed a literature study on human-robot interaction and, based on that, devised an ideal design of the interaction between our robot and its surroundings.
  • What is the function of the LED arrows, how should they work? -> this was fixed by Jeroen, Gijs and Karlijn, who researched the topic further and worked it out in more detail in the wiki.
  • What will the robot do with objects it approaches? -> after a group meeting, we decided we want the robot to drive past an object it approaches, after which it hands control back to its user. This is worked out in the wiki by Tjacco.
  • Our sensors did not work at the time; Luka worked on the prototype to fix this.
  • Jeroen has prepared the presentation for week 8 and paid attention to the touchscreen display.
  • Karlijn has checked the entire wiki and commented on parts that needed to be fixed by other group members.


After the presentation on Monday of week 8, we again sat down with the group and set some final deadlines:

  • In week 8, we all finalized the wiki with the latest information.
  • In week 8, we will be peer-reviewing one another.

Arduino Codes

The following scripts were gathered for the final implementation. Our limited programming knowledge makes it hard to explain everything that happens in them and to select the right scripts.

Preobtained Examples

Create extra Tx and Rx ports

UART communication

Data exchange with mobile beacon

For the checksum, CRC-16 is used. The last two bytes of an N-byte frame are filled with the CRC-16 applied to the first (N-2) bytes. To verify the data, you can apply CRC-16 to the whole frame of N bytes; the resulting value should be zero. Below is the implementation of the algorithm in C:

typedef ushort ModbusCrc; // ushort – two bytes

typedef union {
  ushort w;
  struct { uchar lo; uchar hi; } b;
  uchar bs[2];
} Bytes;

static ModbusCrc modbusCalcCrc(const void *buf, ushort length) {
  uchar *arr = (uchar *)buf;
  Bytes crc;
  crc.w = 0xffff;
  while (length--) {
    char i;
    bool odd;
    crc.b.lo ^= *arr++;
    for (i = 0; i < 8; i++) {
      odd = crc.w & 0x01;
      crc.w >>= 1;
      if (odd)
        crc.w ^= 0xa001;
    }
  }
  return (ModbusCrc)crc.w;
}

Communication with Beacons

/*
 *  This simple example sends USER_FRAME_SIZE (32) bytes of user data to the hedgehog via UART every USER_DATA_RATE_MSEC (63) ms
 */

/*

 The circuit:
* Serial data from hedgehog : digital pin 0 (RXD)
* Serial data to hedgehog : digital pin 1 (TXD)
* LCD RS pin : digital pin 8
* LCD Enable pin : digital pin 9
* LCD D4 pin : digital pin 4
* LCD D5 pin : digital pin 5
* LCD D6 pin : digital pin 6
* LCD D7 pin : digital pin 7
* LCD BL pin : digital pin 10
* Vcc pin : +5V
*/
#include <LiquidCrystal.h>

//////////////////////////////////////////////////////////////////////////////
// MARVELMIND HEDGEHOG RELATED PART

typedef union {byte b[2]; unsigned int w;} uni_8x2_16;

#define PACKET_TYPE_STREAM_FROM_HEDGE 0x47
#define PACKET_TYPE_REPLY_TO_STREAM 0x48
#define PACKET_TYPE_READ_FROM_DEVICE 0x49
#define PACKET_TYPE_WRITE_TO_DEVICE 0x4a
#define USER_PAYLOAD_DATA_ID 0x200
#define DATA_OFS 5

///

#define HEDGEHOG_BUF_SIZE 64

byte hedgehog_serial_buf[HEDGEHOG_BUF_SIZE];

#define USER_FRAME_SIZE 32

uint8_t user_packet_counter;

#define USER_DATA_RATE_MSEC 63 /* 63 msec ~ 16 Hz */

////////////////////////////////////////////////////////////////////////////

// Marvelmind hedgehog support initialize
void setup_hedgehog() {

 Serial.begin(500000); // hedgehog transmits data at 500 kbps
 user_packet_counter= 0;

}

////////////////////////////////////////

void hedgehog_send_packet(byte address, byte packet_type, unsigned int id, byte data_size) {
  byte frameSizeBeforeCRC;

  hedgehog_serial_buf[0]= address;
  hedgehog_serial_buf[1]= packet_type;
  hedgehog_serial_buf[2]= id&0xff;
  hedgehog_serial_buf[3]= (id>>8)&0xff;
  if (data_size != 0)
  {
    hedgehog_serial_buf[4]= data_size;
    frameSizeBeforeCRC= data_size+5;
  }  
  else
  {
    frameSizeBeforeCRC= 4;
  }
  hedgehog_set_crc16(&hedgehog_serial_buf[0], frameSizeBeforeCRC);
  Serial.write(hedgehog_serial_buf, frameSizeBeforeCRC+2);

}

// Sends user data
void hedgehog_send_user_data() {
  uint8_t i;

 // ---- Fill payload data begin 
 for(i=0;i<USER_FRAME_SIZE;i++)
   hedgehog_serial_buf[DATA_OFS + i]= user_packet_counter++;
 // ---- Fill payload data end
 
 hedgehog_send_packet(0, PACKET_TYPE_READ_FROM_DEVICE, USER_PAYLOAD_DATA_ID, USER_FRAME_SIZE); 

}

////////////////////////////////////////

// Calculate CRC-16 of hedgehog packet
void hedgehog_set_crc16(byte *buf, byte size) {
  uni_8x2_16 sum;

byte shift_cnt;
byte byte_cnt;
 sum.w=0xffffU;
 for(byte_cnt=size; byte_cnt>0; byte_cnt--)
  {
  sum.w=(unsigned int) ((sum.w/256U)*256U + ((sum.w%256U)^(buf[size-byte_cnt])));
    for(shift_cnt=0; shift_cnt<8; shift_cnt++)
      {
        if((sum.w&0x1)==1) sum.w=(unsigned int)((sum.w>>1)^0xa001U);
                      else sum.w>>=1;
      }
  }
 buf[size]=sum.b[0];
 buf[size+1]=sum.b[1];// little endian

}// hedgehog_set_crc16

// END OF MARVELMIND HEDGEHOG RELATED PART
//////////////////////////////////////////////////////////////////////////////

LiquidCrystal lcd(8, 13, 9, 4, 5, 6, 7);

void setup() {

 lcd.begin(16, 2); // initialize the display before using it
 lcd.clear();
 lcd.setCursor(0,0);
 lcd.print("Sends user data"); 
 setup_hedgehog();//    Marvelmind hedgehog support initialize

}

void loop() {

  delay(USER_DATA_RATE_MSEC);
  
  hedgehog_send_user_data();// Send user data to hedgehog

}

Communication with Speaker

Motor Control

Modified Versions

Arduino Code crash avoidance

This code is only written to make sure the robot does not drive into obstacles or fall off the platform. The keypad of the laptop is used as user input, and the sensors are used to check whether the way is clear.


// pin number of on-board LED
int ledPin = 13;
// Pulse Width Modulation (PWM) pins
int PWM1 = 3;
int PWM2 = 5;
int PWM3 = 6;
int PWM4 = 11;
// Sensor in Arduino
#include <NewPing.h>
#define SONAR_NUM      2 // Number of sensors (we use two: obstacle avoidance and platform edge).
#define MAX_DISTANCE 200 // Max distance in cm.
#define PING_INTERVAL 33 // Milliseconds between pings.
unsigned long pingTimer[SONAR_NUM]; // When each pings.
unsigned int cm[SONAR_NUM]; // Store ping distances.
uint8_t currentSensor = 0; // Which sensor is active.
int d[SONAR_NUM]; // Median distances used by the decision tree.
NewPing sonar[SONAR_NUM] = { // Sensor object array.
NewPing(7, 8, MAX_DISTANCE), // NewPing sonar(trigger_pin,echo_pin[,max_cm_distance]);
NewPing(12, 13, MAX_DISTANCE),
};
void setup() {
 
// Setup Sonar Sensors ------------------------------
   Serial2.begin(115200);
 pingTimer[0] = millis() + 75; // First ping start in ms.
 for (uint8_t i = 1; i < SONAR_NUM; i++)
   pingTimer[i] = pingTimer[i - 1] + PING_INTERVAL;
 
 // setup DC motors ----------------------------------
 // all outputs to zero
 analogWrite(PWM1,0);
 analogWrite(PWM2,0);
 analogWrite(PWM3,0);
 analogWrite(PWM4,0);
 Serial1.begin(115200); //for Ethernet or Wifi
 
   
 // clear the input buffer
 while (Serial1.available())
    Serial1.read();  
 }
void loop() {
for (uint8_t i = 0; i < SONAR_NUM; i++) {
   if (millis() >= pingTimer[i]) {
     pingTimer[i] += PING_INTERVAL * SONAR_NUM;
     if (i == 0 && currentSensor == SONAR_NUM - 1)
       oneSensorCycle(); // Do something with results.
     sonar[currentSensor].timer_stop();
     currentSensor = i;
     cm[currentSensor] = 0;
     sonar[currentSensor].ping_timer(echoCheck);
   }
 } 
}
void echoCheck() { // If ping echo, set distance to array.
 if (sonar[currentSensor].check_timer())
   cm[currentSensor] = sonar[currentSensor].ping_result / US_ROUNDTRIP_CM;
}
void oneSensorCycle() { // Do something with the results.
 for (uint8_t i = 0; i < SONAR_NUM; i++) {
   Serial2.print(i);
   Serial2.print("=");
   Serial2.print(cm[i]);
   Serial2.print("cm ");
   // get the median distance out of 5 pings. 
    d[currentSensor] = sonar[currentSensor].convert_cm(sonar[currentSensor].ping_median(5));
    // d[currentSensor] = sonar[currentSensor].ping_cm();
    decide(); // go to the decision tree
 }
 Serial2.println();
}
void decide() { // the decision tree
if(Serial1.available() > 0)
 {
   char Command = Serial1.read();
   switch(Command)
   {
     // if 5 kill all outputs
     case '5':
       analogWrite(PWM1,0);
       analogWrite(PWM2,0);
       analogWrite(PWM3,0);
       analogWrite(PWM4,0);
     break;
     // in other cases switch to all combinations of forward and reverse
     // 255 means full speed
      case '8':
      if(d[1] < 5 && d[1] > 0) // check if there is platform to drive on
      {
        if(d[0] > 40) // If the object is far away or not existing, go on
        {
          analogWrite(PWM1,255);
          analogWrite(PWM2,0);
          analogWrite(PWM3,255);
          analogWrite(PWM4,0);
        }
        else if(d[0] > 15) // If the object is not too close, continue slowly
        {
          analogWrite(PWM1,55);
          analogWrite(PWM2,0);
          analogWrite(PWM3,55);
          analogWrite(PWM4,0);
        }
        else // stop if the object is too close
        {
          analogWrite(PWM1,0);
          analogWrite(PWM2,0);
          analogWrite(PWM3,0);
          analogWrite(PWM4,0);
        }
      }
      else // no platform detected: stop
      {
        analogWrite(PWM1,0);
        analogWrite(PWM2,0);
        analogWrite(PWM3,0);
        analogWrite(PWM4,0);
      }
      break;
// case 2 can be used to drive backwards as well, but we only have two sensors:
// one for obstacle avoidance, one to stay on the platform.
//      case '2':
//      if(d[1] > 40)
//      {
//        analogWrite(PWM1,0);
//        analogWrite(PWM2,255);
//        analogWrite(PWM3,0);
//        analogWrite(PWM4,255);
//      }
//      else if(d[1] > 15) // If the object is not too close, continue slowly
//      {
//        analogWrite(PWM1,0);
//        analogWrite(PWM2,55);
//        analogWrite(PWM3,0);
//        analogWrite(PWM4,55);
//      }
//      else // stop if the object is too close
//      {
//        analogWrite(PWM1,0);
//        analogWrite(PWM2,0);
//        analogWrite(PWM3,0);
//        analogWrite(PWM4,0);
//      }
//      break;
      
     case '4': // One may turn left. This time...
       analogWrite(PWM1,255);
       analogWrite(PWM2,0);
       analogWrite(PWM3,0);
       analogWrite(PWM4,255);
     break;
     
     case '6': // One may turn right. This time...
       analogWrite(PWM1,0);
       analogWrite(PWM2,255);
       analogWrite(PWM3,255);
       analogWrite(PWM4,0);
     break;      
    }
 } 
}

References

SPECIAL TRAVEL NEEDS. (n.d.). Retrieved October 27, 2017, from https://www.eurostar.com/rw-en/travel-info/travel-planning/accessibility

Deutsche Zentrale für Tourismus e.V. (2017, May 23). Retrieved October 27, 2017, from http://www.germany.travel/en/ms/barrier-free-germany/how-to-book/deutsche-bahn.html

Beantwoord: Bevindingen proef station 's Hertogenbosch. (n.d.). Retrieved October 27, 2017, from https://forum.ns.nl/archief-43/bevindingen-proef-station-s-hertogenbosch-873

Patent check. Retrieved October 27, 2017, from http://appft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.html&r=0&f=S&l=50&d=PG01&OS=train%2BAND%2Bwheelchair&RS=train%2BAND%2Bwheelchair&PrevList1=Prev.%2B50%2BHits&TD=1328&Srch1=train&Srch2=wheelchair&Conj1=AND&StartNum=&Query=lifting%2BAND%2Bwheelchair

Connell, J., & Viola, P. (1990). Cooperative control of a semi-autonomous mobile robot. In Proceedings IEEE International Conference on Robotics and Automation (Vol. 2, pp. 1118–1121). https://doi.org/10.1109/ROBOT.1990.126145

Dallaway, J. L., & Tollyfield, A. J. (1990). Task-specific people control of a robotic aid for disabled. Journal of Microcomputer Applications, 321–335.

Petry, M. R., Moreira, A. P., Braga, R. A. M., & Reis, L. P. (2010). Shared control for obstacle avoidance in intelligent wheelchairs. In 2010 IEEE Conference on Robotics, Automation and Mechatronics, RAM 2010 (pp. 182–187). https://doi.org/10.1109/RAMECH.2010.5513193

Steptoe, A., Shankar, A., Demakakos, P., & Wardle, J. (2013). Social isolation, loneliness, and all-cause mortality in older men and women. Proceedings of the National Academy of Sciences, 110(15), 5797-5801

Oishi, S. (2010). The psychology of residential mobility: Implications for the self, social relationships, and well-being. Perspectives on Psychological Science, 5(1), 5-21.

Ivens, L. and Kant, A. (2004). Ontspoord, Gehandicapten bij de NS. Tweede-Kamerfractie SP.

Brandl, C., Mertens, A. and Schlick, C. M. (2016), Human-Robot Interaction in Assisted Personal Services: Factors Influencing Distances That Humans Will Accept between Themselves and an Approaching Service Robot. Hum. Factors Man., 26: 713–727. doi:10.1002/hfm.20675

Hall, E. T. (1966). The hidden dimension: Man's use of space in public and private. London: The Bodley Head.

Walters, M. L., Syrdal, D. S., Koay, K. L., Dautenhahn, K., & te Boekhorst, R. (2008). Human approach distances to a mechanical-looking robot with different robot voice styles (pp. 707–712). Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication, August 1–3, 2008, Munich.

Koay, K. L., Syrdal, D. S., Walters, M. L., & Dautenhahn, K. (2007). Living with robots: Investigating the habituation effect in participants' preferences during a longitudinal human-robot interaction study (pp. 564–569). Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication, August 26–29, 2007, Jeju.

Butler, J. T., & Agah, A. (2001). Psychological effects of behavior patterns of a mobile personal robot. Autonomous Robots, 10, 185–202.

Złotowski, J. A., Weiss, A., & Tscheligi, M. (2012). Navigating in public space: Participants' evaluation of a robot's approach behaviour (pp. 283–284). Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction, March 5–8, 2012, Boston, MA.

Koay, K. L., Syrdal, D. S., Ashgari-Oskoei, M., et al. (2014). International Journal of Social Robotics, 6, 469. https://doi.org/10.1007/s12369-014-0232-4