PRE2020 3 Group11

From Control Systems Technology Group

The acceptance of self-driving cars



Abstract

Name | Student number | Email
Laura Smulders | 1342819 | L.a.smulders@student.tue.nl
Sam Blauwhof | 1439065 | S.e.blauwhof@student.tue.nl
Joris van Aalst | 1470418 | J.v.aalst@student.tue.nl
Roel van Gool | 1236549 | R.p.v.gool@student.tue.nl
Roxane Wijnen | 1248413 | R.a.r.wijnen@student.tue.nl

Problem statement

Self-driving cars are believed to be safer than manually driven cars. However, they cannot be 100% safe. Because crashes and collisions are unavoidable, self-driving cars should be programmed to respond to situations in which accidents are highly likely or unavoidable (Nyholm & Smids, 2016). Three moral problems are involved with self-driving cars. First, there is the problem of who decides how self-driving cars should be programmed to deal with accidents. Next, there is the moral question of who has to take the moral and legal responsibility for harms caused by self-driving cars. Finally, there is decision-making under risk and uncertainty.

First, there is the trolley problem, a moral problem concerning how machine intelligence, such as a self-driving car, should make moral decisions from a human perspective. For example, should a self-driving car hit a pregnant woman or swerve into a wall and kill its four passengers? There is also the question of moral responsibility for harms caused by self-driving cars. Suppose, for example, that there is an accident between an autonomous car and a conventional car; this will not only be followed by legal proceedings, it will also lead to a debate about who is morally responsible for what happened (Nyholm & Smids, 2016).

A lot of uncertainty is involved with self-driving cars. Nyholm and Smids illustrate this with a scenario in which a self-driving car must choose between colliding with a truck and swerving towards an elderly pedestrian. The self-driving car cannot acquire certain knowledge about the truck’s trajectory, its speed at the time of collision, or its actual weight. Second, focusing on the self-driving car itself: in order to calculate the optimal trajectory, the self-driving car needs perfect knowledge of the state of the road, since any slipperiness of the road limits its maximal deceleration. Finally, if we turn to the elderly pedestrian, we can again easily identify a number of sources of uncertainty. Using facial recognition software, the self-driving car can perhaps estimate his age with some degree of precision and confidence, but it can merely guess his actual state of health (Nyholm & Smids, 2016).

The decision-making about self-driving cars is more realistically represented as being made by multiple stakeholders: ordinary citizens, lawyers, ethicists, engineers, risk-assessment experts, car manufacturers, governments, etc. These stakeholders need to negotiate a mutually agreed-upon solution (Nyholm & Smids, 2016). This report will focus on the relevant factors that contribute to the acceptance of self-driving cars, with the main focus on the private end-user, taking into account the ethical theories of utilitarianism, Kantianism, virtue ethics, deontology, ethical pluralism, ethical absolutism and ethical relativism.

State-of-the-art/Hypothesis

Research question:

What are the relevant factors that contribute to the acceptance of self-driving cars for the private end-user?


The developments and advances in the technology of autonomous vehicles have recently brought self-driving vehicles to the forefront of public interest and discussion. In response to the rapid technological progress of self-driving cars, governments have already begun to develop strategies to address the challenges that may result from their introduction (Schoettle, 2014). The Dutch national government aims to take the lead in these developments and prepare the Netherlands for their implementation. The Ministry of Infrastructure and the Environment has opened the public roads to large-scale tests with self-driving passenger cars and trucks. The Dutch cabinet has adopted a bill which in the near future will make it possible to conduct experiments with self-driving cars without a driver being physically present in the vehicle (Mobility, public transport and road safety, n.d.).

The end-consumers (the actual drivers) will eventually decide whether self-driving cars successfully materialize on the mass market. However, the lack of wider empirical evidence on the user perspective forms the rationale for this research. User resistance to change has been found to be an important cause of many implementation problems, so it is very probable that the self-driving car will meet considerable resistance. It is likely that a significant percentage of drivers will not be comfortable with fully autonomous driving; people might experience driving as adventurous, thrilling and pleasurable (König, 2017). There is also the question whether self-driving cars can be seen as providing the ultimate level of autonomy when they make people dependent on the technology. The fact that self-driving cars could be continuously tracked could also lead to privacy issues. Another potential barrier towards self-driving cars is the risk of a ‘misbehaving computer system’: with autonomous vehicles, criminals or terrorists might be able to hack into cars and use them for illegal purposes. Further, the unavoidable rate of failures and crashes could lead to mistrust, especially as people tend to underestimate the safety of technology while putting excessive trust in human capabilities such as their own driving skills (König, 2017).

In several recent surveys on the topic of self-driving vehicles, the public has expressed some concern regarding owning or using vehicles with this technology. In the survey ‘Public opinion about autonomous and self-driving vehicles in the U.S., the U.K., and Australia’, the majority of respondents had previously heard of self-driving vehicles, had a positive initial opinion of the technology, and had high expectations about its benefits (Schoettle, 2014). However, the majority of respondents expressed high levels of concern about riding in self-driving cars, about security issues related to self-driving cars, and about self-driving cars not performing as well as actual drivers. Respondents also expressed high levels of concern about vehicles without driver controls (Schoettle, 2014). In the survey ‘User’s resistance towards radical innovations: The case of the self-driving car’, the findings are that people who used a car more often tended to be less open to the benefits of self-driving cars. The most pronounced desire of respondents was the possibility to manually take over control of the car whenever wanted. This indicates that drivers want to be able to decide when to switch to self-driving mode and to have the option to resume control in situations where they do not trust the technology. In that survey, the most severe concern involving the car and the technology itself was the fear of possible attacks by hackers (König, 2017).

As mentioned in the problem statement, scientific articles discuss three moral problems involving self-driving cars: the problem of who decides how self-driving cars should be programmed to deal with accidents, the moral question of who has to take the moral and legal responsibility for harms caused by self-driving cars, and the decision-making of risks and uncertainty (Nyholm & Smids, 2016).

This report will focus on the relevant factors that contribute to the acceptance of self-driving cars for the private end-user. Together with the literature research and the several surveys conducted on the topic of self-driving vehicles, these relevant factors will be the ethical theories, the moral and legal responsibility, safety, privacy and the perspective of the private end-user.

Survey

Introduction

Method

Fully completed surveys were received for 115 respondents.


Research design

For this questionnaire, a non-probability convenience sampling method was applied that leveraged the group’s broad networks. Even though convenience sampling means that the sample is not representative, it was a feasible way to reach the crucial audience and to collect relevant data forming first evidence. As the questionnaire was aimed at the general public, no strict geographical scope was imposed, in order to reach as many different people as possible and to obtain first indications of drivers’ attitudes towards self-driving vehicles that are not tied to particular regions. In practice, the survey was conducted in the Netherlands.


Data collection

Data was collected over a one-week time frame in March 2021 using an online questionnaire built with Microsoft Forms (Microsoft Forms, 2021), a web-based survey tool. This method was chosen for several reasons. The assessed information was widely available among the public. Due to Covid-19, an online approach made it easier to reach people while ensuring physical distancing. And by not requiring an interviewer to be present, it reduced potential bias as well as cost and time. Microsoft Forms was used because it offers a safe environment and meets EU privacy standards.

Respondents were reached by sending out emails and private messages on social media, such as WhatsApp, including both a personalized invitation letter, explicitly stating self-driving vehicles as the topic of the research, and a direct link to the online questionnaire. A consent form was included on the cover page of the questionnaire, where respondents were assured of anonymity and confidentiality. Given the study’s explorative nature, reaching a large number of respondents was prioritized; the target was a minimum of 100 respondents.


Measures

In the questionnaire, several relevant factors related to self-driving vehicles were examined. The main topics addressed in the questionnaire were as follows:

- Familiarity with self-driving vehicles

- Expected benefits of self-driving vehicles

- Concerns about different implementations of self-driving vehicles

- Favored ethical settings in self-driving vehicles

- Acceptance of legal responsibility in unavoidable crashes with self-driving vehicles


Personal car use and demographics

In the first part of the questionnaire, participants were asked whether they have a driver’s license. Additionally, the respondents were asked how often they drive a car, with the answer options ‘(almost) every day’, ‘weekly’, ‘monthly’, ‘annually’ and ‘never’. Furthermore, demographic questions regarding age and education were asked.

Familiarity with self-driving vehicles

Participants’ existing knowledge about self-driving vehicles was assessed. Respondents were presented with a set of rating questions using an even, numerical Likert scale of four points ranging from ‘unfamiliar’ (1) to ‘familiar’ (4).

Expected benefits of self-driving vehicles

Participants were further asked to rate their agreement with statements reflecting presumed benefits of the use of self-driving vehicles. To allow for a ‘neutral’ opinion, the statements were combined with a 5-point scale ranging from ‘very unlikely’ (1) to ‘very likely’ (5). A 5-point Likert scale was used because in forced-choice experiments with a 4-point Likert scale, choices can be contaminated by random guesses.

Concerns about different implementations of self-driving vehicles

After the expected benefits of self-driving vehicles, respondents were asked to rate their concerns with statements regarding self-driving vehicles by using a 4-point Likert scale ranging from ‘not concerned’ (1) to ‘very concerned’ (4).

Favored ethical settings in self-driving vehicles

The preferred ethical setting with which participants would like to see self-driving vehicles on the road was assessed with 5 ranking options, from first choice to last choice. Furthermore, statements regarding ethical settings used in self-driving vehicles were assessed with a 5-point Likert scale, to allow a neutral opinion, ranging from ‘strongly disagree’ (1) to ‘strongly agree’ (5).

Acceptance of legal responsibility in unavoidable crashes with self-driving vehicles

Lastly, participants were asked to rate their agreement with statements about the legal responsibility in unavoidable crashes with self-driving vehicles, again with a 5-point Likert scale, to allow a neutral opinion, ranging from ‘very unlikely’ (1) to ‘very likely’ (5).


The full text of the questionnaire is included in the appendix.

Results

[Figures 1-9: Education, Drivers license, Car usage, Familiarity, Advantages, Worries, Ethics, Agree, Usage]

The major part of our respondents is highly educated. We have too few lower-educated respondents to compare between levels of education. We can only compare between age groups, where we can see differences between how much people use a car and how willing they are to accept a self-driving vehicle.

In general, we see that people think that there will be many advantages, but they also worry about some probable disadvantages. People think these advantages will occur in respective order from most probable to least probable:

1. Better fuel-savings (38.6% somewhat likely, 46.5% very likely)

2. Less traffic jams (39.5% somewhat likely, 39.5% very likely)

3. Less accidents (54.5% somewhat likely, 26.3% very likely)

4. Lower vehicle emissions (36% somewhat likely, 30.7% very likely)

5. Lower insurance rates (37.2% somewhat likely, 17.7% very likely)

6. Decreased severity of accidents (45.1% somewhat likely, 10.6% very likely)

7. Shorter travels (23.7% somewhat likely, 12.3% very likely)

It is remarkable that the majority of the respondents agree with all advantages except shorter travels, even though a computer is able to compute the most efficient route and a human being is not. Overall, we can conclude that people believe self-driving vehicles will bring many advantages. The disadvantages are ordered below in the same way as the advantages:

1. Driving in a vehicle without a human able to intervene (42.7% worried, 30.1% very worried)

2. System security (against hackers) (39.3% worried, 15.9% very worried)

3. Confused self-driving vehicles in unpredictable conditions (39.8% worried, 11.1% very worried)

4. Safety consequences of device-malfunction or system failure (35.5% worried, 13.1% very worried)

5. Interaction with non-self-driving vehicles (36.9% worried, 10.7% very worried)

6. Legal liability for drivers/owners (38.8% worried, 7.8% very worried)

These are the six most likely disadvantages according to our respondents. We can conclude from this that self-driving vehicles will be much more accepted if the option to intervene is implemented. Luckily, this is not hard for manufacturers to realize, but it will certainly have legal consequences. In general, people feel safer when they are in control; even if they must act quickly to prevent an accident, they at least have a feeling of being in control. Second, there is system security. We can conclude that the majority of people would like significant proof that it is hard for hackers to break into the system of a car, or of multiple cars at the same time. This is certainly important, and a major part of the manufacturers’ focus is already on this aspect. It makes sense that people worry about this point, because even if it is true that fewer accidents will happen, system breaches could cause far more trouble if security is handled carelessly. Slightly more than half of the respondents are worried about vehicles being confused by situations they do not recognize and that are therefore hard or impossible to predict. Over time this aspect will improve, and perhaps all situations will become predictable somewhere in the future. For now, safety could be guaranteed by making the car pull over when it really does not know what else to do.

From the ninth question, we can rank the five ethical settings by how much respondents seem to prefer them:

1. The car should always opt to inflict as little damage as possible, to the fewest people, whether they are bystanders or passengers.

2. The car should not be allowed to make an explicit choice between human lives and therefore should not be able to intervene before an unavoidable accident. As a result, there will be a random victim.

3. The car should always prioritize the lives and health of the passengers above those of bystanders.

4. The car should always prioritize the lives and health of bystanders above those of the passengers.

5. The choice which the car makes, should be based on what most road-users want.

We see that the respondents think that the car should, in general, not prioritize passengers or bystanders above each other. Instead, they want the least damage to be inflicted, or the car not making a decision at all. People do not think that the owner or bystanders should be preferred, which could be because neither group usually causes the accident. Still, they have a slight preference for saving the owner’s life. This could encourage more people to make use of self-driving cars, which is what we want. It could also be due to the fact that bystanders are more likely than the owner to cause the accident: the owner is not in control, but a bystander could, for example, jump in front of the car through his own fault.

The conclusion of question 10 is very simple: people want the ethical setting to be set by the manufacturer, but they do think it is important which one the manufacturer chooses.


Amount of responses

- 115 responses in total.
- 104 responded to q2.
- 115 responded to q3.
- 114 responded to q4.
- 114 responded to q5.
- 114 responded to q6.
- 112 responded to everything in q7.
  - 1 person did not respond to subquestions 1, 2, 7
  - 1 person did not respond to subquestion 2
  - 1 person did not respond to subquestions 3-7
  - So including partly complete responses there are 115 responses.
- 67 responded to everything in q8 (39 responses had to be thrown out; see the discussion of question 8).
  - 1 person did not respond to subquestion 3
  - 1 person did not respond to subquestion 4
  - 1 person did not respond to subquestion 6
  - 3 persons did not respond to subquestion 12
  - 1 person did not respond to subquestions 2-6, 8-10, 12
  - 1 person did not respond to subquestions 1-12
  - 1 person did not respond to subquestions 2-12
  - So including partly complete responses there are 75 responses (1 person completely skipped q8).
- 115 responded to everything in q9.
- 112 responded to everything in q10.
  - 1 person did not respond to subquestions 2, 3
  - 2 persons did not respond to subquestion 3
  - So including partly complete responses there are 115 responses.
- 110 responded to everything in q11.
  - 2 persons did not respond to subquestion 2
  - 1 person did not respond to subquestion 4
  - 1 person did not respond to subquestion 5
  - 1 person did not respond to subquestions 2, 3, 4, 5
  - So including partly complete responses there are 115 responses.

For the percentages in the next section, blank answers were simply not included. So if, for instance, for q7 three people did not respond to subquestion 2, then the percentages of that subquestion are calculated as a percentage of 112. In the grouped section blank answers were included (except the removed answers from q8).
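The blank-handling described above can be sketched as follows. The `percentages` helper and the answer data are hypothetical illustrations, not survey data; only the drop-blanks-then-normalize logic reflects the text:

```python
# Sketch of the percentage calculation described above: blank answers are
# dropped, so each subquestion's percentages are taken over its own number
# of non-blank responses (e.g. 112 instead of 115 for subquestion 7.2).

def percentages(responses):
    """Map each answer option to its share of the non-blank responses."""
    answered = [r for r in responses if r is not None]  # drop blanks
    total = len(answered)
    return {option: 100 * answered.count(option) / total
            for option in sorted(set(answered))}

# Hypothetical subquestion with 3 blanks out of 8 submissions:
subq = ["likely", "likely", None, "unlikely", "likely", None, "unlikely", None]
print(percentages(subq))  # shares of the 5 non-blank answers
```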

Two further things are good to mention in the discussion. First, we should have made every question compulsory, since that would have avoided blank answers. Second, although we cannot be sure why some people did not fill in some questions, it is likely that part of them were not reading everything thoroughly. To combat this we could have included a test question.


Basic results (not grouped per demographic category)

q2: Since this is an open question, two intervals were made. The youngest person is 17, the oldest 80. About half of the respondents are between 17-30, and the other half between 42-80 (no one between 30 and 42). This is likely because we asked people around us to fill in the survey, which mainly means friends of a similar age and (older) family members. With three categories there were too few people in one category to justify it, and since there is an obvious dichotomy in the data, it makes sense to use that dichotomy.

51.3% <31, 39.1% >41, 9.6% no answer

(out of 115 responses)
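The two-interval grouping described above can be sketched as follows; the ages listed in the example are made up for illustration, only the cut-off logic (no respondent between 31 and 41, blanks counted as "no answer") reflects the text:

```python
# Sketch of the q2 age grouping: raw ages from the open question fall into
# two clusters (17-30 and 42-80), plus blanks for "no answer".

def age_group(age):
    if age is None:
        return "no answer"
    return "<31" if age <= 30 else ">41"  # nobody reported an age of 31-41

ages = [17, 22, 80, 55, None, 26, 42]  # hypothetical raw answers
counts = {}
for a in ages:
    g = age_group(a)
    counts[g] = counts.get(g, 0) + 1
print(counts)  # tally per group
```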

q3:
2.6% no education / incomplete primary education
6.1% High school diploma
7.8% MBO
32.2% HBO/WO (no diploma)
29.6% HBO/WO Bachelor diploma
20.0% HBO/WO Master diploma
1.7% Doctor, PhD

(out of 115 responses)


q4: 89.5% driving license, 10.5% no driving license

(out of 114 responses)

q5: 28.1% (nearly) every day, 43.0% weekly, 24.5% monthly, 4.4% yearly, 0.0% never

(out of 114 responses)

q6: 22.8% unfamiliar, 16.7% somewhat unfamiliar, 42.1% somewhat familiar, 18.4% familiar

(out of 114 responses)

[q7:] For all the sub-questions of q7, these are the possible answers: (Very unlikely, Somewhat unlikely, No opinion/neutral, Somewhat likely, Very likely)

Fewer accidents: 0.9%, 14.0%, 4.4%, 54.4%, 26.3%
Decreased severity of accidents: 8.0%, 19.5%, 16.8%, 45.1%, 10.6%
Fewer traffic jams: 2.6%, 11.4%, 7.0%, 39.5%, 39.5%
Shorter travel-length: 7.9%, 26.3%, 29.8%, 23.7%, 12.3%
Lower vehicle emission: 1.8%, 12.3%, 19.3%, 36.0%, 30.7%
Better fuel-savings: 0.9%, 7.9%, 6.1%, 38.6%, 46.5%
Lower insurance rates: 5.3%, 13.3%, 26.5%, 37.2%, 17.7%


q8: The first 39 respondents were given a wrong version of the survey for this question; one of the possible answers did not make sense, so we decided to toss out the first 39 respondents for question 8. This leaves 76 respondents (minus the number of people per subquestion who did not fill in anything).

For all the sub-questions of q8, these are the possible answers: (Not concerned, slightly concerned, concerned, very concerned)

Driving in a vehicle with autonomous technology: 32.0%, 48.0%, 14.7%, 5.3%
Safety consequences of device malfunction or system failure: 9.6%, 43.8%, 41.5%, 15.1%
Legal liability for drivers/owners: 13.9%, 43.1%, 36.1%, 6.9%
System security (against hackers): 12.5%, 33.3%, 40.3%, 13.9%
Data privacy (location and destination): 31.5%, 36.9%, 17.8%, 13.7%
Interaction with non-self-driving vehicles: 26.4%, 22.2%, 38.9%, 12.5%
Interaction with pedestrians or cyclists: 13.5%, 39.2%, 33.8%, 13.5%
Learning to use self-driving cars: 64.4%, 24.6%, 9.6%, 1.4%
System performance under bad weather conditions: 36.1%, 50.0%, 9.7%, 4.2%
Confused self-driving vehicles in unpredictable conditions: 8.2%, 43.9%, 35.6%, 12.3%
Driving ability of self-driving vehicles compared to humans: 46.0%, 39.2%, 13.5%, 1.3%
Driving in a vehicle without a human able to intervene: 12.9%, 15.7%, 44.3%, 27.1%

q9: This is a ranking-based question, so here we note the percentage of people who had each of the options as their first, second, …, fifth choice. The most popular options are listed first.

Car should always opt to inflict as less damage as possible, to the lowest amount of people, whether they are bystanders or passengers: 59.1%, 26.1%, 10.4%, 2.6%, 1.7%

The car should not be allowed to make an explicit choice between human lives and therefore should not be able to intervene in the case of an unavoidable accident. As a result, there will be a random victim: 16.5%, 25.2%, 13.0%, 24.3%, 20.9%


The car should always prioritize the lives and health of the passengers above those of bystanders: 13.0%, 15.7%, 29.6%, 27.8%, 13.9%

The car should always prioritize the lives and health of the bystanders above those of the passengers: 7.0%, 23.5%, 24.3%, 27.8%, 17.4%

The choice that the car makes should be based on what the majority of road-users want: 4.3%, 9.6%, 22.6%, 17.4%, 46.1%
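The report does not state how the "most popular first" ordering was derived; one simple method consistent with the listed order is the mean rank, a weighted average of the rank positions using the rank-share percentages reported above (lower mean rank = more preferred). The shortened labels below are ours:

```python
# Mean-rank ordering of the q9 ethical settings, from the reported shares
# of respondents placing each option 1st..5th.
ranks = {
    "least damage, fewest people": [59.1, 26.1, 10.4,  2.6,  1.7],
    "no explicit choice (random)": [16.5, 25.2, 13.0, 24.3, 20.9],
    "prioritize passengers":       [13.0, 15.7, 29.6, 27.8, 13.9],
    "prioritize bystanders":       [ 7.0, 23.5, 24.3, 27.8, 17.4],
    "majority of road-users":      [ 4.3,  9.6, 22.6, 17.4, 46.1],
}

def mean_rank(shares):
    # Weighted average of positions 1..5; normalize because the shares
    # need not sum to exactly 100 due to rounding.
    total = sum(shares)
    return sum(pos * s for pos, s in enumerate(shares, start=1)) / total

ordering = sorted(ranks, key=lambda k: mean_rank(ranks[k]))
print(ordering)  # reproduces the order of the list above
```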

This can be summarized in the following table (percentage of respondents ranking each option 1st-5th):

Ethical setting | 1st | 2nd | 3rd | 4th | 5th
Least damage to the fewest people | 59.1% | 26.1% | 10.4% | 2.6% | 1.7%
No explicit choice (random victim) | 16.5% | 25.2% | 13.0% | 24.3% | 20.9%
Prioritize passengers | 13.0% | 15.7% | 29.6% | 27.8% | 13.9%
Prioritize bystanders | 7.0% | 23.5% | 24.3% | 27.8% | 17.4%
Based on what most road-users want | 4.3% | 9.6% | 22.6% | 17.4% | 46.1%


q10: For all the sub-questions of q10, these are the possible answers: (Strongly disagree, disagree, no opinion/neutral, agree, strongly agree)

Self-driving vehicles must not be sold with an ethical setting that the user can adjust. Instead this setting must be determined by, for example, the government or the manufacturer: 0.9%, 7.1%, 13.3%, 33.6%, 45.1%

I would rather buy a self-driving car with an adjustable ethical setting, than a self-driving car with an unadjustable ethical setting: 30.4%, 33%, 18.8%, 14.3%, 3.6%

When I will use a self-driving car, the specific ethical setting would be important for me: 8.1%, 13.5%, 23.4%, 37.8%, 16.2%


q11: For all the sub-questions of q11, these are the possible answers:

(Very unlikely, somewhat unlikely, no opinion/neutral, somewhat likely, very likely)


Manufacturers are fully liable when a self-driving car causes an accident, even if this discourages them to produce self-driving cars: 2.6%, 26.1%, 8.7%, 38.3%, 24.3%

Manufacturers are partially liable, in order to make them produce self-driving cars while encouraging them to correct errors: 2.7%, 9.7%, 13.3%, 52.2%, 22.1%

I am liable when my self-driving car causes an accident, even if I cannot intervene (fully autonomous vehicle): 49.1%, 27.2%, 9.6%, 11.4%, 2.6%

I am liable when my self-driving car causes an accident, because I have the possibility to intervene (semi-autonomous vehicle): 2.7%, 9.7%, 12.4%, 46.0%, 29.2%

Everyone with a self-driving car is liable when a self-driving car causes an accident, by means of mandatory insurance or tax: 3.6%, 17.0%, 32.1%, 28.6%, 18.8%

Discussion

Target Group

The major part of our respondents is highly educated. We have too few lower-educated respondents to compare between levels of education. We can only compare between age groups, where we can see differences between how much people use a car and how willing they are to accept a self-driving vehicle. The benefit of a target group of highly educated people is that they have much general knowledge and are able to understand the possible effects of self-driving cars. The disadvantage is, of course, that less highly educated people have to accept self-driving cars as well. The absence of respondents between 31 and 41 years can have a negative effect on the accuracy of our survey, because the opinion of this age group might differ from the other two age groups. Smaller and more specific age groups might also have worked out better if there had been more respondents; now the elderly are in the same category as, for example, people in their forties. Also, just 10.5% of the respondents did not have a driver’s license, which is very little. People without a driver’s license may have different opinions, because self-driving cars would make car use more accessible for them; this group is not represented enough. Finally, 22.8% of the respondents indicated being unfamiliar with self-driving vehicles. If they picked this answer appropriately, it means that they had never heard of self-driving vehicles and were therefore unaware of what a self-driving car is and of its possible advantages or disadvantages. This is almost a quarter of the respondents, and they might have given random answers.

Survey

A few issues could have had an impact on the outcome of our survey. First, some questions were not answered by some respondents. These answers were treated as if they did not exist, which is why the number of answers can vary between questions. Percentages were calculated from the number of answers to a specific question, not from the total number of respondents; for example, three people did not respond to subquestion 7.2, so in that case 112 answers count as 100%. Differences in the total number of answers can negatively affect the accuracy of the research. This could have been avoided by making the questions compulsory, so that the form can only be submitted when every question is answered. There are multiple possible reasons why these questions were not answered. First, people may have quickly filled in the answers and submitted the form without much thinking. We have seen massive differences in the completion time of the survey: the lowest completion time was 2 minutes and 10 seconds, while the mean completion time was 12:22. It must be mentioned that there were also completion times of over two hours, which must have been caused by people who opened the form in their browsers but filled it in much later. This has no immediate effect on the accuracy of the answers, but it has raised the mean completion time. Still, 2 minutes and 10 seconds is very quick; people who filled in the form this quickly probably forgot to fill in something or answered questions carelessly. Another reason for not filling in something is that respondents did not know what to answer or did not understand the question. In that case it may actually benefit a correct representation of the truth that they did not just pick an answer, although for these cases the option ‘neutral’ was included.

This brings us to another point of discussion: when the form was first opened, question 8 included five possible answers: ‘not worried’, ‘somewhat worried’, ‘neutral’, ‘worried’ and ‘very worried’. Later on, when 39 respondents had already submitted the form, ‘neutral’ was removed from the list of possible answers because it did not fit this scale. ‘Neutral’ is quite similar to ‘not worried’, because when one is not worried, his or her thoughts about the subject are neutral; it does not fit between ‘somewhat worried’ and ‘worried’. The answers of these first 39 respondents were omitted from the results, because this confusing answer scale could have negatively affected the accuracy of their answers.

Answers

Question 7

It is remarkable that the majority of the respondents agree with all advantages except shorter travels, where opinions are almost equally divided, even though a computer is able to compute the most efficient route and a human being is not. Overall, we can conclude that people believe self-driving vehicles will bring many advantages. This means that people are positive towards self-driving vehicles in general, because they think these will bring many benefits. There is no significant difference between age groups in the results. This is remarkable, since results from other surveys point out that the elderly see fewer advantages than younger people. It could be caused by the fact that there is not a specific age group for the elderly; if there were one, the difference might have been more significant.

Question 8

We can conclude from this question that self-driving vehicles will be much more accepted if the option to intervene is implemented. Fortunately, this is not hard for manufacturers to realize, but it will surely have legal consequences: if a human is able to intervene, partial or full liability could rest on the ‘driver’ instead of on the manufacturer. In general, people feel safer when they are in control; even if they must act quickly to prevent an accident, they at least have a feeling of being in control. Second, there is system security. We can conclude that the majority of people would like significant proof that it is hard for hackers to break into the system of a car, or of multiple cars at the same time. This is certainly important, and a major part of the manufacturers’ focus is already on this aspect. It makes sense that people worry about this point: although many people see the advantage of fewer accidents, a system security breach could cause far more trouble. If the network of a manufacturer were hacked by terrorists, for example, they might be able to make every car crash. Investing in system security would therefore be positive for general acceptance. Slightly more than half of the respondents are worried about vehicles being confused by situations they do not recognize and that are therefore hard or impossible to predict. Over time this aspect will improve, and perhaps all situations will become predictable somewhere in the future.

While older people are as positive as younger people about the advantages, they are more negative about the disadvantages. They are more worried about driving in a self-driving car: 46.43% of people below 31 are not worried about driving in a self-driving car, while only 23.08% of people above 42 are not worried. This might be because they have been used to conventional cars for longer than younger people, and therefore it will take more time for them to get used to self-driving cars. Younger people are also less worried about data privacy (42.86% versus 21.05% not worried). Either they do not care what happens with their data, or they do not think their data will be handled insecurely. Still, more than 50% of all people are worried or very worried about data privacy. Existing privacy laws should be adapted to this new technology, and new laws should be made, to make more people accept these cars.
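The report compares the 46.43% and 23.08% "not worried" shares informally; whether such a gap is statistically meaningful could be checked with, for example, a two-proportion z-test. The group sizes below (56 and 52) are our assumptions chosen only because they exactly reproduce the reported percentages; they are not figures from the survey:

```python
import math

# Hypothetical group sizes reproducing the reported percentages:
# 26/56 = 46.43% of the under-31 group and 12/52 = 23.08% of the over-42
# group answered "not worried" about driving in a self-driving car.
x1, n1 = 26, 56
x2, n2 = 12, 52

p1, p2 = x1 / n1, x2 / n2
pooled = (x1 + x2) / (n1 + n2)                      # pooled proportion
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se                                  # test statistic
p_value = math.erfc(abs(z) / math.sqrt(2))          # two-sided p-value

print(f"z = {z:.2f}, p = {p_value:.3f}")
```

Under these assumed group sizes the difference would be significant at the 5% level, but with the real subgroup sizes (and the 39 discarded q8 responses) the outcome could differ.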

It is remarkable that older people are not more worried about how to use self-driving cars, even though they are worried about riding in a self-driving car in general. This could be because they reason that, since the car drives itself, they do not have to do anything. In general, older people are more worried about the negatives than younger people are. This is in line with expectations: as already mentioned, older people are generally more negative about self-driving cars and new technologies. It is odd that there was no difference between the age groups in their expectations of the positives, while on balance older people are still more negative.

Question 10

The conclusion of question 10 is simple: people want the ethical setting to be set by the manufacturer, but they do consider it important which setting the manufacturer chooses. At first glance this seems contradictory: if the ethical setting matters so much, why would one not prefer a car whose settings can be bent to one’s will? It makes sense that people want other cars’ settings to be set by the manufacturer, because otherwise many drivers might opt to be saved themselves at the expense of bystanders. But why would that stop someone from buying an adjustable car themselves? If you can adjust the settings, you can choose precisely the one you want, and if the setting is important, that would be ideal. A plausible explanation is that people do not want to be responsible for the choice the car makes in the end: if you have chosen a specific setting, you are partially responsible for the outcome of an accident, whereas if the manufacturer has chosen everything, you could not have done anything about the outcome. There are no significant differences between age groups, except that younger people see it as less of a problem when users can adjust ethical settings themselves: younger people answer 22.81% strongly disagree and 33.33% disagree, while older people answer 38.64% strongly disagree and the same for disagree. Still, overall it would be positive for acceptance if the manufacturer defines the ethical settings. That is convenient, because then it is obvious what other cars will do and no one has to worry about choosing on behalf of others.

Question 11

Most people are willing to use self-driving cars in four of the five situations. We see that many people want manufacturers to be partially liable, as this would encourage them to produce and develop self-driving cars. Many people could, however, have been pushed towards this answer by its formulation: the partial-liability answer includes the word ‘encourage’, which is positive, while the full-liability answer includes the word ‘discourage’, which is negative. We should keep in mind that some people may have chosen partial liability over full liability because of this positive versus negative tone. The answer with the highest positive response was that users are liable themselves when they are able to intervene. This means that an option to intervene can be recommended, which agrees with the conclusion to question 8. As noted in the discussion of question 8, users may be liable when such an option exists, but we can conclude from this result that many do not see that as a problem. Again, some people could be biased towards this answer because it seems to have the highest moral value.

Conclusions

Appendix

Acceptance of fully self-driving cars

Consent Form

Consent to research participation for the study ‘Acceptance of fully self-driving cars’. This document provides information about the study ‘Acceptance of fully self-driving cars’.

Before the experiment begins, it is important that you take note of the procedure followed in this experiment and that you consent to voluntary participation. Please read this document carefully.


Purpose and benefit of the experiment

The aim of this study is to measure which relevant factors contribute to the acceptance of the fully self-driving car for private use. The study is conducted by the students Laura Smulders, Sam Blauwhof, Joris van Aalst, Roel van Gool and Roxane Wijnen of Eindhoven University of Technology, under the supervision of dr. ir. M.J.G. van de Molengraft.


Procedure

You complete this survey online in your web browser. The survey asks a number of questions about the following relevant factors: user perspective, safety, ethical setting, and responsibility. Some additional demographic questions are also asked.


Duration

The survey takes approximately 5-10 minutes.


Voluntariness

Your participation is entirely voluntary. You may refuse to take part in the study without giving reasons, and you may end your participation at any moment by closing the browser. You may also withdraw permission to use your data afterwards (within 24 hours). None of this will have negative consequences at any time.


Confidentiality

We do not share personal information about you with anyone outside the research team. The information collected in this research project is used for writing scientific publications and is reported at group level only. Everything is fully anonymous and nothing can be traced back to you. Only the researchers know your identity, and that information is stored securely.


Further information

If you would like further information about this study, or in case of complaints, you can contact Roel van Gool (roel.vangool@gmail.com).


Consent to research participation

By clicking ‘Next’ below, you indicate that you have understood this document and the procedure, and that you consent to voluntary participation in this study by the above-mentioned students of Eindhoven University of Technology.


Demographics


What is your age?


Open question


What is your gender?


- Male

- Female

- Other


What is your highest completed level of education?


- No education / incomplete primary education

- Secondary school diploma

- Secondary vocational education (MBO)

- Higher professional or university education without diploma (HBO/WO)

- Bachelor’s degree (HBO/WO)

- Master’s degree (HBO/WO)

- Doctorate, PhD


Do you have a car driving licence?


- Yes

- No


How often do you use a car?


- (Almost) every day

- Weekly

- Monthly

- Yearly

- Never


User perspective


How familiar are you with self-driving cars?


Unfamiliar, Somewhat unfamiliar, Somewhat familiar, Familiar


How likely do you think it is that the following advantages will occur with the use of fully self-driving cars?


Very unlikely, Somewhat unlikely, No opinion/neutral, Somewhat likely, Very likely


- Fewer accidents

- Reduced severity of accidents

- Less traffic congestion

- Shorter journeys

- Lower vehicle emissions

- Better fuel economy

- Lower insurance rates


Safety


(Not concerned, Somewhat concerned, Neutral, Concerned, Very concerned)


How concerned are you about the following issues related to fully self-driving vehicles?


- Riding in a vehicle with self-driving technology

- Safety consequences of equipment or system failure

- Legal liability of drivers/owners in accidents

- System security (against hackers)

- Data privacy (location and destination)

- Interaction with non-self-driving vehicles

- Interaction with pedestrians and cyclists

- Learning to use self-driving vehicles

- System performance in bad weather

- Self-driving vehicles becoming confused by unpredictable situations

- Driving ability of the self-driving vehicle compared to human driving ability

- Riding in a vehicle without the driver being able to intervene


Ethical setting


In some unavoidable accidents, the car will have to choose between different human lives, for example between those of the driver and those of pedestrians. It would even be possible to add a setting to a self-driving car that determines which choice the car should make in the event of an accident. The questions below concern this ethical setting.


Which ethical setting would you most like to see in self-driving cars on the road? Rank from favourite (1) to least favourite (5):


1. The car should always prioritize the life and health of the occupant(s) over those of bystander(s).

2. The car should always prioritize the life and health of the bystander(s) over those of the occupant(s).

3. The car should always choose to inflict the least harm on the fewest people, whether they are bystanders or occupants.

4. The car should not make an explicit choice between human lives, and should therefore not intervene in an unavoidable accident. As a result, the victim will effectively be random.

5. The choice the car makes should be based on what the majority of road users want.


Note to ourselves: from top to bottom they represent the following ethical theories: egoism, virtue ethics (kinda), utilitarianism, deontology, contractualism.


To what extent do you agree with the following statements:


Strongly disagree, Disagree, No opinion/neutral, Agree, Strongly agree


- Self-driving cars should not be sold with a user-adjustable ethical setting. Instead, this setting should be determined by, for example, the government or the manufacturer.

- I would rather buy a self-driving car with an adjustable ethical setting than one with a fixed ethical setting.

- If I were to use a self-driving car, its specific ethical setting would matter to me.


Responsibility


To what extent would you use the fully self-driving vehicle under the following conditions?


Very unlikely, Somewhat unlikely, No opinion/neutral, Somewhat likely, Very likely


1. Manufacturers are fully liable if a self-driving car causes an accident, even if this discourages them from producing self-driving cars.

2. Manufacturers are partially liable, so that they keep producing but are always encouraged to fix mistakes.

3. I am liable myself if my self-driving car causes an accident, even though I cannot intervene (fully autonomous car).

4. I am liable myself if my self-driving car causes an accident, because I have the option to intervene (semi-autonomous car).

5. Everyone who owns a self-driving car is liable when a self-driving car causes an accident, through a mandatory insurance or tax.


https://forms.office.com/Pages/DesignPage.aspx?fragment=FormId%3DR_J9zM5gD0qddXBM9g78ZIQEJ0K6qk1Epl7wQE_GwFJUQzRFUEg3RFVEMFVFVDY4NFVMQVJaRUgxQi4u%26Token%3D7a2197128d054f1d9d81e3056e2eafde

Relevant factors

Ethical theories

A key feature of self-driving cars is that the decision-making process is taken away from the person in the driver’s seat and instead bestowed upon the car itself. Several ethical dilemmas emerge from this, one of which is essentially a version of the trolley problem. When an unavoidable collision is about to occur, it is important to define the desired behaviour of the self-driving car. It might be the case that in such a scenario the car has to choose whether to prioritize the life and health of its passengers or of the people outside the vehicle. In real life such cases are relatively rare [reference 1], but the ethical theory underlying that decision will possibly have an impact on the acceptance of the technology. Self-driving vehicles that decide who might live and who might die are essentially in a scenario where some moral reasoning is required in order to produce the best outcome for all parties involved. Given that cars do not seem capable of moral reasoning, programmers must choose for them the right ethical setting on which to base such decisions. However, ethical decisions are often not clear-cut. Imagine driving at high speed in a self-driving car when the car in front suddenly comes to a halt. The self-driving car can either brake hard as well, possibly harming the passengers, or it can swerve into a motorcyclist, possibly harming them. This scenario can be regarded as an adapted version of the trolley problem. One could argue that since the motorcyclist is not at fault, the self-driving car should prioritize their safety; after all, the passenger made the decision to enter the car, putting at least some responsibility on them. On the other hand, people who might buy the self-driving car will expect not to be put in avoidable danger.
No matter the choice of the car, and the underlying ethical theory that it is (possibly) based on, the behaviour and decision-making of the car has a better chance of being socially accepted if it can be morally justified. Therefore, this section first highlights some possible ethical theories, and then discusses some relevant aspects surrounding the implementation of any of these theories.


Ethical theories under consideration

Although there are not many actions a car could take in the above-described scenario, there are many ethical theories that can help inform the car’s decision. The most prominent ethical theories that might prima facie be useful are utilitarianism, deontology, virtue ethics, contractualism, and egoism. There are also three meta-ethical frameworks which should be considered: relativism, absolutism, and pluralism. These frameworks cannot by themselves influence the decision-making process, as they make no normative claims of their own, but they are useful in deciding how the possible ethical knob of a self-driving car might work (see section …). Utilitarianism considers the consequences of actions, as opposed to the actions themselves. This means that the correct moral decision or action in any scenario is the one that produces the most good. Although ‘’good’’ is a subjective term, in most versions of utilitarianism it refers to the net increase in happiness or welfare for all associated parties [reference]. The circumstances or intrinsic nature of an action are not taken into account, in contrast with deontology. Deontology does not judge the morality of an action by its consequences, but by the action itself. Deontology posits that moral actions are those taken on the grounds of a set of pre-determined rules, which hold universally and absolutely. This means that for a deontologist, some actions are wrong or right no matter their outcome.

The third major normative ethical theory is virtue ethics. Virtue ethics emphasizes the virtues, or moral character, as opposed to rules or consequences. Virtues are seen as positive or ‘’good’’ character traits; examples of such traits are courage and modesty. A moral person should perform actions that realize these traits, and therefore moral actions are those which cause a person’s virtues to be realized.

Besides the three major classical normative ethical theories, there are two more prima facie relevant theories. The first is egoism. Normative egoism posits that the only morally right actions are those that maximize the individual’s self-interest. An egoist only considers the benefits and detriments other people experience insofar as those experiences influence the egoist’s own self-interest. Although it may not seem like it, egoism is very similar to utilitarianism, except that utilitarianism focuses on the maximum happiness of all people involved, while egoism focuses only on the maximum happiness of the individual.

The last ethical theory that can be applied to the adapted trolley problem is (social) contractualism. Contractualism does not make any claims about the inherent morality of actions, but rather posits that a moral action is one that is mutually agreed upon by all parties affected by the action. What this agreement should look like exactly differs per version of contractualism: in some versions there must be unanimous consent, while in other versions there must be a simple or a supermajority. A good action is therefore one that can be justified by other relevant parties, and a wrong action is one that cannot be justified by the same.


Ethical theories applied to the adapted trolley problem

First, let us apply utilitarianism to the adapted trolley problem. On a micro level, a self-driving car with a utilitarian ethical setting would first want to minimize the number of deaths, and then minimize the total number of severe injuries sustained by all people affected by the collision. This seems simple enough, but there are at least two issues with this implementation of a utilitarian setting. If, for instance, the technology is so advanced that it can target people based on whether they are wearing a helmet, then it would be safer for the car to collide with a biker wearing a helmet rather than one who is not, all else being equal. Now the biker with a helmet is targeted, even though they are the one putting in effort to be safe. This is unfair, and if it is implemented, some people may stop taking safety measures seriously in order not to be targeted by a utilitarian self-driving car. This would ultimately reduce the overall safety on the road, which is exactly the opposite of what a utilitarian wants.
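The micro-level utilitarian rule described above (minimize deaths first, severe injuries second) can be sketched as a lexicographic minimization. The maneuver names and harm estimates below are hypothetical, purely for illustration:

```python
# Hypothetical sketch of a utilitarian ethical setting: among the available
# maneuvers, pick the one that minimizes expected deaths first, using expected
# severe injuries only as a tie-breaker. All numbers are illustrative assumptions.

def choose_maneuver(options):
    """options: dict mapping maneuver name -> (expected_deaths, expected_injuries)."""
    # Python tuples compare lexicographically, so deaths dominate injuries.
    return min(options, key=lambda name: options[name])

options = {
    "brake_hard":      (0.0, 2.0),  # passengers possibly injured, nobody killed
    "swerve_to_biker": (0.4, 1.0),  # some chance of killing the motorcyclist
}
print(choose_maneuver(options))  # -> brake_hard (fewer expected deaths)
```

The lexicographic ordering captures the idea that, for this setting, no number of avoided injuries outweighs a single extra expected death.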

The second problem is that although people want other road users’ self-driving cars to adopt a utilitarian setting, they themselves would rather buy cars that give preferential treatment to passengers (REFERENCE). Therefore, if self-driving cars are only sold with a utilitarian ethical setting, fewer people might be inclined to buy them, again reducing the overall safety on the road.

There are multiple possible counters to these two issues that a ‘’true’’ utilitarian might propose. To counter the first problem, the utilitarian would simply not program the car to make a distinction between people who wear a helmet and those who do not. A distinction would also not be made in similar scenarios, since this solution is not only relevant to cases involving helmets. Of course, there are also scenarios where the safer of two options should be chosen by the self-driving car, assuming the same number of people are at risk in both options. The difference between a valid and an invalid safe choice is that some safety measures are explicitly taken (such as the decision to put on a helmet), while others are more a byproduct of another decision (such as riding a bus versus driving a car). Riding a bus might be safer than driving a car, but most bus passengers did not choose the bus for safety reasons; they might not own a car, or they ride the bus out of concern about climate change. Since people in this scenario did not choose the bus for safety reasons, it is likely they will also not stop riding the bus because of a slightly increased chance of being hit by a self-driving car. Of course, this is only a thought experiment, but if it also holds true in practice, then a utilitarian would find it acceptable for the self-driving car to choose the safer option in the bus-versus-car scenario, whereas in the helmet-versus-no-helmet scenario the utilitarian would not.

To counter the second problem: the ‘’true’’ utilitarian ultimately wants to reduce death and/or harm by reducing the number of traffic accidents. If in practice a utilitarian setting means that a significant number of people will not buy a self-driving car, then the utilitarian would rather have self-driving cars sold with an egoistic setting that gives passengers preferential treatment. Even though any individual accident involving such a car would be deadlier than with a utilitarian setting, accidents would decrease overall because more self-driving cars would be on the road.

There are more problems with a utilitarian approach to self-driving cars, unrelated to the two micro-versus-macro utilitarian problems just treated. One of these problems has to do with discrimination. In an unavoidable-collision scenario where the self-driving car has to hit either an adult man or a child, the adult has a higher chance of survival. Is the car therefore justified in choosing the man? A utilitarian would say the car is indeed justified, unless this decision were found to turn consumers away from purchasing and using self-driving cars. Prima facie this does not seem to be the case, but there is no major literature on this topic that gives any definitive or exploratory answer (REFERENCE OR NOT). In some countries, such as Germany, this type of discrimination has already been outlawed (REFERENCE).

The deontological ethical setting would not allow a choice to be made that harms or kills a person, no matter the number of lives potentially saved. Therefore, when faced with an unavoidable (possibly deadly) collision, the car would simply not make a decision at all, and events would play out ‘’naturally’’. In essence, this makes the actual ‘’chosen’’ collision somewhat random. As in the original trolley problem, the moral entity, in this case the car (or more accurately, the programmer who programs the ethics into the car), would simply not intervene at all. Deontologists hold that there is a difference between doing and allowing harm, and by not letting the car intervene in an unavoidable accident, both the passenger(s) and the programmer(s) are absolved of moral responsibility. Some people might be happy with such a setting, since many people could not fathom being (morally) responsible for the deaths of others. By entering a self-driving car with a utilitarian ethical setting, the passenger(s) cannot be absolved of some moral responsibility in the case of an accident, since they made a conscious decision to buy a car that has been programmed to make explicit decisions. The same cannot be said of passengers who enter a deontological self-driving car.

A virtue-ethics response to the adapted trolley problem is very hard to come up with. An ethical setting based on virtue ethics would want the car to make the decision that improves the virtues of the moral entity. The decision the car makes therefore depends on which virtue we want to improve. Take, for instance, bravery. One could posit that it is brave to accept danger to yourself if it means that other people will be safer for it. If we assume the moral entities to be the passenger(s), then the self-driving car would always choose to put the passengers in danger, since this would improve their bravery. There are two problems with this approach. Firstly, it is hard to optimize any decision the car makes, since it is impossible to find a decision that always improves on all virtues; and what are those virtues in the first place? Is it, for instance, virtuous to sacrifice yourself if you leave behind a family? Secondly, since the car is not actually a moral agent, whose virtues should the car’s decision improve: the programmers’ or the passengers’? This is unclear. If the programmers’ virtues should be improved, then it seems prima facie extremely unlikely that people would be willing to buy cars that might sacrifice them to improve the virtue(s) of a programmer they never met. If the passengers’ virtues should be improved, people might be slightly more sympathetic, but even then we assume most people do not want to sacrifice their lives to improve upon an abstract notion of virtue and morality.

If we take the perspective of self-driving car buyers and users, the ethical-egoist response is to prioritize the lives of the passengers above all else. As said in the ‘’utilitarian’’ part of this section, people who buy and use the car seem to prefer a self-driving car that always puts their own lives above those of others (REFERENCE). This setting could therefore also be regarded as the setting of a ‘’true’’ utilitarian. There is another possible benefit to this ethical setting, namely that egoistic cars are more predictable. If self-driving cars become very prevalent, any self-driving car must always account for the decisions other self-driving cars are making. Therefore, if all self-driving cars prioritize themselves, their road behaviour becomes more predictable to other self-driving cars. However, this argument is theoretical in nature, and some game theorists do not agree. The moral argument against ethical egoism is that it seems, and indeed is, incredibly selfish: an ethical egoist might sacrifice hundreds of lives to save themselves. However, a ‘’true’’ ethical egoist is not always extremely selfish, since extremely selfish behaviour is not tolerated by others. A ‘’true’’ ethical egoist would therefore also consider the feelings of other people, since their thoughts and decisions may influence the reward the egoist gets out of any given situation. In the case of unavoidable (deadly) accidents, however, an egoist who values their own life above all else will not care about the feelings of others, since nothing now or in the future can be more important than their own life.

Up until now we have considered only the perspective of buyers and users of self-driving cars, but the actual moral agent is the programmer (or a collection of people in the company that employs the programmer). Their egoist response would be based on how often they are planning to use the self-driving car for which they design the software. If they do not plan to use it at all, then the ethical egoist response of the programmer would be to implement a utilitarian ethical setting, since the programmer will be on average safer. If they however plan to use the self-driving car a lot, then the ethical egoist response is to implement an ethical setting that prioritizes the passenger.

A contractualist ethical setting is one that is agreed upon by all relevant parties. Unanimous consent seems impossible to obtain, so in practice this would probably be a simple democratic vote, in which a range of ethical settings, or combinations of them, are proposed. Each potentially affected person can vote on these settings, and the democratic winner(s) will be implemented. The tough question is: who is affected by the decisions of a self-driving car? Self-driving cars can potentially drive across whole continents, from Portugal to China, or from Canada to Argentina. If the decisions of these self-driving cars can influence events in many countries, should people in all those countries be part of the decision-making process? If so, should there be a global vote on the specific ethical settings that can be implemented? And if the vote is held nationally, does that mean the ethical setting of a car must be changed when a self-driving car enters a country whose citizens voted for a different setting? In practice this seems very difficult to implement. If any of these contractualist ethical settings are practically possible, however, this setting almost completely solves the responsibility aspect of self-driving cars: if all relevant parties can vote, then society as a whole can be held ethically and legally responsible. Since responsibility might be one of the factors that contribute to the acceptance of self-driving cars, having a realistic solution to the issue of responsibility will likely positively impact public perception of self-driving cars.
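The simple democratic vote described here amounts to a plurality tally over proposed settings. A minimal sketch, with hypothetical ballot values:

```python
from collections import Counter

def winning_setting(ballots):
    """Return the plurality winner among the voters' preferred ethical settings."""
    # Counter.most_common(1) yields the (setting, count) pair with the highest count.
    return Counter(ballots).most_common(1)[0][0]

# Hypothetical ballots, one per voter:
ballots = ["utilitarian", "egoistic", "utilitarian", "deontological", "utilitarian"]
print(winning_setting(ballots))  # -> utilitarian
```

A supermajority or unanimity variant would only differ in the acceptance threshold applied to the winning count.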

Letting the user decide the ethical setting or not, and giving all cars the same setting or not

It is clear that there is no ethical setting that is perfect for every scenario. For various reasons, some authors advocate for people being able to choose their own ethical settings. One can imagine an ‘’ethical knob’’ with different programmable ethical settings. Such a knob might range from altruistic to egoistic, with an impartial setting in the middle; there might even be a deontological setting that does not intervene in unavoidable accidents. There are several reasons to implement such an ethical knob. People might want to be able to buy cars that mirror their own moral mindset; Millar (2015) observes that self-driving cars can be regarded as moral proxies, which implement moral choices. Implementing a moral knob also makes it easier to assign responsibility to someone in the case of an unavoidable accident (Sandberg & Bradshaw‐Martin, 2013; cf. Lin, 2014), since the passengers of the car have explicitly chosen the car’s decision. This might impact acceptance of the technology both positively and negatively. Prima facie, people who want to buy self-driving cars might want to be able to choose their own ethical setting, but on the other hand people also do not like to be held responsible for any accident (REFERENCE). Moreover, other road users might not accept self-driving car passengers choosing their own ethical setting, since they are likely to choose an egoistic setting, which negatively impacts everyone else’s road experience. This is especially true if the car were equipped with an ‘’extremely egoistic’’ setting in which the passenger’s life is valued above even 100 or more other lives. It seems likely that people will not accept a self-driving car making such decisions, so perhaps manufacturers will limit how far the ethical knob can be turned towards egoism. Such extreme settings are surely too unpopular, perhaps even among people who might benefit from them (REFERENCE).
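The ‘’ethical knob’’ described above can be sketched as a single scalar that trades off passenger harm against bystander harm, with a manufacturer-imposed cap on how egoistic it can be turned. The weighting scheme, harm numbers, and cap value are all assumptions for illustration, not a proposal from the literature:

```python
# Hypothetical sketch of the ethical knob as a scalar k in [-1, 1]:
# -1 fully altruistic, 0 impartial, +1 fully egoistic.

MAX_EGOISM = 0.8  # hypothetical manufacturer-imposed cap on egoism

def weighted_harm(passenger_harm, bystander_harm, k):
    # k = +1 counts only passenger harm in the minimized objective (protects passengers);
    # k = -1 counts only bystander harm (protects bystanders); k = 0 weighs them equally.
    return (1 + k) / 2 * passenger_harm + (1 - k) / 2 * bystander_harm

def choose(options, k):
    """options: dict mapping maneuver name -> (passenger_harm, bystander_harm)."""
    k = max(-1.0, min(k, MAX_EGOISM))  # clamp the knob to the allowed range
    return min(options, key=lambda name: weighted_harm(*options[name], k))

options = {"brake": (5.0, 0.0), "swerve": (0.0, 5.0)}
print(choose(options, 1.0))   # egoistic (clamped to 0.8) -> swerve
print(choose(options, -1.0))  # altruistic -> brake
```

The clamp is where a manufacturer (or regulator) could encode the limit on egoism discussed above, independently of what the user requests.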

The same can be said for an ethical knob that can not only be turned by the user to fit their moral convictions, but can even be modified to fit other kinds of preferences. An ethical knob able to discriminate by gender or race might be technologically possible, but users should not be allowed to make their self-driving cars racist or sexist. Discrimination based on race or sex is illegal in many countries, so these settings, if even possible to implement, will likely be outlawed anyway, as Germany has already done (REFERENCE). A contractualist might propose a democratic vote to gauge which kinds of settings are regarded as unacceptable. The freedom of people to configure their own self-driving car would then be limited by the democratic choice of all relevant road users. Such an arrangement might prove an acceptable middle ground between no ethical knob and a completely customizable one. However, whether this would actually lead to greater acceptance of the technology than the other two options has not been settled or addressed in the academic literature (REFERENCE IF POSSIBLE).

What do users appear to want?

- Very few articles explicitly endorse an ethical theory to be applied in the case of self-driving cars (page 6 of https://pure.tue.nl/ws/portalfiles/portal/101570905/Nyholm_2018_Philosophy_Compass.pdf). The few exceptions are papers by Gogoll and Müller and by Derek Leben.

Extra points (not necessarily for this section, but relevant for the discussion):

- We are not accounting for ease of programming; we simply assume implementation to be possible. Some aspects of what we have discussed may turn out not to be relevant if the technology does not allow for them, but it is better to assume it will be possible and be prepared than to not know what to do when the technology does arrive.

- Perhaps AI could figure out the best ethical theories, but that would essentially be a black box. Would we be comfortable with that?

In general, most of this is relevant to acceptance, since some of these settings increase or decrease the number of accidents. We also hypothesize that when the number of accidents decreases, acceptance will likely grow.

Responsibility

Whereas automated vehicles were a distant future a mere twenty years ago, they are a reality right now. For some years, companies such as Google have run trials with automated vehicles in actual traffic situations and have driven millions of kilometers autonomously. Between December 2016 and November 2017, for example, Waymo's self-driving cars drove about 350,000 miles and a human driver retook the wheel 63 times, an average of about 5,600 miles between disengagements. Uber has not been testing its self-driving cars long enough in California to be required to release its disengagement numbers (Wakabayashi, 2018). Though this research has been ground-breaking, there have also been some incidents in the past years. In 2016 a Tesla driver was killed while using the car's autopilot because the vehicle failed to recognize a white truck (Yadron & Tynan, 2016). In 2018 a self-driving Volvo in Arizona collided with a pedestrian, who did not survive the accident; it is believed to be the first pedestrian death associated with self-driving technology. When an Uber self-driving car and a conventional vehicle collided in Tempe in March 2017, city police said that extra safety regulations were not necessary: the conventional car was at fault, not the self-driving vehicle (Wakabayashi, 2018).
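As a quick sanity check on the figures above, the reported mileage divided by the number of disengagements gives the quoted average. A minimal sketch (the round numbers are taken directly from the cited report):

```python
# Back-of-the-envelope check of the Waymo figures cited above
# (Dec 2016 - Nov 2017): total miles driven divided by the number
# of times a human driver retook the wheel.
miles_driven = 350_000   # approximate miles, as reported
disengagements = 63      # times a human retook the wheel

miles_per_disengagement = miles_driven / disengagements
print(f"{miles_per_disengagement:,.0f} miles per disengagement")  # about 5,556
```

The exact quotient is roughly 5,556 miles, which the source rounds to "about 5,600".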

One very important factor in the development and sale of automated vehicles is the question of who is responsible when things go wrong. In this section we look in detail at the factors involved and propose some solutions. As brought up by Marchant and Lindor (2012), there are three questions that need to be analysed. Firstly, who will be liable in the case of an accident? Secondly, in determining who should be held responsible, how much weight should be given to the fact that autonomous vehicles are supposed to be safer than conventional vehicles? Lastly, will a higher percentage of crashes be caused by a manufacturing 'defect', compared to crashes with conventional vehicles, where the cause is usually attributed to driver error (Marchant & Lindor, 2012)?

Current legislation

If we look at how responsibility works for conventional vehicles, we find that it is usually assigned to the driver for failing to obey traffic regulations (Pöllänen, Read, Lane, Thompson, & Salmon, 2020). This can be as small and common as driving too fast or losing attention for a fraction of a moment, something nearly everyone is guilty of at some point. While this usually does not matter, it can sometimes lead to catastrophic results, and in such a moment of misfortune the driver is still held responsible. As Nagel (1982) theorized, between driving a little too fast and killing a child that crosses the street unexpectedly, and there being no child at all, there is only bad luck. The consequence, however, is vast for the child, but also for the driver (Nagel, 1982). This reasoning could also be applied to automated vehicles: if an accident happens, it is simply bad luck for the driver, and he will without doubt be liable. However, given that this depends on luck, and that most autonomous vehicles allow for restricted to no control, this option is not considered a plausible one (Hevelke & Nida-Rümelin, 2015).

Blame attribution

Several studies have shown that the level of control is crucial in blame attribution. McManus and Rutchick (2018) showed that people attribute less blame to a driver in a fully automated vehicle than in a situation where the driver selected a particular algorithm (e.g. to behave selfishly) or drove manually (McManus & Rutchick, 2018). Another study (Li, Zhao, Cho, Ju, & Malle, 2016) investigated blame attribution between the manufacturer, government agencies, the driver and pedestrians. They found that blame is reduced for drivers when the vehicle is fully autonomous, whereas the blame for the manufacturer or government agencies increases.

The manufacturer

It would be obvious to say that the manufacturer of the car is responsible: they designed the car, so if it makes a mistake, they are to blame. However, there are different types of defects in the manufacturing process. Firstly, there is a defect in the manufacturing itself, where the product did not end up as it was supposed to, even though the rules were followed with care. This error is very rare, since manufacturing these days is done with a very low error rate (Marchant & Lindor, 2012). A second type of defect lies in the instructions: failing to adequately instruct and warn the consumer. A third defect, and the most significant one for autonomous vehicles, is that of design. This holds that the risks of harm could have been prevented or reduced with an alternative design (Marchant & Lindor, 2012).

For any flaw in the system that might cause the car to crash, the manufacturer could have known, or did know, about it beforehand. If they then sold the car anyway, there is no question that they are responsible. However, holding the manufacturer responsible in every case would immensely discourage anyone from producing these autonomous cars. Especially with technology as complex as autonomous driving systems, it would be nearly impossible to make it flawless (Marchant & Lindor, 2012). In order to encourage manufacturers to produce autonomous vehicles and still hold them responsible, a balance needs to be found between the two. This is necessary because removing all liability would also result in undesirable effects (Hevelke & Nida-Rümelin, 2015). In short, a way needs to be found to hold the manufacturer liable enough that they will keep improving their technology.

Semi-autonomous vehicles

As stated above, there have been studies on blame attribution in fully autonomous vehicles, and in those with certain pre-selected algorithms. A semi-autonomous vehicle (with a duty to intervene) has not been discussed yet. A good analogy for a semi-autonomous vehicle is an auto-piloted airplane: the plane flies itself, though it is the responsibility of the pilot to intervene when something goes wrong (Marchant & Lindor, 2012). So, in the question of responsibility in case of an accident, it could be suggested to hold the driver of the vehicle responsible. If the car is designed in such a way that the driver has the ability to take over and intervene, this could be used in an argument against the driver. There is debate about what the utility of an automated vehicle designed like this would be. After all, when the driver has a duty to intervene, the vehicle can no longer be summoned when needed, and it can no longer be used as a safe ride home when drunk or tired (Howard, 2013). However, as long as the vehicles still reduce accidents overall, with or without a duty to intervene, they would be a better option than conventional vehicles (Hevelke & Nida-Rümelin, 2015). The accident rate might drop even further when the driver does have a duty to intervene, since the driver can then intervene when, for example, they see something the car does not. It would also mean a more gradual transition phase when introducing automated vehicles, instead of them suddenly being fully automatic.

On the other hand, asking the driver to intervene in a fully automated vehicle is questionable. It assumes that the driver can intervene at all times, which is not always the case due to human limitations in reaction time and danger anticipation (Hevelke & Nida-Rümelin, 2015). It would be difficult to recognize whether the automated vehicle will fail to respond correctly, and thus unclear when the driver needs to intervene. In this case it would be unrealistic to expect the driver to predict a dangerous situation. When implementing this reasoning, another problem may arise: the driver might intervene when they should not have, resulting in an accident (Douma & Palodichuk, 2012). Furthermore, as argued by Hevelke & Nida-Rümelin (2015), it seems impossible to ask a driver to pay attention all the time in order to be able to intervene, while an actual accident is quite rare. All in all, it would be unreasonable to put responsibility on a driver who did not, or could not, intervene.

Shared liability

As previously discussed, the responsibility for an accident can be placed on the individual driving the autonomous vehicle, which for a number of reasons is not ideal. An alternative would be to create shared liability. People who drive cars every day (especially when not necessary) take the risk of possibly causing an accident; they still make the choice to drive the car (Husak, 2004). This thinking can be extrapolated to the use of automated vehicles. If people choose to drive an automated vehicle, they in turn participate in the risk of an accident happening due to the autonomous vehicle. The responsibility for an accident is therefore shared with everyone else in the country who also uses an automated vehicle. In that sense the driver did not do anything wrong and did not intervene too late; they simply shoulder the burden with everyone else. A system along these lines could be implemented through a tax or mandatory insurance (Hevelke & Nida-Rümelin, 2015).

So, it seems there are a couple of options. The manufacturer can be fully responsible; however, this could bring autonomous vehicle manufacturing to a halt. On the other hand, it is desirable that the manufacturer bears some liability, so that they keep investing to improve the vehicle. At the same time, giving the driver full responsibility only seems workable in the beginning phase of autonomous vehicles, when they are still in development and drivers really do have a duty to intervene. When the vehicles are more sophisticated and able to drive fully autonomously, the responsibility can be shared among all people through a tax or insurance.

Safety

One of the main factors deciding whether self-driving cars will be accepted is their safety. After all, who would leave their life in the hands of another entity, knowing it is not completely safe? Yet almost everyone gets into buses and planes without doubt or fear. Would we be able to do the same with self-driving cars? Cars have become more and more autonomous over the last decades. Furthermore, self-driving cars will operate in unstructured environments, which adds a lot of unexpected situations (Wagner et al., 2015).


Traffic behaviour

The car's safety will be determined by the way it is programmed to act in traffic. Will it stop for every pedestrian? If it does, pedestrians will know this and cross roads wherever they want. Will it adopt the driving style of humans? How does the driving behaviour of automated vehicles influence trust and acceptance?

In one study, two different designs were presented to a group of participants. One was programmed to simulate a human driver, whilst the other communicated with its surroundings in such a way that it could drive without stopping or slowing down. The study showed no significant difference in trust between the two automated vehicles. However, it did show that the longer the study continued, the more trust grew (Oliveira et al., 2019). This suggests that the driving behaviour does not necessarily influence acceptance, but that the overall safety of the driving behaviour does.

Errors

Despite what we may think, humans are quite capable of avoiding car crashes. Computers, on the other hand, inevitably fail at times; think about how often your laptop freezes. A response delayed by a mere millisecond can have disastrous consequences, so software for self-driving vehicles must be made fundamentally differently. This is one of the major challenges currently holding back the development of fully automated cars. By contrast, automated air vehicles are already in use. However, software on automated aircraft is much less complex, since aircraft have to deal with fewer obstacles and almost no other vehicles (Shladover, 2016).

Cybersecurity

The software driving fully automated vehicles will comprise more than 100 million lines of code, so it is impossible to predict all security problems. Windows 10 is made up of 50 million lines of code and has seen plenty of bugs; double the amount of code will result in an even higher probability of unknown vulnerabilities (Parkinson et al., 2017).



Vs humans

Self-driving cars hold the potential of eliminating all accidents, or at least those caused by inattentive drivers (Wagner et al., 2015). Research done by Google suggests that the Google self-driving cars are safer than conventional human-driven vehicles. However, there is insufficient information to draw a full conclusion on this. The results do lead us to believe that highly autonomous vehicles will be safer than humans under certain conditions. This does not mean that there will be no car crashes in the future, since these cars will continue to be involved in crashes with human drivers (Teoh et al., 2017).


The city

The city is probably one of the most complicated locations for a self-driving car to operate in. It is filled with vulnerable road users, such as pedestrians and cyclists, who are relatively hard to track. Therefore, freeways are likely to be the first spaces in which automated cars will be able to operate: a much more structured environment with simple rules and fewer unexpected situations. However, this will not solve the issue of traffic jams at popular destinations. Some might say the ambition is to allow cars, bikes and pedestrians to share road space much more safely, with the effect that more people will choose not to drive. 'But, if a driverless car or bus will never hit a jaywalker, what will stop pedestrians and cyclists from simply using the street as they please?' (Duranton, 2016)

Google acknowledges this problem and states that if Google cars cannot operate in existing cities, perhaps new cities need to be created. This sounds silly, but it has happened in the past: the first suburbs of America were developed by rail entrepreneurs who realized that developing suburbs was much more profitable than operating railways (Cox, 2016).

We might need to look at alternative technologies for urban transport. Rather than developing individualist self-driving cars, we could look at the 'technology of the network': how can we connect more people without consuming the space we live in? (Duranton, 2016)


Trust

For decades, we have trusted the safe operation of automated mechanisms around and even inside us. In the last few years, however, the autonomy of these mechanisms has drastically increased. As mentioned above, this brings along quite a few safety risks. Questions of whether or not to trust a new technology are often answered by testing (Wagner et al., 2015).

There has been a survey about trust in fully automated vehicles. Trust was defined as "the attitude that an agent will help achieve an individual's goal in a situation characterised by uncertainty and vulnerability" (Lee & See, 2004, p. 51). Within this survey, 60% of the respondents mentioned having difficulty trusting automated vehicles. Trust in this context can be seen as the driver's belief that the computer drives at least as well as a human, though with some uncertainty remaining that the driver might get involved in accidents because of system failures (Bock, German & Sippl, 2017, cited in Johsen et al., 2017).

Trust is not yet at the level needed to fully implement these technologies. We know that trust can build up over time, and this is also the case with trusting self-driving cars. The hesitation is greatest amongst the elderly, who are also the generation that stands to gain a lot of the benefits. The good news from this research is that 50% of older adults reported that they are comfortable with the concept of tools that help the driver. The number of such tools can grow, whilst the driver/passenger gets used to the idea of a completely self-driving car (Abraham et al., 2016).

Privacy

Self-driving cars rely on an array of new technologies in order to traverse traffic. Some of these technologies have to take data from the environment and/or the people in the car, which can have a big effect on the privacy of both the users of the car and the people around it. Since fully autonomous cars are not yet on the market, and have not even been built yet, it is unclear how significant the privacy issues associated with them might be. At minimum, location-tracking data seems necessary for a self-driving car to function correctly (Boeglin, 2016). This kind of location tracking is already prevalent in mobile phones, and the privacy issues that accompany it are very well known (Minch, 2004). In fact, car GPS systems already in use suffer from this problem. The car can save specific locations, has to plan routes based on the current location, and has to access current traffic data. If anyone were to access this information, they would essentially access a record of a person's movements, and also of the activities associated with the destinations. If one knows that the user of the self-driving car visited a psychiatrist, or an abortion clinic, then one can also make an educated guess about what the user has been going through in their life.

Besides these personal concerns that come from location tracking, there are also commercial concerns. The company that tracks location data might use the location data of the car to infer personal information of the user(s), and use this personal information for marketing purposes. We already know that this is possible, since this happens often with tracking mobile phone locations: if a mobile phone user visits a store that sells some product, then Google might use this data to send personalized advertisements to the user. The same could happen with self-driving cars.

According to a paper by Jack Boeglin (REFERENCE), "whether or not a vehicle is likely to threaten its passengers' privacy can largely be reduced to the question of whether or not that vehicle is communicative". A communicative vehicle relays vehicle information to third parties or receives information from external sources; the more communicative a vehicle is, the more information it is likely to collect. Communicative vehicles could take a number of forms, so it is hard to gauge how severe the associated privacy risks will be. One kind of communicative self-driving car is one that exchanges data with other self-driving cars, which both cars can use for risk mitigation or crash avoidance. Wireless networks are particularly vulnerable, according to Boeglin. When self-driving cars become more prevalent, they might also communicate with roads or road infrastructure (traffic lights or road sensors) to exchange data that makes both parties more effective. As a result, the traffic authority, for instance the municipality, will also have access to the records of each self-driving car. Whether or not people will accept this remains to be seen, and not a lot of research has been done on this subject (REFERENCE).

Not all self-driving cars currently in development are communicative types, partly because the infrastructure to support such cars does not exist yet. Privacy risks for non-communicative cars are less prevalent, but not nonexistent. Location tracking will always be an issue, and uncommunicative self-driving cars will still be heavily reliant on sensory data in order to get to the desired destination. This sensory data might still be hacked, but hacking is almost always a possibility that infringes on the right to privacy; self-driving cars are hardly a special case in that regard.

It is largely unclear how users will react to the potential risks to their privacy, since this is a newly emerging technology, and issues such as safety, decision-making and autonomy are usually more pressing. We expect that people will not rate privacy as a large concern, and instead will be more concerned with the aforementioned issues. This is especially the case for uncommunicative self-driving cars, which are more prevalent than communicative cars in today's world. We also expect that people largely think of uncommunicative rather than communicative self-driving cars, since communicative cars are a step further into the future. This probably lowers the perceived level of privacy risk among users even more.

(Include a few surveys on self-driving cars and what people think about privacy concerns. Hard to find them though).

Perspective of private end-user

The potential revolutionary change that self-driving cars could stir up would affect many areas of life. Apart from improving safety, efficiency and general mobility, it would change current infrastructure and the relationship between humans and machines (Silberg et al., 2012). This section will focus primarily on the user’s attitude towards self-driving cars, specifically perceived benefits and concerns.

According to the National Highway Transportation Safety Administration, cars are currently at 'level 3 automation', in which new cars have automated features but still require an alert driver to intervene when necessary. 'Level 4 automation' would mean that a driver is no longer permitted to intervene (Cox, 2016). Before this level can be reached, the general public would need to feel comfortable with letting go of the steering wheel.

General attitude

Research by König & Neumayr (2017) showed that older people are generally more worried about self-driving cars. They also showed that females have more concerns than males, and that rural citizens are less interested in self-driving cars than urban citizens (König & Neumayr, 2017). Surprisingly, people who used their car more often seemed less open to the idea of a self-driving car, possibly because the change would be too radical for them. Furthermore, the most common desire is the ability to manually take control of the car when wanted: it allows people to still enjoy the pleasures of manual driving without losing their sense of freedom (Rupp & King, 2010).

Another interesting finding by König & Neumayr (2017) was that people who had no car, as well as people who already had a car with more advanced automated features, showed a more positive attitude towards self-driving cars. This is possibly because people without a car see it as an opportunity to take part in traffic, and people with advanced cars are more familiar with the technology (König & Neumayr, 2017). Lee et al. (2017) also found that people without a driver's licence were more likely to use a self-driving car (Lee et al., 2017).

Benefits and concerns

It is common knowledge that many car crashes are due to human error. The World Health Organization (2016) reported that road traffic injuries are the leading cause of death among people between the ages of 15 and 29 (World Health Organization, 2016). Raue et al. (2019) argue that removing human error from driving is one of the biggest potential benefits of self-driving cars. They also pose that driverless cars could potentially decrease congestion, increase mobility for non-drivers and make commuting time more efficient. Next to that, there are also environmental benefits: when vehicles no longer need to be built with tank-like safety, they are lighter and consume less fuel (Bamonte, 2013; Parida et al., 2018; Raue et al., 2019).

König & Neumayr (2017) used a survey to judge people's attitudes towards potential benefits and concerns. They found that people mostly value the fact that a self-driving car could solve the transport issues older and disabled people face. This is in accordance with Cox (2016) and Parida et al. (2018), who stated that the driverless car has the potential to expand opportunity and improve the lives of disabled people and others who are unable to drive (Cox, 2016; Parida et al., 2018). From the survey, König & Neumayr (2017) also found that people value being able to engage in things other than driving. Participants did not feel that self-driving cars would give them social recognition, nor that they would lead to shorter travel times (König & Neumayr, 2017).

On the other hand, there are also some concerns, as indicated by König & Neumayr (2017). Their participants were mostly concerned with legal issues, followed by concerns about hackers. Lee et al. (2017) also found that especially older adults are concerned that self-driving cars will be more expensive. Surprisingly, across all sub-groups, people did not trust the functioning of the technology (König & Neumayr, 2017; Raue et al., 2019).

Sharing cars

While many people look positively towards the implementation of self-driving cars, fewer people are willing to buy one. Many people do not want to invest more money in a self-driving car than they do in a conventional car (Schoettle & Sivak, 2014). Therefore, a car-sharing scheme (e.g. a whole fleet provided by a mobility service company, or a ride-sharing scheme) is an option to make self-driving cars more popular. This way people would not have to spend a large sum of money, and they could gradually learn to trust the technology by using the shared self-driving cars first (König & Neumayr, 2017). According to Cox (2016), however, this is not necessarily true: since corporate mobility companies will then provide the cars, they have to cover the costs of, for example, vehicle operation, which will increase the fees for the user (Cox, 2016).

So, how would it work if automated vehicles are used as shared vehicles? Cox (2016) assumes that companies will provide cars the same way they do now, renting them out short-term or long-term. Especially in large metropolitan areas, automated vehicles could substantially shorten a trip, or solve current transportation problems (Cox, 2016; Parida et al., 2018). While cars are being shared, private ownership would still be possible, and people would be able to rent out their own personal cars short-term.

One option of sharing cars is to let people share a single ride. This could decrease the number of cars in an urban area and address issues like congestion, pollution or the problem of finding a parking spot (Parida et al., 2018). However, there are certain issues with ridesharing. Because not every person starts and stops in the same place, trips could actually increase in time, making ridesharing less attractive. Lowering the price of ridesharing might not even be enough to attract travellers. Ridesharing does raise another important question: do people want to share a car with strangers? As stated by Cox (2016), personal security concerns will probably only increase and therefore people will not be willing to share a ride with someone they don’t know.

An important notion is that vehicles are parked on average more than ninety percent of the time (Burgess, 2012). A driverless car fleet provided by a mobility company could possibly reduce the number of cars in a metropolitan city since the urban area is so densely packed. However, these cars would not be attractive to users living in a more rural area, or people that need to travel outside the urban area (Cox, 2016).

In the present day, many people use transit (e.g. train, metro, bus) in metropolitan areas, though this is not the fastest possible commute. Owen and Levinson (2014) found that many jobs can be reached in about half the time by car compared to transit. This is mostly because of the "last mile" problem: many destinations are beyond walking distance of a transit stop (Owen & Levinson, 2014). Driverless cars could be used to overcome this "last mile" problem by stationing them at transit stops. However, a fleet of driverless cars can have two consequences for transit. On the one hand, it can cause transit users to abandon transit because of the improved travel times and door-to-door access. On the other hand, many transit riders have a low income and will probably not be able to pay for a driverless car alternative (Cox, 2016). Moreover, if the charges for driverless cars are too low, this might reduce the attractiveness of transit even more, causing people to use the driverless vehicle for the entire trip (Cox, 2016).

Acceptance

Many studies have delved into technology acceptance across various domains, and many different ways to determine the acceptance of self-driving cars are mentioned. Lee et al. (2017) found that across all ages, perceived usefulness, affordability, social support, lifestyle fit and conceptual compatibility are significant determinants (Lee et al., 2017; Raue et al., 2019). Raue et al. (2019) found that people's risk and benefit perceptions, as well as trust in the technology, relate to the acceptance of self-driving cars (Raue et al., 2019). According to Rogers (1995), to increase the probability of widespread adoption of an innovation, the following factors need to be taken into account: relative advantage, compatibility (a steering wheel with a disengage button), trialability (test drives), observability (car-sharing fleets), and complexity (a gradual introduction to automation) (König & Neumayr, 2017; Rogers, 1995).

As found by Lee et al. (2017), older adults are possibly not yet ready to let go of the steering wheel. They found that older generations have a lower overall interest and different behavioural intentions to use. However, people with more experience with technology seemed to be more accepting (Lee et al., 2017). Other studies did find that older adults are more likely to accept new in-vehicle technologies (Son, Park, & Park, 2015; Yannis, Antoniou, Vardaki, & Kanellaidis, 2010). Lee et al. (2017) also found that across all ages, people would be more likely to use a self-driving car if they were no longer able to drive themselves due to aging or illness (Lee et al., 2017). As for the general public, Raue et al. (2019) looked into common psychological theories to assess people's willingness to accept the self-driving car. They found that people who are familiar with actions or activities often perceive them to be less risky, and that people's level of knowledge about a technology can affect how they understand its risks and benefits (Hengstler, Enkel, & Duelli, 2016; Raue et al., 2019). In that sense, affect is used as a decision heuristic (i.e. a mental shortcut) in which people rely on the positive or negative feelings associated with a risk (Visschers & Siegrist, 2018). Because negative emotions weigh more heavily than positive emotions, and people are more likely to recall a negative event, negative affect may lead people to judge self-driving cars to be of higher risk and lower benefit. This negative affect can be caused by anything, for example the loss of control from removing the steering wheel, or knowledge of accidents involving self-driving cars (Raue et al., 2019). Parida et al. (2018) stress the importance of public attitude and user acceptance of self-driving cars, as global market acceptance heavily relies on it (Parida et al., 2018).

References used in report

Nyholm, S., & Smids, J. (2016). The Ethics of Accident-Algorithms for Self-Driving Cars: An Applied Trolley Problem? Ethical Theory and Moral Practice, 19(5), 1275–1289.

Hevelke, A., & Nida-Rümelin, J. (2015). Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis. Science and Engineering Ethics, 21(3), 619–630. https://doi.org/10.1007/s11948-014-9565-5

Marchant, G. E., & Lindor, R. A. (2012). The Coming Collision Between Autonomous Vehicles and the Liability System. Santa Clara Law Review, 52(4). Retrieved from http://digitalcommons.law.scu.edu/lawreview

Wagner M., Koopman P. (2015) A Philosophy for Developing Trust in Self-driving Cars. In: Meyer G., Beiker S. (eds) Road Vehicle Automation 2. Lecture Notes in Mobility. Springer, Cham. https://doi.org/10.1007/978-3-319-19078-5_14

Oliveira, L., Proctor, K., Burns, C. G., & Birrell, S. (2019). Driving Style: How Should an Automated Vehicle Behave? Information, 10(6), 219. MDPI AG. Retrieved from http://dx.doi.org/10.3390/info10060219

Shladover, S. (2016). THE TRUTH ABOUT “SELF-DRIVING” CARS. Scientific American, 314(6), 52-57. doi:10.2307/26046990

König, M., & Neumayr, L. (2017). Users’ resistance towards radical innovations: The case of the self-driving car. Transportation Research Part F: Traffic Psychology and Behaviour, 44, 42–52. doi:10.1016/j.trf.2016.10.013

Schoettle, B., & Sivak, M. (2014). A Survey of Public Opinion about Autonomous and Self-Driving Vehicles in the U.S., the U.K., and Australia. Ann Arbor, MI: The University of Michigan Transportation Research Institute. https://deepblue.lib.umich.edu/bitstream/handle/2027.42/108384/103024.pdf?sequence=1&isAllowed=y

Government of the Netherlands. (n.d.). Mobility, public transport and road safety. Retrieved from https://www.government.nl/topics/mobility-public-transport-and-road-safety/self-driving-vehicles

Microsoft Forms. (2021, March 15). Retrieved from Microsoft Forms: https://forms.office.com/Pages/DesignPage.aspx#Analysis=true&FormId=R_J9zM5gD0qddXBM9g78ZIQEJ0K6qk1Epl7wQE_GwFJUQzRFUEg3RFVEMFVFVDY4NFVMQVJaRUgxQi4u&Token=7a2197128d054f1d9d81e3056e2eafde

References User Perspective

Bamonte, T. J. (2013). Autonomous Vehicles - Drivers for Change. Retrieved March 23, 2021, from https://www.roadsbridges.com/sites/rb/files/05_autonomous vehicles.pdf

Burgess, S. (2012, June 23). Parking: It’s What Your Car Does 90 Percent of the Time. Autoblog. Retrieved from https://www.autoblog.com/2012/06/23/parking-its-what-your-car-does-90-percent-of-the-time/?guccounter=1

Cox, W. (2016). Driverless Cars and the City: Sharing Cars, Not Rides. Cityscape: A Journal of Policy Development and Research, 18(3). Retrieved from http://www.newgeography.com/content/003899-plan-bay-area-telling-people-what-do

Hengstler, M., Enkel, E., & Duelli, S. (2016). Applied artificial intelligence and trust-The case of autonomous vehicles and medical assistance devices. Technological Forecasting and Social Change, 105, 105–120. https://doi.org/10.1016/j.techfore.2015.12.014

König, M., & Neumayr, L. (2017). Users’ resistance towards radical innovations: The case of the self-driving car. Transportation Research Part F: Traffic Psychology and Behaviour, 44, 42–52. https://doi.org/10.1016/j.trf.2016.10.013

Lee, C., Ward, C., Raue, M., D’Ambrosio, L., & Coughlin, J. F. (2017). Age differences in acceptance of self-driving cars: A survey of perceptions and attitudes. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 10297 LNCS, 3–13. https://doi.org/10.1007/978-3-319-58530-7_1

Owen, A., & Levinson, D. (2014). Access Across America: Transit 2014, Final Report. Minneapolis, MN.

Parida, S., Franz, M., Abanteriba, S., & Mallavarapu, S. (2018). Autonomous Driving Cars: Future Prospects, Obstacles, User Acceptance and Public Opinion. Advances in Intelligent Systems and Computing, 786, 318–328. https://doi.org/10.1007/978-3-319-93885-1_29

Raue, M., D’Ambrosio, L. A., Ward, C., Lee, C., Jacquillat, C., & Coughlin, J. F. (2019). The Influence of Feelings While Driving Regular Cars on the Perception and Acceptance of Self-Driving Cars. Risk Analysis, 39(2), 358–374. https://doi.org/10.1111/risa.13267

Rogers, E. M. (1995). Diffusion of Innovations (4th ed.). Retrieved from https://books.google.nl/books?hl=nl&lr=&id=v1ii4QsB7jIC&oi=fnd&pg=PR15&dq=Rogers,+E.+M.+(1995).+Diffusion+of+innovations.+New+York.&ots=DMTurPTs7S&sig=gXeTkHXQsnxXXpy5dprofoJMhRQ#v=onepage&q=Rogers%2C E. M. (1995). Diffusion of innovations. New York.&f=false

Rupp, J. D., & King, A. G. (2010). Autonomous Driving - A Practical Roadmap.

Schoettle, B., & Sivak, M. (2014). Public opinion about self-driving vehicles in China, India, Japan, the U.S. and Australia. Retrieved from http://www.umich.edu/~umtriswt

Silberg, G., Wallace, R., Matuszak, G., Plessers, J., Brower, C., & Subramanian, D. (2012). Self-driving cars: The next revolution. KPMG LLP & Center for Automotive Research.

Son, J., Park, M., & Park, B. B. (2015). The effect of age, gender and roadway environment on the acceptance and effectiveness of Advanced Driver Assistance Systems. Transportation Research Part F: Traffic Psychology and Behaviour, 31, 12–24. https://doi.org/10.1016/j.trf.2015.03.009

Visschers, V. H. M., & Siegrist, M. (2018). Differences in risk perception between hazards and between individuals. In Psychological Perspectives on Risk and Risk Analysis: Theory, Models, and Applications (pp. 63–80). https://doi.org/10.1007/978-3-319-92478-6_3

World Health Organization. (2016). Road traffic injuries.

Yannis, G., Antoniou, C., Vardaki, S., & Kanellaidis, G. (2010). Older Drivers’ Perception and Acceptance of In-Vehicle Devices for Traffic Safety and Traffic Efficiency. Journal of Transportation Engineering, 136(5), 472–479. https://doi.org/10.1061/(ASCE)TE.1943-5436.0000063

Wakabayashi, D. (2018). Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam. The New York Times, Technology. https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html

Boeglin, J. (2015). The costs of self-driving cars: reconciling freedom and privacy with tort liability in autonomous vehicle regulation. Yale JL & Tech., 17, 171.

References

Greenblatt, N. A. (2016). Self-driving cars and the law. IEEE Spectrum, 46-51. doi:10.1109/MSPEC.2016.7419800

Holstein, T., Dodig-Crnkovic, G., & Pelliccione, P. (2018). Ethical and Social Aspects of Self-Driving Cars. Retrieved from https://arxiv.org/abs/1802.04103

Nielsen, T. A., & Haustein, S. (2018). On sceptics and enthusiasts: What are the expectations towards self-driving cars? Transport Policy, 49-55. Retrieved from https://doi.org/10.1016/j.tranpol.2018.03.004

Stilgoe, J. (2018). Machine learning, social learning and the governance of self-driving cars. Social Studies of Science, 25-56. Retrieved from https://doi.org/10.1177/0306312717741687

Wagner, M., & Koopman, P. (2015). A Philosophy for Developing Trust in Self-driving Cars. Road Vehicle Automation 2, 163-171. Retrieved from https://link.springer.com/chapter/10.1007/978-3-319-19078-5_14

Nyholm, S. (2018). The ethics of crashes with self-driving cars: A roadmap, I. Philosophy Compass, 13(7), e12507.

Chandiramani, J. R. (2017). Decision Making under Uncertainty for Automated Vehicles in Urban Situations. Master of Science Thesis.

Van de Poel, I., & Royakkers, L. (2011). Ethics, Technology, and Engineering: An Introduction. Wiley-Blackwell.

Levin, S., & Woolf, N. (2016). Tesla driver killed while using autopilot was watching Harry Potter, witness says. The Guardian. https://www.theguardian.com/technology/2016/jul/01/tesla-driver-killed-autopilot-self-driving-car-harry-potter

Hevelke, A., & Nida-Rümelin, J. (2015). Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis. Science and Engineering Ethics, 21(3), 619–630.

Greene, J. (2013). Moral Tribes. Penguin Press.

Goodall, N. J. (2016). Ethical Decision Making During Automated Vehicle Crashes.

Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The Social Dilemma of Autonomous Vehicles. Science, 352(6293), 1573–1576.

De Lazari-Radek, K., & Singer, P. (2017). Utilitarianism: A Very Short Introduction. Oxford University Press, p. xix. ISBN 978-0-19-872879-5.


Duranton, G. (2016). Transitioning to Driverless Cars. Cityscape, 18(3), 193-196. Retrieved February 7, 2021, from http://www.jstor.org/stable/26328282

Cox, W. (2016). Driverless Cars and the City: Sharing Cars, Not Rides. Cityscape, 18(3), 197-204. Retrieved February 7, 2021, from http://www.jstor.org/stable/26328283

Stone, J. (2017). Who’s at the wheel: Driverless cars and transport policy. ReNew: Technology for a Sustainable Future, (139), 38-41. Retrieved February 7, 2021, from https://www.jstor.org/stable/90002086

Frey, T. (2012). DEMYSTIFYING THE FUTURE: Driverless Highways: Creating Cars That Talk to the Roads. Journal of Environmental Health, 75(5), 38-40. Retrieved February 7, 2021, from http://www.jstor.org/stable/26329536

Focussed on acceptance of the technology:

Nees, M. A. (2016). Acceptance of Self-driving Cars. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 60(1), 1449–1453. https://doi.org/10.1177/1541931213601332

Karnouskos, S. (2020). Self-Driving Car Acceptance and the Role of Ethics. IEEE Transactions on Engineering Management, 67(2), 252–265. https://doi.org/10.1109/TEM.2018.2877307

Lee C., Ward C., Raue M., D’Ambrosio L., Coughlin J.F. (2017) Age Differences in Acceptance of Self-driving Cars: A Survey of Perceptions and Attitudes. In: Zhou J., Salvendy G. (eds) Human Aspects of IT for the Aged Population. Aging, Design and User Experience. ITAP 2017. Lecture Notes in Computer Science, vol 10297. Springer, Cham. https://doi.org/10.1007/978-3-319-58530-7_1

Parkinson, S., Ward, P., Wilson, K., & Miller, J. (2017). Cyber Threats Facing Autonomous and Connected Vehicles: Future Challenges. IEEE Transactions on Intelligent Transportation Systems, 18(11), 2898–2915. https://doi.org/10.1109/TITS.2017.2665968

Teoh, E. R., & Kidd, D. G. (2017). Rage against the machine? Google’s self-driving cars versus human drivers. Journal of Safety Research, 63, 57–60. https://doi.org/10.1016/j.jsr.2017.08.008

Lee, J. D., & See, K. A. (2004). Trust in Automation: Designing for Appropriate Reliance. Human Factors: The Journal of the Human Factors and Ergonomics Society, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392

Abraham, H., Lee, C., Brady, S., Fitzgerald, C., Mehler, B., Reimer, B., & Coughlin, J. F. (2016). Autonomous Vehicles, Trust, and Driving Alternatives: A survey of consumer preferences. MIT AgeLab. Retrieved from https://bestride.com/wp-content/uploads/2016/05/MIT-NEMPA-White-Paper-2016-05-30-final.pdf



Responsibility

Douma, F., & Palodichuk, S. A. (2012). Criminal Liability Issues Created by Autonomous Vehicles. Santa Clara Law Review, 52(4), 1157–1169. Retrieved from http://digitalcommons.law.scu.edu/lawreview

Hevelke, A., & Nida-Rümelin, J. (2015). Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis. Science and Engineering Ethics, 21(3), 619–630. https://doi.org/10.1007/s11948-014-9565-5

Howard, D. (2013). Robots on the Road: The Moral Imperative of the Driverless Car. Retrieved March 7, 2021, from Science Matters website: http://donhoward-blog.nd.edu/2013/11/07/robots-on-the-road-the-moral-imperative-of-the-driverless-car/#.U1oq-1ffKZ1

Marchant, G. E., & Lindor, R. A. (2012). The Coming Collision Between Autonomous Vehicles and the Liability System. Santa Clara Law Review, 52(4). Retrieved from http://digitalcommons.law.scu.edu/lawreview

Summaries of References

Self-driving cars and the law

The law assumes that a human being is in the driver’s seat of a car. This poses a problem for the seemingly inevitable introduction of self-driving cars. No current laws state who is responsible when an accident happens, and roads are not adapted to the needs of SDCs. Most testing of these cars takes place in the United States, where a human being must be behind the wheel to intervene before possible accidents. New laws in favor of these cars must be made soon, because companies will not fully invest before they know the necessary regulations exist. Car companies are afraid of lawsuits: they would be extremely expensive, the verdicts would be hard to predict because no laws exist, and a single lawsuit could lead to a recall of all cars. Finally, car companies fear high punitive damage awards.

The solution, according to the author, would be to treat computer drivers the same as human drivers. Only their conduct needs to be considered, not their thoughts, just as a judge cannot see what a human driver was thinking when he caused an accident. This means that a computer driver would be found liable when it runs a red light, for example, but not when it drives as safely as it can and still causes an accident. The carmaker would be responsible, because they are the ones determining the actions of the car. Afterwards, the carmaker would invest heavily in improving safety, to avoid bad publicity. Judges would have ample precedent, because cases involving human drivers could be reused when a computer is involved, and insurance would be cheaper than for a normal car.

Changes in public policy have to be made as well. A human can see traffic lights, signs, et cetera. A car’s cameras would be able to detect them too, but it would be much easier if radio-frequency transmitters were installed: that way, a car simply receives a signal without the risk of missing a visual cue. The rollout of autonomous vehicles should be accelerated, because they drive more efficiently and could therefore save much oil. Many accidents could also be prevented, because these cars are more reliable than a human being: they rely on electronic signals, which can be processed much faster. Moreover, SDCs can learn, through updates, from mistakes that other cars have made, whereas human beings can only learn from their own mistakes. Once they are implemented, we would probably no longer own cars, but simply order one when we need one. Privacy will be a big concern, since manufacturers will know your exact location and destination, and cameras are likely to be installed internally to prevent vandalism. Finally, parking spaces in central areas would no longer be needed, freeing up space.


Ethical and Social Aspects of Self-Driving Cars

It is hard to say which choices an SDC has to make when dangerous situations occur. The trolley problem is a commonly used model to describe the choices it can make, but there are several reasons why we cannot fully rely on it. There are a few ethical theories we can use, but it is hard to say which one is correct, and they all reach different conclusions. Design choices and ethical programming influence each other: a cheaper camera, for example, has a negative effect on accurate, quick decision-making. Self-driving cars are cars that can operate without the presence of a human being; this is the highest level of autonomous driving. Self-adaptive software can ensure that a car learns all the time and is not dependent on slow updates. Most of the functionality in the automotive domain is based on software, which relies on computer vision, machine learning and parallel computing. A problem is that calculations are based on an abstract representation of the real world, formed from everything sensed by cameras and other sensors. Engineers have to choose which data to use, as a camera may see an obstacle that a radar misses.

Safety is the most important requirement of SDCs. A driver’s license for SDCs is one suggestion; an independent organization that can check the code is another. Testing is very important for making sure the car is safe enough. Economic aspects tend to be the highest priority of companies, and cheap equipment could lead to wrong decision-making. Security is also very important, because if hackers break into the device, safety is crucially affected; there are eight basic principles for security. Should there be a threshold for safety? Should the vehicle be connected to the network or not? A connection makes it easier to prevent accidents and operate more efficiently, whereas no connection makes the car almost impossible to hack.

Privacy is another requirement. Much legislation on privacy already exists, and these cars have to comply with it as well. Trust and transparency are further requirements: it is hard to determine to which organizations and people data should be disclosed, yet more data-sharing can make it easier for companies to learn from each other. Reliability, responsibility, accountability and quality assurance are the final requirements. There are public interests that manufacturers have to take into account. Giving people more choice can make them more responsible for the car’s actions. New selling points have to be thought of: will the exterior be just as important as it is right now? There are no simple answers to each safety question, but the same was true when conventional cars were introduced; their safety could not be guaranteed either. One should not try to solve the unsolvable trolley problem.


On sceptics and enthusiasts: What are the expectations towards self-driving cars?

Willingness to accept SDCs differs between groups by age, gender, country, etc. It is unknown what causes these differences, and therefore more research on acceptance is needed. This paper studies the acceptance of automated driving and related expectations in the Danish population. The parts of the questionnaire were: car access and travel patterns, interest in and attitudes towards SDCs, expectations towards fully automated vehicles, and personal background information. Respondents were divided into groups of scepticism, enthusiasm and car stress for each part of the questionnaire. Most people are sceptics, followed by the indifferent stressed, and enthusiasts form the smallest group. Enthusiasts are younger people who live in more urban areas, whereas sceptics are often older and live in more rural areas. Powerlessness and loss of freedom are emotions related to not accepting an SDC.

Enthusiasts live more often in urban areas because they are more familiar with driving in congested areas and therefore feel the need for more efficient driving methods. Although older people might not be able to use conventional driving methods, they are still not willing to accept SDCs. These differences must be studied and conclusions drawn about how we can implement SDCs so that more people will enjoy them. For example, we can consider keeping manual options, so sceptics won’t lose the joy of driving. For all the results, see the paper.


Machine learning, social learning and the governance of self-driving cars

Developers of SDCs should aim to make them safer. This is hard, because innovation is very uncertain: you don’t know exactly what your product will be, so it is hard to know exactly what you want it to be safe to do. The final design can differ from the current design, so you end up testing a different thing. The focus should be on social learning: the system needs to learn from society, and society needs to learn from the system. Much can also be learned from historical cases. The algorithmic architecture of the programming begins with if-else rules, but situations are too complex for this approach alone; the system should learn from vast datasets drawn from the real world. Problems can arise because regulations aren’t necessarily based on real-world needs and may be arbitrary. Developers aren’t even always capable of seeing how the system is learning from the data, so explicit problems need to be defined beforehand. “Self-driving” and “autonomous” cars are misnomers, because the cars are never truly autonomous: they are driven by social goals, and technology can never have a will of its own. People must be made aware of the limits. This is why the German government asked Tesla to rename its Autopilot function after failures; it is dangerous for people to think it is completely safe. Tesla never connected the failures to its own shortcomings, though it did install technological alternatives when it noticed some components weren’t good enough. Autonomous cars aren’t as independent as people tend to believe. They should be well trained, and it would therefore be positive to democratize the learning, so that every company can maximize the outcome and the safety.


A Philosophy for Developing Trust in Self-driving Cars

Cars are becoming more automated, and this will reduce accident rates: automation can eliminate accidents due to inattentive drivers. However, humans are far better at reacting to situations they are not explicitly trained for. The world is an unstructured environment, and even thousands of test miles cannot eliminate some failures. Inductive inference, for example machine learning, is crucial in building solid software: a computer can learn for itself what the clearest features of pedestrians are and how to react to them.

Situations that occur rarely are hard for programmers to take into account and are not easy to learn through experience. According to Popper, a theory is only meaningful when it is falsifiable, because one needs only one negative example to falsify it. According to the authors, a single accident similarly makes the safety case more meaningful. No confirmatory tests should be executed; rather, the goal should be a negative test result, so we know what to improve. Field testing costs too much money to do for a long time, and simulation testing doesn’t fit either, because one will never simulate situations one doesn’t expect to take place. Fuzz testing is a well-fitting alternative, but it is not very efficient, because it uses random values, a great part of which aren’t very interesting to test. The Ballista project uses dictionaries of interesting values to test and is therefore more likely to find big vulnerabilities. The conclusion is that the tester should aim to find flaws, instead of gathering never-ending evidence that the system works at all times.


The ethics of crashes with self‐driving cars: A roadmap, I

Self‐driving cars hold out the promise of being much safer than regular cars. Yet they cannot be 100% safe. Accordingly, they need to be programmed for how to deal with crash scenarios. Should cars be programmed to always prioritize their owners, to minimize harm, or to respond to crashes on the basis of some other type of principle? The article first discusses whether everyone should have the same “ethics settings.” Next, the oft‐made analogy with the trolley problem is examined. Then follows an assessment of recent empirical work on lay‐people's attitudes about crash algorithms relevant to the ethical issue of crash optimization. Finally, the article discusses what traditional ethical theories such as utilitarianism, Kantianism, virtue ethics, and contractualism imply about how cars should handle crash scenarios.

It might seem like a good idea to always hand over control to a human driver in any accident scenario. However, typical human reaction times are too slow for this to always be a good idea (Hevelke & Nida-Rümelin, 2015). Jason Millar argues that a person’s car should function as a “proxy” for their ethical outlook; people should therefore be able to choose their own ethics settings (Millar, 2014; see also Sandberg & Bradshaw-Martin, 2013). Similarly, Giuseppe Contissa and colleagues argue that self-driving cars should be equipped with an “ethical knob,” so that whoever is currently using the car can set it to their preferred settings (Contissa, Lagioia, & Sartor, 2017). Jan Gogoll and Julian Müller, in contrast, argue that we all have self-interested reasons to want everyone’s cars to be programmed according to the same settings (Gogoll & Müller, 2017). One advantage of giving people a certain degree of choice here is that this might make it easier to hold them responsible for any bad outcomes that crashes involving their vehicles might give rise to (Sandberg & Bradshaw-Martin, 2013; cf. Lin, 2014).

One of the questions this raises is whether the vast literature on the trolley problem might be a useful source of ideas about how to deal with the ethics of crashing self-driving cars. Together with Jilles Smids, I have put forward three reasons for being skeptical about relying very heavily on the trolley problem literature here (Nyholm & Smids, 2016). Firstly, in the trolley literature we are typically asked to imagine that the only morally relevant factors are a very small set of factors; any bigger and more complex sets of considerations are imagined away. Secondly, in most trolley discussions we are asked to set all questions of moral and legal responsibility aside and focus only on the choice between the one and the five. In actual traffic ethics, we cannot ignore questions about responsibility. Thirdly, in trolley discussions a fully deterministic scenario is imagined: it is assumed that we know with certainty what the outcomes of our available choices would be. In contrast, when we are prospectively programming self-driving cars for how to deal with accident scenarios, we do not know what scenarios they will face; we must make risk assessments (Nyholm & Smids, 2016). Empirical ethics suggests that people generally want cars to minimize overall harm. However, when surveyed about what kinds of cars they themselves would want to use, people tend to favor cars that would save them in an accident scenario. People thus appear to have inconsistent or paradoxical attitudes: many want others to have harm-minimizing cars, while themselves wanting cars that would favor them.

We can also take a “top-down” approach: consider what utilitarians (or consequentialists more broadly), Kantians (or deontologists more broadly), virtue ethicists, or contractualists would recommend regarding this topic. Utilitarian ethics is about maximizing overall happiness while minimizing overall suffering. Kantian ethics is about adopting a set of basic principles (“maxims”) fit to serve as universal laws, in accordance with which all are treated as ends-in-themselves and never as mere means. Virtue ethics is about cultivating and then fully realizing a set of basic virtues and excellences. Contractualist ethics is about formulating guidelines people would be willing to adopt as a shared set of rules, based on nonmoral or self-interested reasons, in a hypothetical scenario where they would be making an unforced agreement about how to live together. A utilitarian would be mindful of the fact that people might be scared of taking rides in “utilitarian” cars, instead preferring cars programmed to prioritize their passengers. The lesson from Kantian ethics might be that we should choose rules we would be willing to have as universal laws applying equally to all, so as to make everything fair and not give some people an unjustified advantage in crash scenarios. It is hard to come up with virtue-ethical ideas about how self-driving cars should crash (cf. Gurney, 2016), but virtue ethics might help when we think about the ethics of automated driving more generally. Perhaps a lesson from a virtue-ethical perspective is that we should try to design and program cars in ways that help make people act carefully and responsibly when they use self-driving cars.


The Ethics of Accident-Algorithms for Self-Driving Cars: an Applied Trolley Problem?

We identify three important ways in which the ethics of accident-algorithms for self-driving cars and the philosophy of the trolley problem differ from each other. These concern: (i) the basic decision-making situation faced by those who decide how self-driving cars should be programmed to deal with accidents; (ii) moral and legal responsibility; and (iii) decision-making in the face of risks and uncertainty.

According to Frances Kamm, the basic philosophical problem is this: why are certain people, using certain methods, morally permitted to kill a smaller number of people to save a greater number, whereas others, using other methods, are not morally permitted to kill the same smaller number to save the same greater number of people? (Kamm 2015) The morally relevant decisions are prospective decisions, or contingency-planning, on the part of human beings. In contrast, in the trolley cases, a person is imagined to be in the situation as it is happening, making a split-second decision. This is unlike the prospective decision-making, or contingency-planning, we need to engage in when we think about how autonomous cars should be programmed to respond to the different types of scenarios we think may arise. The decision-making about self-driving cars is more realistically represented as being made by multiple stakeholders, for example ordinary citizens, lawyers, ethicists, engineers, risk-assessment experts, car manufacturers, etc. These stakeholders need to negotiate a mutually agreed-upon solution. In one case, then, the morally relevant decision-making is done by multiple stakeholders who are making a prospective decision about how a certain kind of technology should be programmed to respond to situations it might encounter, and there are no limits on what considerations, or how many, might be brought to bear on this decision. In the other case, the morally relevant decision-making is done by a single agent who is responding to the immediate situation he or she is facing, and only a very limited number of considerations are taken into account.

Responsibility: Suppose, for example, there is a collision between an autonomous car and a conventional car, and though nobody dies, people in both cars are seriously injured. This will surely not only be followed by legal proceedings; it will also naturally, and sensibly, lead to a debate about who is morally responsible for what occurred. Forward-looking responsibility is the responsibility that people can have to try to shape what happens in the near or distant future in certain ways. Backward-looking responsibility is the responsibility that people can have for what has happened in the past, either because of what they have done or what they have allowed to happen (Van de Poel 2011). Applied to risk-management and the choice of accident-algorithms for self-driving cars, both kinds of responsibility are highly relevant.

Uncertainties: the self-driving car cannot acquire certain knowledge about the truck’s trajectory, its speed at the time of collision, or its actual weight. Second, focusing on the self-driving car itself, in order to calculate the optimal trajectory it needs (among other things) perfect knowledge of the state of the road, since any slipperiness of the road limits its maximal deceleration. Finally, if we turn to the elderly pedestrian, we can again identify a number of sources of uncertainty: using facial recognition software, the self-driving car can perhaps estimate his age with some degree of precision and confidence, but it can merely guess his actual state of health.


Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis

Autonomous cars raise legal as well as moral questions. Patrick Lin is concerned that any safety gain will constitute a trade-off with human lives. The second question is whether it would be morally acceptable to put liability on the user, based on a duty to pay attention to the road and traffic and to intervene when necessary to avoid accidents. The answer should depend on whether the driver would ever have a realistic chance to intervene. The article discusses two options: a driver with a duty to intervene, and a driver with no duty (and thus no control). Under the first option, if the driver never had a real chance of intervening, he should not be held responsible. However, this holds only for the new cars, and they would still not be accessible to the blind and others who cannot drive. Under the second option, where the driver has no control, it makes more sense to hold the manufacturer accountable, although this would more sensibly take the form of some kind of tax or insurance. Manufacturers should not be freed of their liability completely (take the Ford Pinto case as an example).


Ethical decision making during automated vehicle crashes

Three arguments were made in this paper: automated vehicles will almost certainly crash, even in ideal conditions; an automated vehicle’s decisions preceding certain crashes will have a moral component; and there is no obvious way to effectively encode human morality in software. A three-phase strategy for developing and regulating moral behavior in automated vehicles was proposed, to be implemented as technology progresses. The first phase is a rationalistic moral system for automated vehicles that will take action to minimize the impact of a crash based on generally agreed upon principles, e.g. injuries are preferable to fatalities. The second phase introduces machine learning techniques to study human decisions across a range of real-world and simulated crash scenarios to develop similar values. The rules from the first approach remain in place as behavioral boundaries. The final phase requires an automated vehicle to express its decisions using natural language, so that its highly complex and potentially incomprehensible-to-humans logic may be understood and corrected.
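The first phase described above, a rationalistic rule system that minimizes crash impact according to generally agreed principles, can be sketched as a lexicographic ranking of predicted outcomes. This is purely an illustrative assumption of how such a system might look; the maneuver names, outcome fields, and numbers are invented and do not come from the paper.

```python
# Hypothetical sketch of a "phase one" rationalistic moral system:
# rank candidate maneuvers so that fatalities are avoided first,
# then injuries, then property damage. All values are illustrative.
from dataclasses import dataclass

@dataclass
class Outcome:
    maneuver: str
    fatalities: int
    injuries: int
    property_damage: float  # arbitrary cost units

def choose_maneuver(outcomes):
    """Pick the best outcome under a lexicographic ordering:
    minimize fatalities first, then injuries, then property damage."""
    return min(outcomes, key=lambda o: (o.fatalities, o.injuries, o.property_damage))

options = [
    Outcome("brake_straight", fatalities=0, injuries=2, property_damage=10.0),
    Outcome("swerve_left",    fatalities=1, injuries=0, property_damage=5.0),
    Outcome("swerve_right",   fatalities=0, injuries=1, property_damage=30.0),
]
best = choose_maneuver(options)
print(best.maneuver)  # swerve_right: no fatalities, fewest injuries
```

The lexicographic ordering encodes the "injuries are preferable to fatalities" principle as a hard priority rather than a trade-off, which is one plausible reading of the rule-based first phase; the later machine-learning phase would then operate within these boundaries.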


The social dilemma of autonomous vehicles

When it becomes possible to program decision-making based on moral principles into machines, will self-interest or the public good predominate? In a series of surveys, Bonnefon et al. found that even though participants approve of autonomous vehicles that might sacrifice passengers to save others, respondents would prefer not to ride in such vehicles (see the Perspective by Greene). Respondents would also not approve regulations mandating self-sacrifice, and such regulations would make them less willing to buy an autonomous vehicle.


The truth about ‘self-driving’ cars

They are coming, but not in the way you may have been led to think. Self-driving cars face many issues: taking safe turns, changing road surfaces, snow and ice, and avoiding traffic cops, crossing guards and emergency vehicles. And automatic stopping for pedestrians may make people prefer to walk or take the subway instead. We have very unrealistic expectations of self-driving cars; they will not happen the way you have been told.


We are currently only arriving at level 3 cars. The CEO of Nissan said fully automated (level 5) cars would be on the road by 2020. That is not true; level 4 cars may arrive in the next decade. Defining automated driving is much more complex than we think. Despite the popular perception, human drivers are remarkably capable of avoiding crashes. Consider how often your laptop freezes or slows down: software failures of that kind will inevitably lead to crashes, so there is a major software problem.

Software on aircraft is much less complex, since aircraft have to deal with fewer obstacles and other vehicles. The testing of automated cars will also face serious problems: statistically, a lot of people would have to be exposed to crashes over a long period of time. There is also a financial boundary, since the cars must stay affordable for the public. Some people think AI will give us self-driving cars, but the problem with that is that it is non-deterministic: two cars with the same assembly may, after a year, have automation systems with different behaviour. It is out of our control.

Writer: Fully automated cars will not be here until 2075. Level 3 cars have a problem with the driver zoning out. This problem is so hard that some car manufacturers will not even try level 3, so outside of traffic-jam assistants, level 3 will probably never happen. Level 4 will happen eventually, but only on certain parts of roads and in certain weather conditions. These scenarios might not sound as futuristic as having your own personal electronic chauffeur, but they have the benefit of being possible, and soon.


Transitioning to driverless cars

Despite some nuances, the future looks mostly bright. The questions are how to get there, and what the transition to a full system of driverless cars will look like. A lot of the discussion so far has focused on insurance and ethical issues. Who is responsible in case of accidents? If the computer has to choose a victim in a collision, who will it be, its own passenger or a passenger in another car? These questions are interesting, but it is hard to imagine they will be major stumbling blocks. New technologies have brought new risks for many years, and ways have been found to spread those risks and define new forms of protection and liability. The ethical question probably makes for interesting debates in an introduction to ethics class at a university, but it is unlikely to have much practical relevance. Driverless cars will be much safer than cars are now.

A good case can be made that the key transitional problems will be instead about the political economy of the regulation of driverless cars and the cohabitation between driverless cars and cars driven by human beings. For car producers or would-be car producers, two strategies are possible. The first is incremental and consists of making cars gradually less reliant on drivers. That has been the strategy of most incumbent car producers. The incremental strategy presents one major problem, however. Partially driverless cars may be safer, but the true timesaving benefits of driverless cars will occur only when cars become completely driverless. With this scenario, the transition is likely to be extremely long, and how the last step about getting rid of the wheel will take place is unclear.

The alternative strategy is rupture: the direct development of cars without a steering wheel, which is the Google, Inc. strategy. It is an appealing but difficult proposition on several counts. It will require maximum software sophistication right from the start. If anything, the process will get easier as more driverless cars are on the road. Some technical issues seem extremely tricky to resolve. Incumbent car manufacturers that are betting on incremental change, not on cars without wheels right from the start, will probably do everything they can to prevent fully driverless cars from being able to operate.

Realizing that its radical innovation will be a hard sell, Google appears to want to make it even more radical. If Google cars cannot operate in existing cities, perhaps new cities need to be created for them. That probably sounds like a mad idea to many, but history teaches us that it may not be as crazy as it sounds. What was possibly the first suburb of America, the Main Line of Philadelphia, Pennsylvania, was developed by rail entrepreneurs who realized that developing suburbs was much more profitable than operating railways.


Driverless Cars and the City: Sharing Cars, Not Rides

The world of driverless cars heralds revolutionary changes, but for cities (metropolitan areas) the process will be evolutionary. No “Big Bang” will happen, but it will slowly evolve. Driverless cars will not significantly impact urban form, but will expand opportunity and quality of life for the disabled and other people who are unable to drive.

Who’s at the wheel: Driverless cars and transport policy

Many of the claims for the benefits of driverless technologies rely on the complete transformation of the existing vehicle fleet. But the transition will not be smooth or uniform: winners and losers in the competition between the different interest groups will depend on many factors.

Freeways are likely to be the first spaces in which the new vehicles will be able to operate. In any case, problems of congestion and competition for space at any popular destination will not be resolved. The ambition is to allow cars, bikes and pedestrians to share road space much more safely than they do today, with the effect that more people will choose not to drive. But, if a driverless car or bus will never hit a jaywalker, what will stop pedestrians and cyclists from simply using the street as they please?

Some analysts are even predicting that the new vehicles will be slower than conventional driving, partly because the current balance of fear will be upset. While this might be attractive to cyclists, will it affect the marketability of Google’s new products? With huge reserves of cash and consequent lobbying power, Google and its ilk will be in a strong position to demand concessions from governments and road authorities. You can just imagine the pitch: we can save you billions on public transport operations, but we need fences to keep bikes and pedestrians out of the way of our vehicles in busy urban centres. Lost in the enthusiasm for the new is the simple reality of the limited availability of urban space. New technologies of driverless trains may reduce costs and allow us to improve the quality of the service, but only if that is the focus of investment and innovation.

I would urge readers of ReNew to turn their minds to the real alternative technologies we need in urban transport. Rather than follow the individualist model, which directs our attention to the technology of the vehicle, let’s turn our attention to the ‘technology of the network’. How can we build on the insights of the Europeans and Canadians and use the potential of IT and electronics to build better collective transport systems that connect all of us to the life of the city without consuming all the space we need to live and grow?


Driverless Highways: Creating Cars That Talk to the Roads

The art of road building has been improving since the Roman Empire, yet today’s highways remain little more than dumb surfaces with no data flowing between vehicles and the road. China already restricts the number of vehicles that can be licensed in Shanghai and Beijing. Going driverless brings some exciting new options. Driverless cars will be a very disruptive technology. To compensate for the loss of a driver, vehicles will need to become more aware of their surroundings. With cameras, the car-to-road relationship becomes symbiotic, far different from the human-to-road relationship, which is largely emotion based. An intelligent car coupled with an intelligent road is a powerful combination, enabling:

- Lane compression

- Distance compression

- Time compression

On-demand transportation. All car parts and components will need to be designed to be more durable and longer lasting. The shift is from driver to rider: fancier dashboards, movies, music and massage interfaces. China doesn’t need more cars; it needs more transportation.

Conclusion: We all love to drive, but humans are the inconsistent variable in this demanding area of responsibility. Driving requires constant vigilance, constant alertness, and constant involvement. Once we take the driver out of the equation, however, we solve far more problems than the wasted time and energy needed to pilot the vehicle. But vehicle design is only part of the equation. Without reimagining the way we design and maintain highways, driverless cars will only achieve a fraction of their true potential.


Users’ resistance towards radical innovations: The case of the self-driving car

The advent of self-driving cars could eliminate the driver from the driving equation, with the potential to substantially improve safety, time and fuel efficiency, as well as mobility in general. The introduction of such a radically new technology is surrounded by a high degree of uncertainty, and possibly not all stakeholders would welcome the change. As a result, the widespread acceptance and hence adoption of this new technology is far from certain, and is therefore analyzed comprehensively in this paper. Given that it will be the end-consumers (the actual drivers) who eventually decide whether self-driving cars successfully materialize on the mass market, the lack of wider empirical evidence for the user perspective forms the rationale for this research.

User resistance to change has been found to be a crucial cause for many implementation problems. The assumption that a possibly disruptive innovation such as the self-driving car could lead to major resistance on behalf of the public is based on the fact that people regularly react with caution and wariness to ‘new things’ and ‘change’ or, in extreme cases, even fight them.

Possible causes of resistance: Regarding the desired level of automation, Khan, Bacchus, and Erwin (2012, p. 88) hypothesize that “it is likely that a significant percentage of drivers may not be comfortable with full autonomous driving.”

1. People might experience driving to be “adventurous, thrilling and pleasurable” (Steg, 2005, p. 148). Mokhtarian and Salomon (2001, p. 695) argue that travel “is not only derived demand”, but may be “desired for its own sake”. While self-driving cars might offer significant advantages for many segments of the population, driving enthusiasts might not be among the people adopting this new technology.

2. Similarly, analyzing reasons why people do not use public transportation, Böhm et al. (2006, p. 4) make a distinction between “moving” and “being moved”, highlighting the latter as “dependent”. This poses the question whether self-driving cars could be seen as providing the ultimate level of autonomy, as people are free to engage in any activity once relieved from the task of driving or, psychologically, making people dependent on technology.

3. Further, as people regularly view their cars as source of power and similar attributes, “it is uncertain whether this close identification of personal autonomy with a person’s vehicle may be different with regard to use of autonomous vehicles” (Glancy, 2012, p. 1188).

4. Other users might resist self-driving technology not because they value the driving task but because they simply do not trust “a machine making decisions for them” (Rupp & King, 2010, p. 3).

5. There are also privacy issues.

6. Another potential cause for barriers towards self-driving technology is the risk of a “misbehaving computer system” (Douma & Palodichuk, 2012, p. 1164). With autonomous vehicles, criminals or terrorists might be able to hack into and use their cars for illegal purposes such as drug trafficking or, even worse, terroristic attacks (Douma & Palodichuk, 2012).

7. Further, the unavoidable rate of failure (or crashes), no matter how small, could foster initial mistrust, especially as people tend to underestimate the safety of technology while putting excessive trust in human capabilities such as their own driving skills.

Results: This study is explorative, since scientific research about self-driving cars is still in its infancy. A non-probability convenience sampling method was applied, and data were collected over a two-week time frame in July 2015 using a quantitative self-completion online questionnaire. Discussion: there were considerable differences between sub-groups, with older respondents more worried about self-driving cars than younger respondents, females having more concerns than males, and rural respondents valuing self-driving cars less than urban participants. Surprisingly, people who used a car more often tended to be less open to the technology. Correspondingly, and across all sub-groups, the most pronounced desire of respondents was the possibility to manually take over control of the driving task whenever wanted, which entails keeping the steering wheel. It is thus seen as crucial to include an overriding function in the initial versions of self-driving cars. It stood out that the more participants knew about self-driving cars, the more positive their attitude towards these vehicles tended to be. A lack of knowledge about the functioning of the product will thus most certainly lead to non-adoption.


Acceptance of Self-driving Cars

One study (Payre, Cestac, & Delhomme, 2014) reported that driving while impaired from alcohol, drugs, or medications was a major dimension of acceptance of self-driving vehicles, and other studies have suggested that people expect to be able to engage in a wide variety of secondary tasks in self-driving cars (Kyriakidis, Happee, & De Winter, 2014; Pettersson & Karlsson, 2015).

These emerging expectations may reflect overconfidence in our ability to automate the driving task. The implementation of autonomous vehicles faces considerable unresolved challenges. Unless automation of driving can be implemented with perfect or near-perfect reliability—an outcome that seems implausible, especially during anticipated transitional phases of deployment during which self-driving cars will share roads with traditional vehicles (Sivak & Schoettle, 2015)—the human likely will retain a supervisory role during automated driving. Human operators of autonomous vehicles seem to be in danger of being allocated an especially mundane function: to continuously maintain awareness of the driving scenario in anticipation of very infrequent occasions when human intervention will be necessary. Even if appropriate interfaces can be designed to keep drivers in the loop, it remains unclear whether consumers would accept an automated vehicle that could perform all driving tasks, did perform most driving tasks, yet demanded a high amount of monitoring workload.

Highly idealized portrayals have begun to foster expectations that self-driving cars will require little or no human intervention and will create a windfall of work, leisure, or social time during transit. Initial deployment of self-driving cars could be slowed or harmed if the technology is received with disappointment. Trust in automation is influenced by expectations and attitudes that develop before a person uses a system (Hoff & Bashir, 2015), thus it will be important to understand acceptance before the arrival of self-driving cars on markets (see Payre et al., 2014). To the extent that idealized portrayals of vehicle automation already have begun to influence acceptance, they may also be encouraging unrealistic expectations about automation performance that could be counterproductive to acceptance in the long run.

In this experiment, an online sample of participants read either a realistic or an idealized description of a close friend or family member’s experiences during the first six months of ownership of a self-driving car. The realistic vignette emphasized that the driver felt the need to monitor the vehicle during automated operations and occasionally needed to resume manual control to prevent accidents. The idealistic scenario described a vehicle with perfect reliability that did not require human monitoring or intervention and had won the driver’s trust. A novel, 24-item scale assessed acceptance of self-driving cars in both vignette conditions and a control condition. The idealized portrayal was hypothesized to increase overall acceptance of self-driving cars.

Participants completed an instrument created for this experiment, the Self-driving Car Acceptance Scale (SCAS). The SCAS featured 24 statements written to assess the extent to which participants were accepting of self-driving cars; responses were made on a 7-point Likert scale anchored at “strongly disagree” and “strongly agree.” People may be more accepting of self-driving cars under idealized rather than (arguably more) realistic scenarios during the initial deployment of the technology. The effect of the idealized depiction was small, but it suggests that idealized descriptions may be able to affect acceptance of self-driving cars before people interact with them.
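Scoring an instrument like the SCAS typically amounts to averaging the 7-point Likert responses, flipping any reverse-keyed items so that a higher mean always indicates greater acceptance. The sketch below is only illustrative: the toy item count, the reverse-keyed item, and the responses are assumptions, not details reported for the SCAS.

```python
# Illustrative Likert-scale scoring for a hypothetical acceptance scale.
# Responses are on a 1-7 scale (1 = strongly disagree, 7 = strongly agree).
def score_scale(responses, reverse_keyed=()):
    """Average the responses, reversing items whose index is in
    reverse_keyed (on a 1-7 scale, a response r becomes 8 - r)."""
    adjusted = [8 - r if i in reverse_keyed else r
                for i, r in enumerate(responses)]
    return sum(adjusted) / len(adjusted)

# A toy 4-item respondent; item 3 is (hypothetically) reverse-keyed,
# so the raw "2" counts as a strongly accepting answer.
print(score_scale([6, 7, 5, 2], reverse_keyed={3}))  # 6.0
```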


Self-Driving Car Acceptance and the Role of Ethics

Research question: in the scope of unavoidable accidents, what is the effect of different ethical frameworks governing self-driving car decision-making on their acceptance? To exemplify the impact of ethics on the acceptance of self-driving cars, consider the situation of an imminent fatal accident involving pedestrians and car passengers. One could argue that innocent pedestrians ought to be spared, and hence the car passengers should bear the risk of being fatally injured. This would most probably be seen positively by the majority of people in a city, especially non-drivers. However, the question raised is whether anyone would buy such a car knowing he or she is in high danger; probably not. That may in turn decrease sales of self-driving cars, so that they never reach a critical mass, and hence the envisioned benefits coupled with their existence (e.g., an overall reduction of accidents) would not materialize as expected.

The ethics embedded in the decision-making of a self-driving car, especially in the case of unavoidable accidents, will most probably affect public acceptance. The nature of the ethics, i.e., the ethical framework utilized, may also play a role, something that has not been sufficiently investigated. In this work quantitative positivist research is carried out, and the empirical data is collected via a questionnaire. With respect to the process followed: first, the ethical frameworks are selected and described; the frameworks are then posed in the unavoidable-accident context and a model that hypothesizes their link to the acceptance of self-driving cars is proposed; subsequently, a survey with questions that capture the identified factors (ethical frameworks) is constructed and empirical data is collected. The sampling frame is general; the initial scope is university students (at Master’s level), as they represent a good mix of technology savviness and can easily understand the context in which self-driving cars will have to operate. The following frameworks were selected as representative: Utilitarianism, Deontology, Relativism, Absolutism (monism), and Pluralism. Utilitarianism is a normative ethical framework that considers the best action to be the one that maximizes a utility function weighing the positive and negative consequences of the choices pertaining to the decision.

Deontology is a normative ethical framework that holds that there are rules with an absolute quality, which means that they cannot be overridden. As such, deontologists reject the view that what matters are the consequences of an action, and hold that what matters is the kind of action to be taken. Ethical relativism is a meta-ethical framework which argues that “all norms, values, and approaches are valid only relative to (i.e., within the domain of) a given culture or group of people”. Hence, in this framework, it is proposed that a society’s practices can be judged only by its own moral standards. Ethical absolutism, or ethical monism, is a meta-ethical framework at the antipodal point of ethical relativism. This framework, also referred to as the “doctrine of unity”, can be described as follows: “There are universally valid moral rules, norms, beliefs, practices, etc. [. . . that] define what is right and good for all at all times and in all places – those that differ are wrong”.

Ethical pluralism is a meta-ethical framework that rejects absolutism (that there is only one correct moral truth) and relativism (that there is no correct moral truth) as unsatisfactory and proposes that there is a plurality of moral truths. It is sometimes referred to as “doctrine of multiplicity”. The ethical pluralist argues that indeed there are universal values (as indicated in absolutism) however, instead of considering that there is only a single set always applicable, it considers that there are many which can be interpreted, understood and applied in diverse contexts (as indicated in ethical relativism).

A closer look at Utilitarianism results shown in Figure 3 reveals that most people consider that an assessment of some kind ought to be done by the self-driving car and be integrated into its decision algorithms.
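A utilitarian assessment of the kind respondents expect could, in principle, be integrated into a decision algorithm as a scalar utility function over predicted consequences, with the action of highest (least negative) utility chosen. The following is a minimal sketch under invented assumptions: the harm categories, weights, and scenarios are illustrative only and are not taken from the survey.

```python
# A toy utilitarian crash-decision assessment: score each candidate
# action by summing weighted consequences, then pick the maximum.
# Weights and scenarios are purely illustrative assumptions.
HARM_WEIGHTS = {"fatality": -1000.0, "injury": -100.0, "damage": -1.0}

def utility(consequences):
    """consequences: dict mapping harm type -> count/magnitude."""
    return sum(HARM_WEIGHTS[k] * v for k, v in consequences.items())

actions = {
    "brake":  {"fatality": 0, "injury": 1, "damage": 20},
    "swerve": {"fatality": 0, "injury": 2, "damage": 5},
}
best_action = max(actions, key=lambda a: utility(actions[a]))
print(best_action)  # brake: utility -120 vs. swerve's -205
```

Note how the choice of weights is itself an ethical commitment: unlike a hard rule ordering, a single utility function allows enough property damage to outweigh an injury, which is exactly the kind of trade-off the survey questions probe.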

Deontology implies that there is an expectation that the self-driving cars carry out their duties with good intentions independent of consequences. As seen in Figure 4, the prevalent view is that cars should treat all people on an equal basis (hence not assigning values to individual people as utilitarianism suggests), as well as trying to protect the innocent pedestrians.

Absolutism (monism) propagates the existence of global moral values, norms, beliefs, and practices that are praised by those who agree and condemned by those who disagree. Such views propagate group beliefs and may create tensions in society, as shown by the wide spread of replies to question A4 in Figure 5, which asks whether life is sacred and whether a machine knowingly killing people would be acceptable. Figure 5 also shows a strong positioning that the car should have such ethics and take life-and-death decisions independently of whether its owner agrees. This has several implications: it would mean that self-driving cars might behave differently than their owners wish, and it raises the concern of whether cars that do so would actually be bought by people who disagree with their car’s decisions in critical situations.

Relativism affirms tolerance and is bound to culture, time, and society, which may ease the acceptance of decisions taken by self-driving cars in critical situations. As shown in Figure 6, people consider that the self-driving car ought to take such ethics into account in its decisions. Such considerations may reflect the diversity of cultures and philosophies found in the world, but may also create “deadlocks” where specific decisions of the self-driving car cannot be praised or condemned.

Pluralism, propagating the plurality of moral truths, provides a balance between a highly heterogeneous world, tolerance, and basic human values such as human rights. Hence, ethical differences may be approached on a global scale. This is also reflected in the views captured in Figure 7, where a mix of aspects is shown, e.g., the owner’s or society’s moral views should be considered, while law and global ethical values ought also to be respected. The pluralism framework is therefore seen as a good candidate for decision-making in self-driving cars. However, due to the multiple perspectives that need to be incorporated, it is also highly complex and hence not easy to realize.

Finally, the survey also measured some aspects of self-driving car acceptance, as shown in Figure 8, from which it is evident that there is a need for ethics to be embedded in self-driving cars. People seem to trust self-driving cars, would opt to buy them once they are available, and may prefer them over normal (non-self-driving) ones. Overall there is a very strong view that society needs self-driving cars, as their benefits for a safer and more inclusive society cannot be overlooked. The overall strong support for all frameworks means that there is no clear suggestion, at least from this research, that a specific framework should be preferred in self-driving cars; no one-size-fits-all solution can be proposed. On the contrary, since all of the frameworks seem to have an impact, different parts of society may have different needs and preferences. One thing is clear: the ethical frameworks considered in this research need to be investigated in depth, not only qualitatively but also with mass-scale quantitative surveys, as part of the overall research priorities set for AI.

Future directions: It is high time to investigate in detail the ethical angle of issues that pertain to the acceptance of self-driving cars, especially from the diverse viewpoints of the multiple stakeholders involved in their lifecycle. An intersectional analysis spanning law, society, economy, culture, etc. may be the proper way to move forward and tackle the issues raised in this work.

Some challenges are:

- Will people adjust their road behaviour because of reliance on automation?

- If the ethics of the car conflict with the ethics of the buyer, will they actually buy/use the car?

- Is there bias in learning algorithms for self-driving cars, especially in regard to ethics?

- Should all cars have the same ethical setting?

- How do we stop ‘hackers’ from making their own preferential ethical setting?

- How do we prevent a situation in which a more expensive car likely comes with better ethical software?

- How do we tackle privacy concerns?

- Who is liable for the ethical decisions of the car?

- How would two cars with different ethical settings negotiate their outcome?


Differences in Acceptance of Self-driving Cars: A Survey of Perceptions and Attitudes

Introduction: There is a significant body of research around technology acceptance across various domains. Numerous studies have built on to earlier models such as the Technology Acceptance Model (TAM) [1] and the Diffusion of Innovations Theory [2]. In TAM, perceived usefulness and perceived ease-of-use are main factors that affect a user’s attitudes toward using technology, which then influences the user’s behavioral intentions and actual usage, as illustrated in Fig. 1. In the Diffusion of Innovations Theory, five characteristics – relative advantage, compatibility, complexity, trialability and observability – are the key factors that underlie adoption.
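The causal structure of TAM described above (perceived usefulness and perceived ease-of-use shape attitude, which in turn drives behavioral intention, with usefulness also acting directly on intention) can be rendered as a toy numeric model. The weights and scale values below are invented purely to make the structure concrete; they are not estimates from the TAM literature.

```python
# Toy rendering of TAM's causal chain. All weights are illustrative
# assumptions, not fitted coefficients from any study.
def attitude(usefulness, ease_of_use, w_u=0.6, w_e=0.4):
    """Attitude toward use as a weighted blend of the two perceptions."""
    return w_u * usefulness + w_e * ease_of_use

def behavioral_intention(att, usefulness, w_a=0.7, w_u=0.3):
    """TAM also posits a direct usefulness -> intention path."""
    return w_a * att + w_u * usefulness

# A respondent rating usefulness 5 and ease of use 3 on a 1-7 scale:
att = attitude(usefulness=5.0, ease_of_use=3.0)   # roughly 4.2
print(behavioral_intention(att, usefulness=5.0))  # roughly 4.4
```

The point of the sketch is only the dependency structure: intention never sees ease-of-use directly, which is why interventions that improve ease-of-use act on acceptance through attitude in TAM.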

Age-related changes in physical and cognitive capabilities, however, can lead to declines in mobility and driving abilities [14, 15], leading many older adults to stop driving altogether. For this reason, they may be the primary beneficiaries of self-driving cars. Older adults, however, have knowledge of and experiences with technology that may differ from younger generations, which may cause them to perceive and accept self-driving cars differently.

While research on technology adoption and transportation safety has begun to explore determinants of acceptance and age effects with regards to new automotive technologies, how different generations perceive and accept self-driving cars is not yet fully understood. In this study, a large-scale survey was conducted to investigate older adults’ perceptions of and attitudes toward self-driving cars, and how their perspectives differ from other generations.

Results: The following factors were significant predictors of self-driving car acceptance: perceived usefulness, affordability, social support, lifestyle fit and conceptual compatibility. Across ages, those who perceived self-driving cars to be more practical, affordable, accepted by peers, and compatible with their lifestyles and conceptual mental models were more interested in getting and using them. Furthermore, attitudinal interest in self-driving cars strongly predicted behavioral intentions to use them.

Age was negatively associated with perceptions, attitudes and behavioral intentions toward the acceptance and use of self-driving cars. Older participants perceived self-driving cars as significantly less useful and more difficult to use compared to younger participants. Older adults were also more likely to think that self-driving cars would be more expensive and harder to purchase or access. Older adults indicated that they believed self-driving cars were less likely to be backed up with technical support, less likely to provide emotional benefits, less likely to be approved by their peers, less reliable, less likely to work with other technologies they have, and less likely to fit with their lifestyles and mental models, compared to younger participants. Strong inverse relationships with age were also found for overall level of interest in using a self-driving car and likelihood of purchasing one in the future, indicating that older adults are currently less interested in self-driving cars and less likely to use one when it becomes available. Millennials were most favorable toward the use of self-driving cars. The silent generation (born before 1945) said they were not likely to consider using a self-driving car in any case.

Across ages, however, participants indicated that they would be more likely to use a self-driving car if they were no longer able to drive and less likely to use one if they were capable of driving.

In addition to age, experience with technology in general was strongly associated with self-driving car acceptance. Participants who self-reported greater experience with technology in general and higher confidence in use of new technologies were significantly more interested in self-driving cars and more likely to purchase one in the future. Those who self-reported being more knowledgeable about new technologies were significantly more likely to purchase a self-driving car in the future if they were no longer able to drive. The findings suggest that while self-driving car acceptance varies across generations, as shown in Table 4, age may have an indirect effect on acceptance through experience with technology in general. Additionally, current drivers and non-drivers showed minor differences in their attitudes toward using self-driving cars. Participants who did not have a valid driver’s license were significantly more likely to be interested in using a self-driving car than those who currently had a valid driver’s license. No significant interaction effects were observed between age and possession of a driver’s license.
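The negative age–acceptance association reported above is the kind of relationship a simple correlation analysis over survey responses would surface. The sketch below computes a Pearson correlation on made-up Likert-scale data (not the study’s actual dataset) purely to illustrate the analysis:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical respondents: age paired with interest in self-driving
# cars on a 1-7 Likert scale (illustrative values only).
ages     = [22, 25, 31, 38, 45, 52, 60, 68, 74, 81]
interest = [ 6,  7,  6,  5,  5,  4,  3,  3,  2,  2]

r = pearson(ages, interest)
print(f"r = {r:.2f}")  # strongly negative, consistent with the reported pattern
```

A real analysis would of course use the full sample and control for covariates such as technology experience, which the study suggests mediates the age effect.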


The Influence of Feelings While Driving Regular Cars on the Perception and Acceptance of Self-Driving Cars

Introduction: Negative emotions that driving may engender in some people have also been found to be connected to a greater likelihood of crashes. Removing human error from driving is one of the greatest potential benefits of self-driving cars, as driver error could be directly or indirectly responsible for as many as 94% of all traffic accidents. The rapidly growing population of older drivers may especially benefit from self-driving cars.

Previous work has found that people’s risk and benefit perceptions, as well as trust in the technology, are related to its acceptance. In this research, we specifically examine how people’s feelings around driving traditional cars may affect their perceptions of risk and benefit of and trust in self-driving cars. Further, we investigate how these feelings, perceptions, and trust in turn influence people’s acceptance of the technology.

Laypeople often evaluate the risks and benefits of new technologies differently than experts, and their perceptions of risk are also shaped by their perceptions of benefits the technology may offer. For laypeople, risk perception tends to decrease when benefit perception increases, and vice versa. The characteristics of the technology itself can be captured by two orthogonal dimensions: dread risk and unknown risk. Dread risks include people’s perceptions of the potential for lack of control, catastrophic outcomes, and fatalities. Unknown risks include perceived newness, lack of scientific knowledge, unobservable consequences, and delay of effects. Individual-level factors that affect people’s perceptions of risk include knowledge and affective associations. People’s levels of knowledge about a technology should affect the extent to which they understand both its risks and benefits.

As noted above, affect, in the form of a subtle feeling of positivity or negativity, can serve as a decision heuristic that people use in situations of uncertainty and limited knowledge, known as the affect heuristic. The basis of these feelings is often prior experiences or thoughts related to the decision at hand but it could also be a less relevant emotional state such as current mood.

The nature or valence of the affect plays a role in how it is weighed in judgments. In particular, people tend to attend to or weight negative information or emotions more heavily than positive ones when making evaluations. The affect heuristic also serves as one explanation for the inverse relationship between risk and benefit assessments. If people’s emotional responses are more positive, they tend to judge risks to be lower and benefits to be higher; the more negative people’s affective reactions are, the more likely they are to judge risks to be higher and benefits to be lower. People may be particularly more likely to rely on their affective reactions as a common source to generate both their risk and benefit evaluations when they lack expertise within a given domain.

The affect heuristic suggests that affect shapes people’s willingness to adopt new technologies to the extent that the technology is novel, its performance is uncertain, and its impacts are unknown. Perceived usefulness, or the perceived potential benefits, has been shown in some empirical work to be a more significant factor in explaining adoption than ease of use. Other factors that have been identified as significant for understanding technology adoption include the relevance of people’s previous experiences (including with the technology) and system reliability—the ability of the system to work without failure. Emotion is also a factor. Further, individual characteristics such as age, gender, lifestyle, and comfort levels with different technologies may also affect people’s willingness to adopt new technologies.

Studies have found that people’s degree of acceptance varies by individual characteristics, with younger, male, or more tech savvy people generally more interested in using self-driving cars than older, female, or less tech savvy people. People’s hesitations around the acceptance of automated vehicle technologies may also be tied to their feelings around driving itself, and many people report driving to be positive for them. For example, in a study that compared all levels of automation (from manual [fully human controlled] to fully automated), participants found manual driving the most enjoyable. Yielding control was a major barrier to adoption of self-driving cars among regular commuters (Howard & Dai, 2013). Because self-driving cars represent a fundamental change in the driving task, people’s current feelings about driving traditional vehicles may shape how they assess changes or alternatives to it.

The present study focuses on how feelings experienced while driving influence risk and benefit perceptions as well as trust in self-driving cars and how, in turn, these perceptions affect the acceptance of these vehicles. We approached this question in an exploratory manner and formulated the following research question: How do feelings related to human-operated driving influence risk and benefit perceptions of, as well as trust in, self-driving cars? Note: for participant details and the exact survey questions, see the paper itself.

Results: Higher risk perception was predicted by less experience with vehicle automation technologies, higher levels of positive affect (control), higher levels of negative affect experienced while driving, and being female. Higher benefit perception was related to having fewer years as a driver, greater self-reported knowledge of self-driving cars, more experience with vehicle automation technologies, lower levels of positive affect (control), higher levels of positive affect (enjoyment), higher levels of negative affect, and being male. Trust in self-driving cars was related to having fewer years as a driver, greater self-reported knowledge of self-driving cars, more experience with advanced vehicle technologies, no knowledge of any accidents involving a self-driving car, positive affect (enjoyment) experienced while driving, and being male.

As for interest in using a self-driving car, risk perception, benefit perception and trust were all significant predictors, but benefit perception had the largest effect size among the three.
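The comparison of effect sizes above can be illustrated with a synthetic example. The sketch below constructs hypothetical data in which benefit perception is, by construction, given the largest weight on interest, and then checks which predictor correlates most strongly with interest; the weights and variable names are invented for illustration and are not taken from the paper:

```python
import math
import random

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(42)
n = 200

# Hypothetical standardized predictor scores for each respondent.
risk    = [random.gauss(0, 1) for _ in range(n)]
benefit = [random.gauss(0, 1) for _ in range(n)]
trust   = [random.gauss(0, 1) for _ in range(n)]
noise   = [random.gauss(0, 0.5) for _ in range(n)]

# Interest built with an invented weighting in which benefit dominates,
# mirroring the reported pattern (weights are illustrative, not estimated).
interest = [-0.2 * ri + 0.6 * be + 0.3 * tr + e
            for ri, be, tr, e in zip(risk, benefit, trust, noise)]

effects = {
    "risk":    abs(pearson(risk, interest)),
    "benefit": abs(pearson(benefit, interest)),
    "trust":   abs(pearson(trust, interest)),
}
print(max(effects, key=effects.get))  # the dominant predictor (here: benefit)
```

The study itself used regression on real survey responses; this toy version only shows why comparing standardized effect sizes identifies benefit perception as the strongest of the three predictors.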

Discussion: Our results indicate that feelings experienced while driving regular cars inform people’s risk and benefit perceptions of as well as their trust in self-driving cars. We asked about people’s affective experiences driving traditional vehicles—not self-driving cars; nevertheless, people’s feelings about the more familiar driving of current vehicles carried over to their assessments of self-driving cars. Also, one’s attitudes about the status quo should inform perceptions of change to it. People who experienced high levels of negative affect had both higher risk and higher benefit perceptions of self-driving cars. This is contrary to what we would expect from research on the affect heuristic. Because positive affect is associated with more automatic processing, people who have more positive associations with driving may also be less inclined to deliberate about potential risks associated with self-driving cars.

Our results further underscore the significance of benefit perception for understanding technology acceptance. As self-driving cars are still more conceptual than tangible, their usefulness may not be obvious to many, but so too may the risks of such technologies not be fully understood. As the technology continues to mature and becomes more widely adopted, it may be especially important to communicate to the public about its benefits and risks, so that communities can make better decisions about how they want to use and interact with the technology.

Planning

Week Task 1 Task 2 Task 3 Task 4 Objectives (end of the week)
Week 1 Choose subject Make a planning Collect information Update the wiki-page Subject chosen
Week 2 Define research question Literature research Concrete planning Update the wiki-page Research question specified
Week 3 Literature review Define subtopics Literature study Update the wiki-page Subtopics defined
Week 4 Make survey Plan meetings in smaller groups Write hypothesis Update the wiki-page Survey started
Week 5 Send out survey Contact professors Switch subtopics Update the wiki-page Contact made
Week 6 Analysing survey Make final report Write conclusion/discussion survey Update the wiki-page Survey finished
Week 7 Finish final report Start making the presentation/powerpoint Update the wiki-page Final report finished
Week 8 Peer review Last preparations for presentation Finalize the wiki-page Presentation

Planning per week

Week 1

Name Total [h] Break-down
Laura Smulders 8.5 Meetings [3h], Starting lecture [1h], Research [1h], 5 relevant references [2h], Start/discuss problem statement & objectives [1.5h]
Sam Blauwhof 8.5 Meetings [3h], Starting lecture [1h], Research [1h], 5 relevant references [2h], Start/discuss Approach, Milestones and deliverables [1.5h]
Joris van Aalst 9 Meetings [3h], Starting lecture [1h], Research [2h], 5 relevant references [2h], Start/discuss User part [1h]
Roel van Gool 8 Meetings [3h], Starting lecture [1h], Research [1.5h], 5 relevant references [2h], Check references [0.5h]
Roxane Wijnen 8 Meetings [3h], Starting lecture [1h], Research [1h], 5 relevant references [2h], Start/discuss user requirements [1h]

Week 2

Name Total [h] Break-down
Laura Smulders 7 Meetings [3h], Summarize 5 relevant articles [4h]
Sam Blauwhof 7.5 Meetings [3h], Summarize 5 relevant articles [4.5h]
Joris van Aalst 8 Meetings [3h], Summarize 5 relevant articles [5h]
Roel van Gool 8 Meetings [3h], Summarize 5 relevant articles [5h]
Roxane Wijnen 7.5 Meetings [3h], Summarize 5 relevant articles [4.5h]

Week 3

Name Total [h] Break-down
Laura Smulders 7 Meetings [3h], Problem statement [3h], Update Wiki [1h]
Sam Blauwhof 7.5 Meetings [3h], Safety - traffic behaviour [4.5h]
Joris van Aalst 7.5 Meetings [3h], Perspective of private end-user [4.5h]
Roel van Gool 8 Meetings [3h], Ethical theories [5h]
Roxane Wijnen 7 Meetings [3h], Responsibility [4h]

Week 4

Name Total [h] Break-down
Laura Smulders 12.5 General meetings [2h], Meeting with Sam & Roel [2.5h], Update Wiki [1h], Hypothesis [2h], Planning [1.5h], Literature study [3.5h]
Sam Blauwhof 11 General meetings [2h], Meeting with Laura & Roel [2.5h], Survey with Joris [2.5h], Literature study [4h]
Joris van Aalst 10.5 General meetings [2h], Meeting with Roxane [2h], Survey with Sam [2.5h], Literature study [4h]
Roel van Gool 10.5 General meetings [2h], Meeting with Laura & Sam [2.5h], Research platforms survey [0.5h], Literature study [5.5h]
Roxane Wijnen 8 General meetings [2h], Meeting with Joris [2h], Literature study [4h]

Week 5

Name Total [h] Break-down
Laura Smulders 11 General meetings [2h], Meeting with Sam & Roel [2h], Survey with Roxane & Roel [2.5h], Define relevant factors & Literature study [2h], Update Wiki & Planning [0.5h], Finish survey feedback Raymond Cuijpers [2h]
Sam Blauwhof 8 General meetings [2h], Meeting with Laura & Roel [2h], Literature study [4h]
Joris van Aalst 7 General meetings [2h], Meeting with Roxane [1.5h], Literature study [3.5h]
Roel van Gool 12 General meetings [2h], Meeting with Laura & Sam [2h], Survey with Roxane & Laura [2.5h], Contact with Raymond Cuijpers [0.5h], Literature study [3h], Finish survey feedback Raymond Cuijpers [2h]
Roxane Wijnen 9 General meetings [2h], Meeting with Joris [1.5h], Survey with Laura & Roel [2.5h], Literature study [3h]

Week 6

Name Total [h] Break-down
Laura Smulders 9 General meetings [2h], Review Responsibility [2h], Meeting with Roxane [1h], Methods survey [3h], Update Wiki [0.5h], Update planning [0.5h]
Sam Blauwhof 9.5 General meetings [2h], Review Ethical theories [2.5h], Meeting with Roel [1.5h], Meeting with Joris [1h], Introduction survey [2.5h]
Joris van Aalst 11 General meetings [2h], Review Safety [2h], Meeting with Sam [1h], Research statistics [1.5h], Results survey [4.5h]
Roel van Gool 12.5 General meetings [2h], Privacy [5h], Meeting with Sam [1h], Results survey [4.5h]
Roxane Wijnen 11.5 General meetings [2h], Review Perspective of private end-user [6h], Meeting with Laura [1h], Meeting with Joris [1h], Research statistics [1.5h]

Week 7

Name Total [h] Break-down
Laura Smulders Meetings [],
Sam Blauwhof Meetings [],
Joris van Aalst Meetings [],
Roel van Gool Meetings [],
Roxane Wijnen Meetings [],

Week 8

Name Total [h] Break-down
Laura Smulders Meetings [],
Sam Blauwhof Meetings [],
Joris van Aalst Meetings [],
Roel van Gool Meetings [],
Roxane Wijnen Meetings [],