PRE2020 3 Group11: Difference between revisions

From Control Systems Technology Group
 
<font size = '6'>The acceptance of self-driving cars</font>
<div style="font-family: 'Arial'; font-size: 16px; line-height: 1.5; max-width: 1100px; word-wrap: break-word; color: #333; font-weight: 400; box-shadow: 0px 25px 35px -5px rgba(0,0,0,0.75); margin-left: auto; margin-right: auto; padding: 70px; background-color: white; padding-top: 25px;">
----
 
 
= Group members =
 
{| border=1 style="border-collapse: collapse; width: 60%; height: 14em;"
! style="width: 10em;" | Name
! style="width: 10em;" | Student number
! style="width: 10em;" | Email
|-
| Laura Smulders
| 1342819
| L.a.smulders@student.tue.nl
|-
| Sam Blauwhof
| 1439065
| S.e.blauwhof@student.tue.nl
|-
| Joris van Aalst
| 1470418
| J.v.aalst@student.tue.nl
|-
| Roel van Gool
| 1236549
| R.p.v.gool@student.tue.nl
|-
| Roxane Wijnen
| 1248413
| R.a.r.wijnen@student.tue.nl
|}
 
= Introduction =


----


== Problem statement ==
What are the relevant factors that contribute to the acceptance of self-driving cars for the private end-user?


Self-driving cars are generally believed to be safer than manually driven cars. However, they cannot be 100% safe. Because crashes and collisions are unavoidable, self-driving cars should be programmed to respond to situations where accidents are highly likely or unavoidable (Nyholm & Smids, 2016). Among others, there are three moral problems involving self-driving cars. First, there is the problem of who decides how self-driving cars should be programmed to deal with accidents. Secondly, there is the moral question of who takes the moral and legal responsibility for harms caused by self-driving cars. Lastly, there is the morality of decision-making under risk and uncertainty.


A problem closely associated with the morality of self-driving cars is the trolley problem. For example, in the case of an unavoidable accident where the car has to choose between crashing into a child or into a wall, harming its four passengers, which should the car choose? Once this choice is made, there is also the question of who is morally responsible for harms caused by self-driving cars. Suppose, for example, that there is an accident between an autonomous car and a conventional car. This will not only be followed by legal proceedings; it will also lead to a debate about who is morally responsible for what happened (Nyholm & Smids, 2016).


A lot of uncertainty is involved in the decision-making process of self-driving cars. First, the self-driving car cannot acquire certain knowledge about the trajectory of other road vehicles, their speed at the time of collision, or their actual weight. Second, focusing on the self-driving car itself, in order to calculate the optimal trajectory the self-driving car needs perfect knowledge of the state of the road, since any slipperiness of the road limits its maximal deceleration. Finally, if we turn to the case of an elderly pedestrian in the trolley problem, we can again easily identify a number of sources of uncertainty. Using facial recognition software, the self-driving car can perhaps estimate his age with some degree of precision and confidence, but it can merely guess his actual state of health (Nyholm & Smids, 2016).


The decision-making about self-driving cars is more realistically represented as being made by multiple stakeholders: ordinary citizens, lawyers, ethicists, engineers, risk-assessment experts, car manufacturers, governments, etc. These stakeholders need to negotiate a mutually agreed-upon solution (Nyholm & Smids, 2016). Whatever this mutually agreed-upon solution will be, all parties will have to account for the general acceptance of their implemented solution if they wish for self-driving cars to be successfully deployed. This report will focus on the relevant factors that contribute to the acceptance of self-driving cars, with the main focus on the private end-user. Among other things, it takes into account several ethical theories that could serve as a guideline for the decisions the car has to make: utilitarianism, Kantianism, virtue ethics, deontology, ethical pluralism, ethical absolutism and ethical relativism. Aside from ethical theories, other influences on acceptance will also be treated in this report.


== State-of-the-art/Hypothesis ==


== Survey ==
The developments and advances in the technology of autonomous vehicles have recently brought self-driving vehicles to the forefront of public interest and discussion. In response to the rapid technological progress of self-driving cars, governments have already begun to develop strategies to address the challenges that may result from the introduction of self-driving cars (Schoettle & Sivak, 2014). The Dutch national government aims to take the lead in these developments and prepare the Netherlands for their implementation. The Ministry of Infrastructure and the Environment has opened the public roads to large-scale tests with self-driving passenger cars and trucks. The Dutch cabinet has adopted a bill which in the near future will make it possible to conduct experiments with self-driving cars without a driver being physically present in the vehicle (mobility, public transport and road safety, etc.).
(https://doi.org/10.1016/j.tranpol.2018.03.004)
 
The end-consumers (the actual drivers) will eventually decide whether self-driving cars will successfully materialize on the mass market. However, the perspective of the end-user is not often taken into account, and the general lack of research in this direction is the reason this research is conducted. Therefore, our research question is: "What are the relevant factors that contribute to the acceptance of self-driving cars for the private end-user?" User resistance to change has been found to be an important cause of many implementation problems (Jiang, Muhanna, & Klein, 2000), so it is very likely that the implementation of the self-driving car will not be trivial, as people may be resistant to accepting the new technology. It is likely that a significant percentage of drivers may not be comfortable with fully autonomous driving, as people might experience driving as adventurous, thrilling and pleasurable (Steg, 2005). There is also the question of whether self-driving cars, which make people dependent on the technology, can really be seen as providing the ultimate level of autonomy. The fact that self-driving cars can be tracked continuously could also lead to privacy issues. Another potential barrier towards self-driving cars is the risk of a 'misbehaving computer system': with autonomous vehicles, criminals or terrorists might be able to hack into cars and use them for illegal purposes. Furthermore, the unavoidable rate of failure and crashes could lead to mistrust, especially as people tend to underestimate the safety of technology while putting excessive trust in human capabilities such as their own driving skills (König & Neumayr, 2017).
 
In several recent surveys on the topic of self-driving vehicles, the public has expressed some concern regarding owning or using vehicles with this technology. In a survey of public opinion on autonomous and self-driving vehicles in the U.S., the U.K., and Australia, the majority of respondents had previously heard of self-driving vehicles, had a positive initial opinion of the technology, and had high expectations about its benefits (Schoettle & Sivak, 2014). However, the majority of respondents expressed high levels of concern about riding in self-driving cars, about security issues related to self-driving cars, and about self-driving cars not performing as well as actual drivers. Respondents also expressed high levels of concern about vehicles without driver controls (Schoettle & Sivak, 2014). The survey "Users’ resistance towards radical innovations: The case of the self-driving car" found that people who used a car more often tended to be less open to the benefits of self-driving cars. The most pronounced desire of respondents was the possibility to manually take over control of the car whenever wanted. This indicates that drivers want to be able to decide when to switch to self-driving mode, and to have the option to resume control in situations where they do not trust the technology. In that survey, the most severe concern involving the car and the technology itself was the fear of possible attacks by hackers (König & Neumayr, 2017).
 
This report will focus on the relevant factors that contribute to the acceptance of self-driving cars for the private end-user. A survey was conducted to get more insight into the private end-user of self-driving cars. Based on the literature research and the survey conducted on the topic of self-driving vehicles, these relevant factors are the ethical theories, moral and legal responsibility, safety, privacy, and the perspective of the private end-user.


= Relevant factors =




== Ethical theories ==


A key feature of self-driving cars is that the decision-making process is taken away from the person in the driver’s seat and instead bestowed upon the car itself. From this drastic change several ethical dilemmas emerge, one of which is essentially an adapted version of the trolley problem. When an unavoidable collision occurs, it is important to define the desired behaviour of the self-driving car. It might be the case that in such a scenario, the car has to choose whether to prioritize the life and health of its passengers or of the people outside the vehicle. In real life such cases are relatively rare (Nyholm, 2018; Lin, 2016), but the ethical theory underlying that decision will possibly have an impact on the acceptance of the technology. Self-driving vehicles that decide who might live and who might die are essentially in a scenario where some moral reasoning is required in order to produce the best outcome for all parties involved. Given that cars do not seem to be capable of moral reasoning, programmers must choose for them the right ethical setting on which to base such decisions. However, ethical decisions are not often clear cut. Imagine driving at high speed in a self-driving car, and the car in front comes to a sudden halt. The self-driving car can either brake hard as well, possibly harming the passengers, or it can swerve into a motorcyclist, possibly harming them. This scenario can be regarded as an adapted version of the trolley problem. One could argue that since the motorcyclist is not at fault, the self-driving car should prioritize their safety. After all, the passenger made the decision to enter the car, putting at least some responsibility on them. On the other hand, people who might buy the self-driving car will expect not to be put in avoidable danger.
No matter the choice of the car, and the underlying ethical theory that it is (possibly) based on, it is likely that the behaviour and decision-making of the car has more chance of being socially accepted if it can be morally justified. Therefore, this section first highlights some possible ethical theories, and then discusses some relevant aspects that surround the implementation of any ethical theory.
 


'''Ethical theories under consideration'''
Although there are not a lot of actions a car could take in the above-described scenario, there are a lot of ethical theories that can inform such a decision. The most prominent ethical theories that might prima facie be useful are utilitarianism, deontology, virtue ethics, contractualism, and egoism. These are the ethical theories that will be treated in this section.
Utilitarianism considers the consequences of actions, as opposed to the actions themselves. This means that the correct moral decision or action in any scenario is the one that produces the most good. Although "good" is a subjective term, in most versions of utilitarianism it usually refers to the net happiness or welfare increase for all associated parties (Driver, 2014). Circumstances or the intrinsic nature of an action are not taken into account, as opposed to in deontology. Deontology does not judge the morality of an action based on its consequences, but on the action itself. Deontology posits that moral actions are those actions which are taken on the grounds of a set of pre-determined rules, which hold universally and absolutely. This means that for a deontologist, some actions are wrong or right no matter their outcome.


The third major normative ethical theory is virtue ethics. Virtue ethics emphasizes the virtues, or moral character, as opposed to rules or consequences. Virtues are seen as positive or "good" character traits; examples of such traits are courage and modesty. A moral person should take actions which realize these traits, and therefore moral actions are those which cause a person's virtues to be realized.


Other than the three major classical normative ethics theories, there are two more prima facie relevant theories, the first of which is egoism. Normative egoism posits that the only morally right actions are those that maximize the individual's self-interest. An egoist only considers the benefits and detriments other people experience insofar as those experiences will influence the self-interest of the egoist. Although it may not seem like it, egoism is very similar to utilitarianism, except that utilitarianism focuses on the maximum happiness of all people involved, while egoism focuses only on the maximum happiness of the individual.


The last ethical theory that can be applied to the adapted trolley problem is (social) contractualism. Contractualism does not make any claims about the inherent morality of actions, but rather posits that a moral action is one that is mutually agreed upon by all parties affected by the action. What this agreement should look like exactly differs per version of contractualism: in some versions there must be unanimous consent, while in others a simple majority or a supermajority suffices. A good action is therefore one that can be justified to the other relevant parties, and a wrong action is one that cannot.




'''Ethical theories applied to the adapted trolley problem'''
First, let us apply utilitarianism to the adapted trolley problem. On a micro level, a self-driving car with a utilitarian ethical setting would first want to minimize the number of deaths, and then minimize the total number of severe injuries sustained by all people affected by a collision. This seems simple enough, but there are, among others, two issues with this implementation of a utilitarian setting. If, for instance, the technology is so advanced that it can target people based on whether they are wearing a helmet or not, then it would be safer for the car to collide with a biker wearing a helmet than with a biker who is not wearing one, assuming all else is equal. Now the biker with a helmet is targeted, even though they are the one putting in effort to be safe. This is unfair, and if this is implemented, it is possible some people will stop taking safety measures seriously in order not to be targeted by a utilitarian self-driving car. This would ultimately reduce the overall safety on the road, which is exactly the opposite of what a utilitarian wants. Note that it is unclear whether such recognition technology will even be deployed in self-driving cars, and therefore the question arises whether this is a relevant problem at all. This report does not make claims on the likelihood that such technology will be implemented, but instead assumes it is possible in order to make (ethical) claims on the subject. In reality the technology might not be so precise, but it is better to be prepared for the case that it is.
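The micro-level utilitarian rule described above (first minimize deaths, then severe injuries) can be sketched as a lexicographic minimization. This is only an illustrative sketch: the <code>Outcome</code> type, the maneuver names and the numbers are hypothetical assumptions, not real self-driving-car data.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """Predicted result of one available maneuver (hypothetical numbers)."""
    maneuver: str
    expected_deaths: float
    expected_severe_injuries: float

def utilitarian_choice(outcomes):
    # Lexicographic minimization: deaths dominate severe injuries.
    return min(outcomes, key=lambda o: (o.expected_deaths,
                                        o.expected_severe_injuries))

options = [
    Outcome("brake hard", expected_deaths=0.1, expected_severe_injuries=2.0),
    Outcome("swerve", expected_deaths=0.4, expected_severe_injuries=0.5),
]
print(utilitarian_choice(options).maneuver)  # -> brake hard
```

Braking wins here despite causing more severe injuries, because expected deaths are compared first; only when deaths tie do injuries decide.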
 
The second problem is that although people want other road-users' self-driving cars to adopt a utilitarian setting, they themselves would rather buy cars that give preferential treatment to passengers (Nyholm, 2018; Bonnefon et al., 2016). "In other words, even though participants still agreed that utilitarian AVs were the most moral, they preferred the self-protective model for themselves" (Bonnefon et al., 2016). Therefore, if self-driving cars are only sold with a utilitarian ethical setting, fewer people might be inclined to buy them, again reducing the overall safety on the road.
 
There are multiple possible counters to these two issues that a "true" utilitarian might propose. To counter the first problem, the utilitarian would simply not program the car to make a distinction between people who wear a helmet and those who do not. A distinction would also not be made in similar scenarios, since this solution is not only relevant to cases where helmets are involved. Of course, there are also scenarios where the safer of two options will be chosen by the self-driving car, assuming the same number of people are at risk in both options. The difference between a valid safe choice and an invalid safe choice is that some safety measures are explicitly taken (such as the decision to put on a helmet), while others are more a byproduct of another decision (such as riding a bus versus driving a car). Riding a bus might be safer than driving a car, but most people who are passengers in a bus did not choose to be for safety reasons. They might not have a car, or they might ride the bus out of concern about climate change. Since people in this scenario did not choose to ride a bus for safety reasons, it is likely they will also not stop riding the bus because of a slightly increased chance of being hit by a self-driving car. Of course, this is only a thought experiment, but if it also holds true in practice, then a utilitarian would find it acceptable for the self-driving car to choose the safer option in the bus-versus-car scenario, whereas in the helmet-versus-no-helmet scenario the utilitarian would not find it acceptable to choose the safer option.
 
To counter the second problem, the "true" utilitarian would ultimately want to reduce death and/or harm by reducing the number of traffic accidents. If in practice that means a significant number of people will not buy a self-driving car with a utilitarian setting, then the utilitarian would rather have self-driving cars sold with an egoistic setting that gives passengers preferential treatment. This way, even though an accident involving a self-driving car will be deadlier than with a utilitarian setting, accidents will decrease overall since more self-driving cars will be present.
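The macro-level trade-off above can be illustrated numerically: an egoistic setting makes each self-driving-car accident deadlier, but if it leads to higher adoption, total expected deaths can still drop. Every number below (trip counts, accident rates, fatality rates, adoption shares) is a hypothetical assumption chosen purely to show the shape of the argument.

```python
def expected_deaths(adoption, deaths_per_sdc_accident):
    """Expected yearly road deaths for a given self-driving-car adoption
    share, under purely illustrative (assumed) rates."""
    total_trips = 1_000_000           # assumed trips per year
    human_accident_rate = 1e-4        # assumed accidents per human-driven trip
    sdc_accident_rate = 1e-5          # assumed accidents per self-driven trip
    deaths_per_human_accident = 0.05  # assumed deaths per human-driven accident
    human_trips = total_trips * (1 - adoption)
    sdc_trips = total_trips * adoption
    return (human_trips * human_accident_rate * deaths_per_human_accident
            + sdc_trips * sdc_accident_rate * deaths_per_sdc_accident)

# Utilitarian setting: less deadly per accident, but low adoption.
utilitarian = expected_deaths(adoption=0.2, deaths_per_sdc_accident=0.05)
# Egoistic setting: deadlier per accident, but high adoption.
egoistic = expected_deaths(adoption=0.8, deaths_per_sdc_accident=0.10)
print(utilitarian > egoistic)  # -> True under these assumed numbers
```

Under these assumptions the egoistic setting yields fewer total deaths, which is exactly the "true" utilitarian's reason for tolerating it; with different assumed rates the comparison could of course flip.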
 
There are more problems with a utilitarian approach to self-driving cars, but they are unrelated to the two micro-versus-macro utilitarian problems just treated. One of these problems has to do with discrimination. In an unavoidable-collision scenario where the self-driving car has to hit either an adult man or a child, the adult has a greater chance of survival. Is the car therefore justified in choosing the man? A utilitarian would say the car is indeed justified, unless this decision has been found to turn consumers away from purchasing and using self-driving cars. Prima facie this does not seem to be the case, but as far as we could find there is no major literature on this topic that gives a definitive or exploratory answer. Mercedes did announce that their self-driving cars would prioritize passengers over bystanders, but this was met with heavy backlash, causing Mercedes to retract the statement (Nyholm, 2018). In some countries, such as Germany, it has already been made law that this type of discrimination based on age, gender or type of road-user in self-driving cars is illegal (Adee, 2016). Once again, as in the helmet example, it might be the case that self-driving cars will not be equipped with such precise recognition software, in which case the above-described ethical problem is not relevant. Still, it is good to be prepared for the case that it is.
 
The deontological ethical setting would not allow a choice to be made that explicitly harms or kills a person, no matter the potential number of lives saved. Therefore, when faced with an unavoidable (possibly) deadly collision, the car would simply not make a decision at all, and events would play out "naturally". In essence, this makes the actual "chosen" collision somewhat random. As in the original trolley problem, the moral entity, in this case the car (or more accurately, the programmer who programs the ethics into the car), would simply not intervene at all. Deontologists are of the opinion that there is a difference between doing and allowing harm, and by not letting the car intervene in an unavoidable accident, both the passenger(s) and the programmer(s) are absolved of any moral responsibility. Some people might be happy with such a setting, since many people could not fathom being (morally) responsible for the deaths of others. By entering a self-driving car with a utilitarian ethical setting, the passenger(s) cannot be absolved of some moral responsibility in the case of an accident, since they made a conscious decision to buy a car that has been programmed to make explicit decisions. The same cannot be said of passengers who enter a deontological self-driving car. Prima facie it seems likely that people do not want to be morally responsible in the case of an accident, and implementing a deontological ethical setting might therefore help acceptance of the technology.
 
A virtue ethics response to the adapted trolley problem is very hard to come up with. An ethical setting based on virtue ethics would want the car to make a decision that improves the virtues of the moral entity. Therefore, the decision that the car makes depends on which virtue we would want to improve. Take for instance bravery. One could posit that it is brave to take on danger to yourself if it means that other people will be safer for it. If we assume the moral entity or entities to be the passenger(s), then the self-driving car would always choose to put the passengers in danger, since this would improve their bravery. There are two problems with this approach. Firstly, it is hard to optimize any decision the car makes, since it is impossible to find a decision that always improves on all virtues. Moreover, what are those virtues in the first place? Is it, for instance, virtuous to sacrifice yourself if you leave behind a family? Secondly, since the car is not actually a moral agent, whose virtues should the car's decision improve upon: the programmers' or the passengers'? This is unclear. If the programmers' virtues should be improved, then it seems prima facie extremely unlikely that people would be willing to buy cars that might sacrifice them to improve the virtue(s) of a programmer they never met. If the passengers' virtues should be improved, then people might be slightly more sympathetic, but even then we assume most people do not want to sacrifice their lives to improve upon an abstract notion of virtue and morality.
 
If we take the perspective of self-driving car buyers and users, the ethical egoist response is to prioritize the lives of the passengers above all else. As said in the utilitarian part of this section, people who buy and use the car seem to prefer a self-driving car that always puts their own lives above others'. This setting could also possibly be regarded as the setting of a "true" utilitarian. There is another possible benefit to this ethical setting, namely that it is more predictable. If self-driving cars become very prevalent, any self-driving car must always account for the decisions other self-driving cars are making. Therefore, if all self-driving cars prioritize themselves, their road behaviour becomes more predictable to other self-driving cars. However, this argument is theoretical in nature, and there are some game theorists who do not agree (Tay, 2000). The moral argument against ethical egoism is that it seems, and indeed is, incredibly selfish. An ethical egoist might sacrifice hundreds of lives to save themselves. However, a "true" ethical egoist is not always extremely selfish, since extremely selfish behaviour is not tolerated by others. A "true" ethical egoist would therefore also consider the feelings of other people, since their thoughts and decisions may influence the reward the egoist may get out of any given situation. In the case of unavoidable (deadly) accidents, however, no matter the feelings of others, an egoist who values their own life above all else will not care about them, since there can be nothing more important, now or in the future, than their own life.
 
Up until now we have considered only the perspective of buyers and users of self-driving cars, but the actual moral agent is the programmer (or a collection of people in the company that employs the programmer). Their egoist response would be based on how often they plan to use the self-driving car for which they design the software. If they do not plan to use it at all, then the ethical egoist response of the programmer would be to implement a utilitarian ethical setting, since the programmer will on average be safer. If, however, they plan to use the self-driving car a lot, then the ethical egoist response is to implement an ethical setting that prioritizes the passenger. However, since this report is mostly concerned with the perspective of the end-user, this perspective is ultimately not very relevant.
A contractualist ethical setting is one that is agreed upon by all relevant parties. Unanimous consent seems impossible to get, so in practice this would probably be a simple democratic vote, in which an arrangement of ethical settings, or a combination of ethical settings, is proposed. Each potentially affected person can vote on these settings, and the democratic winner(s) will be implemented. The tough question is: who is affected by the decisions of a self-driving car? Self-driving cars can potentially drive across whole continents; from Portugal to China, or from Canada to Argentina. If the decisions of these self-driving cars can influence events in multiple countries, should people in all those countries be part of the decision-making process? If so, should there be a global vote on the specific ethical settings that can be implemented? Or, if the vote is done nationally, does that mean the ethical setting of a car must be changed when the self-driving car enters a country whose citizens voted on a different ethical setting? In practice this seems very difficult to implement. If any of these contractualist ethical settings are practically possible, then this setting almost completely solves the responsibility aspect of self-driving cars: if all relevant parties can vote, then society as a whole can be held ethically and legally responsible. Since responsibility might be one of the factors that contribute to the acceptance of self-driving cars, having a realistic solution to the issue of responsibility will likely positively impact public perception of self-driving cars.
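The democratic variant of the contractualist setting described above can be sketched as a plurality vote over proposed ethical settings. The setting names and the ballots below are hypothetical assumptions, purely for illustration.

```python
from collections import Counter

def contractualist_setting(votes):
    """Plurality vote: the ethical setting with the most votes wins.
    Returns the winner and its share of the total vote."""
    tally = Counter(votes)
    winner, count = tally.most_common(1)[0]
    return winner, count / len(votes)

# Hypothetical ballots cast by potentially affected parties.
votes = ["utilitarian", "egoistic", "utilitarian",
         "deontological", "utilitarian", "egoistic"]
setting, share = contractualist_setting(votes)
print(setting, share)  # -> utilitarian 0.5
```

Note that a plurality winner need not command a majority; versions of contractualism requiring a supermajority would add a threshold check on the returned share before accepting the result.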
 
'''Should the user decide the ethical setting, and should all cars have the same setting?'''
It is clear that there is no ethical setting that is perfect for every scenario. For various reasons, some authors advocate for people being able to choose their own ethical settings. One can imagine an "ethical knob" with different programmable ethical settings: a scale from altruistic to egoistic, with an impartial setting in the middle. There might even be a deontological setting which does not intervene in unavoidable accidents. There are several reasons to implement such an ethical knob. People might want to be able to buy cars that mirror their own moral mindset; Millar (2014) observes that self-driving cars can be regarded as moral proxies, which implement moral choices. Implementing an ethical knob also makes it easier to assign responsibility to someone in the case of an unavoidable accident (Sandberg & Bradshaw‐Martin, 2013; Lin, 2014), since the passengers of the car have explicitly chosen the decision of the car. This might impact acceptance of the technology both positively and negatively. Prima facie, it seems that people who want to buy self-driving cars might want to be able to choose their own ethical setting, but it is unknown whether people would still want to choose their preferred ethical setting if this makes them legally and/or morally responsible. Also, other road users might not accept self-driving car passengers choosing their own ethical setting, since it is likely they will choose an egoistic setting, which negatively impacts those other users' road experience. This is especially true if the car is equipped with an "extremely egoistic" setting in which the passenger's life is worth considerably more than other people's lives. It seems likely people will not accept a self-driving car making such decisions, so perhaps manufacturers will limit how far towards egoistic the ethical knob can be turned.
Such ethical settings would likely be very unpopular, perhaps even among people who might benefit from an extremely egoistic setting. Indeed, surveys have already shown that people generally want other self-driving cars to reduce overall harm (Bonnefon et al., 2016). An (extremely) egoistic ethical setting is the direct opposite of such a utilitarian setting.
 
The same can be said for an ethical knob that can not only be turned by the user to fit their moral convictions, but can even be modified by the user to fit other kinds of preferences. An ethical knob that is able to discriminate on gender or race might be technologically possible to make, but users should not be allowed to let their self-driving cars be racist or sexist. Discrimination based on race or sex is illegal in many countries, so these ethical settings, if even possible to implement, will likely be outlawed anyway, as Germany has already done. To gauge which kinds of settings are regarded as unacceptable, a contractualist might propose a democratic vote. The free choice of people to configure their own self-driving car would then be limited by the democratic choice of all relevant road users. Such an arrangement might prove to be an acceptable middle ground between no ethical knob and a completely customizable one. However, whether this would actually lead to greater acceptance of the technology than the other two options has not been settled or much explored in the academic literature.
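The knob described above can be modelled as a single weight that trades off expected harm to the passengers against expected harm to others. The sketch below is a hypothetical illustration; the option names and harm numbers are invented, not taken from any real system:

```python
def expected_weighted_harm(option, knob):
    """Score a crash-avoidance option under an 'ethical knob' setting.

    `knob` is in [0, 1]: 0 = fully altruistic (only harm to others
    counts), 0.5 = impartial, 1 = fully egoistic (only passenger harm
    counts). `option` holds illustrative expected-harm values.
    """
    return knob * option["passenger_harm"] + (1 - knob) * option["other_harm"]

def choose_option(options, knob):
    """Pick the option with the lowest knob-weighted expected harm."""
    return min(options, key=lambda o: expected_weighted_harm(o, knob))

# Two invented options in an unavoidable-accident scenario:
protect_passenger = {"name": "protect_passenger", "passenger_harm": 0.1, "other_harm": 0.9}
minimize_total = {"name": "minimize_total", "passenger_harm": 0.6, "other_harm": 0.2}

print(choose_option([protect_passenger, minimize_total], knob=0.5)["name"])  # minimize_total
print(choose_option([protect_passenger, minimize_total], knob=0.9)["name"])  # protect_passenger
```

A manufacturer-imposed cap on how egoistic the knob may be would simply restrict the allowed range of `knob`, e.g. to [0, 0.7].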


== Responsibility ==


Although automated vehicles seemed a distant future a mere twenty years ago, they are becoming a reality right now. For some years, companies such as Google have run trials with automated vehicles in actual traffic situations and have driven millions of kilometers autonomously. Between December 2016 and November 2017, for example, Waymo's self-driving cars drove about 350,000 miles, and a human driver retook the wheel 63 times. This is an average of about 5,600 miles between disengagements. Uber has not been testing its self-driving cars in California long enough to be required to release its disengagement numbers (Wakabayashi, 2018). Though this research has been ground-breaking, there have also been some incidents in the past years. In 2016 a Tesla driver was killed while using the car’s autopilot because the vehicle failed to recognize a white truck (Yadron & Tynan, 2016). In 2018 a self-driving Volvo in Arizona collided with a pedestrian who did not survive the accident; it is believed to be the first pedestrian death associated with self-driving technology. When an Uber self-driving car and a conventional vehicle collided in Tempe in March 2017, city police said that extra safety regulations were not necessary, as the conventional car was at fault, not the self-driving vehicle (Wakabayashi, 2018).
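The disengagement average quoted above follows directly from the two reported figures:

```python
miles_driven = 350_000   # Waymo, Dec 2016 - Nov 2017 (Wakabayashi, 2018)
disengagements = 63      # times a human driver retook the wheel

miles_per_disengagement = miles_driven / disengagements
print(round(miles_per_disengagement))  # 5556, i.e. roughly 5,600 miles
```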
 
One very important factor in the development and sale of automated vehicles is the question of who is responsible when things go wrong. In this section we will look in detail at all factors involved and come up with some solutions. As brought up by Marchant and Lindor (2012), there are three questions that need to be analysed. Firstly, who will be liable in the case of an accident? Secondly, how much weight should be given to the fact that autonomous vehicles are supposed to be safer than conventional vehicles in determining who of the involved people should be held responsible? Lastly, will a higher percentage of crashes be caused because of a manufacturing ‘defect’, compared to crashes with conventional vehicles where driver error is usually attributed to the cause (Marchant & Lindor, 2012)?
 
'''Current legislation'''
 
If we look at how responsibility works for conventional vehicles, we find that it is usually attributed to the driver for failing to obey traffic regulations (Pöllänen, Read, Lane, Thompson, & Salmon, 2020). This can be as small and common as driving too fast or losing attention for a fraction of a moment, something nearly everyone is guilty of at some point. Usually this does not matter, but sometimes it leads to catastrophic results, and in that moment of misfortune the driver is still held responsible. As Nagel (1982) theorized, between driving a little too fast and killing a child that unexpectedly crosses the street, and driving a little too fast with no child present, there is only bad luck. The consequence, however, is vast for the child, but also for the driver (Nagel, 1982). This reasoning could also be applied to automated vehicles: if an accident happens, it is just bad luck for the driver, and he will without doubt be liable. However, given that this depends on luck, and that most autonomous vehicles allow for limited or no control, this option is not considered a plausible one (Hevelke & Nida-Rümelin, 2015).
 
'''Blame attribution'''
 
A couple of studies have shown that the level of control is crucial in blame attribution. McManus and Rutchick (2018) showed that people attribute less blame to a driver in a fully automated vehicle than in a situation where the driver selected a different algorithm (e.g. to behave selfishly) or drove manually. Another study (Li, Zhao, Cho, Ju, & Malle, 2016) investigated blame attribution between the manufacturer, government agencies, the driver and pedestrians. They found that blame is reduced for drivers when the vehicle is fully autonomous, whereas the blame for the manufacturer or government agencies increases.
 
'''The manufacturer'''
 
It would be obvious to say the manufacturer of the car is responsible. They designed the car, so if it makes a mistake, they are to blame. However, there are different types of defects in the manufacturing process. Firstly, there is a defect in manufacturing itself, where the product did not end up as it was supposed to, even though the rules were followed with care. This error is very rare, since manufacturing these days is done with a very low error rate (Marchant & Lindor, 2012). A second type of defect lies in the instructions: the manufacturer fails to adequately instruct and warn the consumer. A third defect, and the most significant for autonomous vehicles, is that of design. This holds that the risks of harm could have been prevented or reduced with an alternative design (Marchant & Lindor, 2012).


The manufacturer knew, or could have known, about any flaw in the system that might cause the car to crash. If they then sold the car anyway, there is no question that they are responsible. However, holding the manufacturer responsible in every case would immensely discourage anyone from producing these autonomous cars. Especially with technology as complex as autonomous driving systems, it would be nearly impossible to make it flawless (Marchant & Lindor, 2012).
 
In order to encourage manufacturers to produce autonomous vehicles while still holding them responsible, a balance needs to be found between these two aims. This is necessary because removing all liability would also result in undesirable effects (Hevelke & Nida-Rümelin, 2015).
'''Semi-autonomous vehicles'''
 
As stated above, there have been studies on blame attribution in fully autonomous vehicles and in those with certain pre-selected algorithms. A semi-autonomous vehicle (with a duty to intervene) has not been discussed yet. A good analogy for a semi-autonomous vehicle is an auto-piloted airplane: the plane flies itself, though it is the responsibility of the pilot to intervene when something goes wrong (Marchant & Lindor, 2012). So it could be suggested that, in case of an accident, the driver of the vehicle is held responsible. If the car is designed in such a way that the driver has the ability to take over and intervene, this could indeed be used in an argument against the driver. There is, however, a question of what the utility of automated vehicles is if they are designed like this. After all, when the driver has a duty to intervene, the vehicle can no longer be summoned when needed, and it can no longer be used as a safe ride home when drunk or tired (Howard, 2013). However, as long as the vehicles still reduce accidents overall, they would be a better option than conventional vehicles, whether the driver has a duty to intervene or not (Hevelke & Nida-Rümelin, 2015). The accident rate might even drop further when the driver does have a duty to intervene, since the driver can step in when, for example, they see something the car does not. It would also mean a more gradual transition when introducing automated vehicles, instead of them suddenly being fully automatic.
 
On the other hand, asking the driver to intervene in a fully automated vehicle is questionable. It assumes that the driver can intervene at all times, which is not always the case due to human limitations in reaction time and danger anticipation (Hevelke & Nida-Rümelin, 2015). It would be difficult to recognize whether the automated vehicle is going to fail to respond correctly, and thus unclear when the driver needs to intervene. In such cases it would be unrealistic to expect the driver to predict a dangerous situation. Another problem may also arise: the driver might intervene when they should not have, resulting in an accident (Douma & Palodichuk, 2012). Moreover, as argued by Hevelke & Nida-Rümelin (2015), it seems unreasonable to ask a driver to pay attention at all times in order to be able to intervene, while actual accidents are quite rare. All in all, it would be unreasonable to put responsibility on a driver who did not – or could not – intervene.
 
'''Shared liability'''
 
As previously discussed, the responsibility for an accident can be placed on the individual driving the autonomous vehicle, but for a number of reasons this is not ideal. An alternative would be to create a shared liability. People who drive cars every day (especially when not necessary) take the risk of possibly causing an accident; they still make the choice to drive the car (Husak, 2004). This thinking can be extrapolated to the use of automated vehicles. If people choose to drive an automated vehicle, they in turn participate in the risk of an accident happening due to the autonomous vehicle. The responsibility for an accident is therefore shared with everyone else in the country who also uses automated vehicles. In that sense the driver did not do anything wrong and did not intervene too late; they simply shoulder the burden with everyone else. A system that could work with this line of thinking is the introduction of a tax or mandatory insurance (Hevelke & Nida-Rümelin, 2015).
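The tax or mandatory insurance scheme mentioned above amounts to pooling the expected accident costs of the whole fleet over all of its users. A toy version of that calculation, with all figures invented:

```python
def per_user_levy(expected_annual_accident_cost, num_users):
    """Spread the expected accident cost of the whole fleet evenly
    over every registered user (a flat-rate risk pool)."""
    return expected_annual_accident_cost / num_users

# Hypothetical: 50 million euros of expected damages, 2 million users.
print(per_user_levy(50_000_000, 2_000_000))  # 25.0 euros per user per year
```

A real scheme would likely weight the levy by mileage or vehicle type rather than charging a flat rate.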
 
So, it seems there are a couple of options. The manufacturer can be fully responsible; however, this could bring autonomous vehicle manufacturing to a halt. On the other hand, it is desirable that the manufacturer carries some liability, so that they keep investing in improving the vehicle. At the same time, giving the driver full responsibility only seems workable in the beginning phase of autonomous vehicles, when they are still in development and drivers really do have a duty to intervene. When the vehicles are more sophisticated and able to drive fully autonomously, the responsibility can be shared among all users through a tax or insurance.


== Safety ==
One of the main factors deciding whether self-driving cars will be accepted is their safety. After all, who would put their life in the hands of another entity, knowing it is not completely safe? Yet almost everyone boards buses and planes without doubt or fear. Would we be able to do the same with self-driving cars? Cars have become more and more autonomous over the last decades. Furthermore, self-driving cars will operate in unstructured environments, which adds many unexpected situations (Wagner et al., 2015).
 
 


'''Software'''
The car's safety will be determined by the way it is programmed to act in traffic. Will it stop for every pedestrian? If it does, pedestrians will know this and might cross roads wherever they want. Furthermore, will it adopt the driving style of humans? And how does the driving style of automated vehicles influence trust and acceptance?


'''Traffic behaviour'''
According to the research of Elbanhawi, Simic and Jazar (2015), two factors are relevant for driving comfort: naturalness and apparent safety. The relationship between these two factors can be seen as operating between so-called safety margins (Summala, 2007).


In one study, two different designs were presented to a group of participants. One was programmed to simulate a human driver, whilst the other communicated with its surroundings in a way that allowed it to drive without stopping or slowing down. The study showed no significant difference in trust between the two automated vehicles. However, it did show that trust grew the longer the study continued (Oliveira et al., 2019). This suggests that the driving behavior itself does not necessarily influence trust; rather, the overall safety of the driving behavior determines it.


A driving style similar to that of humans may nevertheless benefit acceptance. For example, the car should be able to mimic a human driving the car (Elbanhawi, Simic & Jazar, 2015). This may reduce hesitation towards self-driving cars and lead to more people driving one (Hartwich, Beggiato & Krems, 2018). However, research conducted by Liu, Wang & Vincent (2020) concluded that people want self-driving vehicles to be at least four to five times as safe as human-driven vehicles. So, although people would like them to drive human-like, the risks shouldn’t be human-like. This could be explained by the fact that legal problems would be more complicated when an accident occurs, and that safety is a major advantage of self-driving cars. If people don’t have that advantage, they may rather enjoy the pleasures of driving themselves.
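The "four to five times as safe" requirement found by Liu, Wang & Vincent (2020) can be made concrete with a back-of-the-envelope calculation. The baseline human fatality rate used below is illustrative, not a measured figure:

```python
human_fatalities_per_100m_miles = 1.1   # illustrative baseline rate, not a measured figure
required_safety_factor = 4.5            # midpoint of the 4-5x finding (Liu et al., 2020)

# The highest fatality rate a self-driving fleet could have and still
# meet the public's demanded safety margin:
max_acceptable_av_rate = human_fatalities_per_100m_miles / required_safety_factor
print(round(max_acceptable_av_rate, 2))  # 0.24 fatalities per 100 million miles
```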




'''Errors'''


Despite what we might think, humans are quite capable of avoiding car crashes. A computer, on the other hand, will inevitably crash from time to time; think of how often your laptop freezes. A response that is a millisecond too slow can have disastrous consequences, so software for self-driving vehicles must be made fundamentally different. This is one of the major challenges currently holding back the development of fully automated cars. By contrast, automated air vehicles are already in use. However, software on automated aircraft is much less complex, since aircraft have to deal with fewer obstacles and almost no other vehicles (Shladover, 2016).




'''Cybersecurity'''


The software driving a fully autonomous vehicle will have more than a hundred million lines of code, so it is impossible to predict all of its security problems. Windows 10 consists of fifty million lines of code and has had plenty of bugs; doubling the amount of code will result in an even higher probability of unknown vulnerabilities (Parkinson et al., 2017). Much of this complexity comes from the fact that self-driving cars have to be interconnected to make use of their most beneficial features: self-driving cars will be much better able to react to each other and plan movements ahead if they receive data from other cars through a network. Straub et al. (2017) presented a plan to protect against attacks.
 
To let cars react to each other appropriately and efficiently, Cooperative Adaptive Cruise Control (CACC) is used (Amoozadeh et al., 2015). This technology lets cars send information to other cars so that they can adapt to the movements and speed changes of other cars. As mentioned above, this technology comes with security risks. There are multiple kinds of attacks: application layer attacks, network layer attacks, system level attacks and privacy leakage attacks. Application layer attacks can influence applications such as CACC beaconing; they could degrade the efficiency with which cars react to each other, or falsify messages, which could result in rear-end collisions. Network layer attacks could make using the network impossible for cars, so that CACC stops working altogether; DDoS attacks are an example. System level attacks, on the other hand, don’t use CACC or vehicle-to-vehicle communication; these could be carried out when a person installs malicious software. Privacy leakage attacks, finally, are well known and topical: the theft of data that should only be available to the user and perhaps the manufacturer (Amoozadeh et al., 2015).
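As a minimal defence against the falsified CACC beacons described above, a following vehicle could sanity-check each received message against physical limits before acting on it. The sketch below is an illustrative filter only, not a description of any deployed system; the acceleration bound is an assumption:

```python
def plausible_beacon(prev, curr, max_accel=8.0):
    """Reject CACC beacons that imply physically impossible acceleration.

    `prev` and `curr` are (timestamp_s, speed_m_s) pairs received from
    the same leading vehicle. Returns False when the implied acceleration
    exceeds `max_accel` m/s^2, a rough assumed bound for passenger cars,
    or when timestamps go backwards (a replayed/reordered message).
    """
    dt = curr[0] - prev[0]
    if dt <= 0:
        return False  # replayed or reordered message
    implied_accel = abs(curr[1] - prev[1]) / dt
    return implied_accel <= max_accel

print(plausible_beacon((0.0, 25.0), (0.1, 25.3)))  # True: ~3 m/s^2, plausible
print(plausible_beacon((0.0, 25.0), (0.1, 10.0)))  # False: spoofed hard stop
```

A filter like this only mitigates crude application-layer falsification; authenticated messaging would still be needed against more careful attackers.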
 
 
'''Versus humans'''
 
Self-driving cars hold the potential of eliminating all accidents, or at least those caused by inattentive drivers (Wagner et al., 2015). Research done by Google suggests that the Google self-driving cars are safer than conventional human-driven vehicles. There is insufficient information to draw a firm conclusion, but the results suggest that highly autonomous vehicles will be safer than human drivers in certain conditions. This does not mean that there will be no car crashes in the future, since these cars will continue to be involved in crashes with human drivers (Teoh et al., 2017).




'''The city'''
The city is probably one of the most complicated locations for a self-driving car to operate in. It is filled with vulnerable road users, such as pedestrians and cyclists, who are relatively hard to track. Therefore, freeways are likely to be the first spaces in which automated cars will be able to operate: a much more structured environment with simple rules and fewer unexpected situations. However, this will not solve the issue of traffic jams at popular destinations. Some might say the ambition is to allow cars, bikes and pedestrians to share road space much more safely, with the effect that more people will choose not to drive. However, an interesting question regarding this is raised by Duranton (2016): "If a driverless car or bus will never hit a jaywalker, what will stop pedestrians and cyclists from simply using the street as they please?"
Image-tracking information could be used to predict the movements of, for example, a pedestrian or a cyclist. This way, the car doesn’t have to stop for every pedestrian on the sidewalk (Sarcinelli et al., 2019). Still, it doesn’t fix the above-mentioned problem.
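A minimal version of the movement prediction mentioned here is a constant-velocity extrapolation from two tracked positions; production systems use far richer models than this (Sarcinelli et al., 2019). All coordinates below are invented:

```python
def predict_position(p0, p1, dt, horizon):
    """Extrapolate a tracked object's position assuming constant velocity.

    `p0` and `p1` are (x, y) positions observed `dt` seconds apart;
    `horizon` is how many seconds past `p1` to predict.
    """
    vx = (p1[0] - p0[0]) / dt
    vy = (p1[1] - p0[1]) / dt
    return (p1[0] + vx * horizon, p1[1] + vy * horizon)

# Pedestrian walking parallel to the kerb (y = 0 is the sidewalk edge):
# the prediction keeps them on the sidewalk, so the car need not brake.
print(predict_position((0.0, 0.0), (1.2, 0.0), dt=1.0, horizon=2.0))
```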
Millard-Ball (2016) suggests pedestrian supremacy in cities. He agrees that autonomous vehicles will drive cautiously, and therefore slowly, in cities. As a result, people will walk more often in cities, because walking will become the faster alternative. Travelling between cities will be done by autonomous vehicles, but people will exit the vehicle at a peripheral part of town before walking to the center. This is not necessarily negative, just a change of culture. Google acknowledges this problem and states that if Google cars cannot operate in existing cities, perhaps new cities need to be created. Indeed, this has happened in the past: the first suburbs of America were developed by rail entrepreneurs who realized that developing suburbs was much more profitable than operating railways (Cox, 2016).
We might also need to reconsider which technologies we actually need in urban transport. Rather than developing individualist self-driving cars, we could look at the ‘technology of the network’: how can we connect more people without consuming the space we live in (Duranton, 2016)?




'''Trust'''


For decades, we have trusted the safe operation of automated mechanisms around, and even inside, us. In the last few years, however, the autonomy of these mechanisms has drastically increased. As mentioned before, this brings along quite a few safety risks. Questions of whether to trust a new technology are often answered by testing (Wagner et al., 2015).


A survey has been conducted on trust in fully automated vehicles, where trust was defined as “the attitude that an agent will help achieve an individual’s goal in a situation characterized by uncertainty and vulnerability” (Lee & See, 2004). Sixty percent of the respondents reported having difficulties trusting automated vehicles. Trust in this context can be seen as the driver’s belief that the computer drives at least as well as a human.


Trust is not yet at the level needed to fully implement these technologies. We know that trust can build up over time, and this is also the case with trusting self-driving cars. The hesitation is greatest among the elderly, who are also the generation that stands to gain a lot of the benefits. The good news from this research is that half of the older adults reported being comfortable with the concept of tools that help the driver. The number of such tools can grow while the driver or passenger gets used to the idea of a completely self-driving car (Abraham et al., 2016).


== Privacy ==
Self-driving cars rely on an array of new technologies in order to traverse traffic. Some of these technologies have to take data from the environment and/or the people in the car, which can have a big effect on the privacy of both the users of the car and the people around it. Since fully autonomous cars are not yet on the market, and have not even been built yet, it is unclear how significant the privacy issues associated with self-driving cars might be. At a minimum, location tracking seems necessary for a self-driving car to function correctly (Boeglin, 2015). This kind of location tracking is already prevalent in mobile phones, and the privacy issues that accompany it are well known (Minch, 2004). In fact, car GPS systems already suffer from this problem: the car can save specific locations, has to plan routes based on the current location, and has to access current traffic data. If anyone were to access this information, they would essentially obtain a record of the movements of a person, and also of the activities associated with the destinations. If one knows that the user of the self-driving car visited a psychiatrist, or an abortion clinic, then one can also make an educated guess about what the user is going through in their life.
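To illustrate how revealing a bare location log can be, the sketch below matches raw GPS points against a list of known sensitive places. The place names, coordinates and matching radius are all invented for illustration:

```python
def infer_visits(trace, places, radius=0.001):
    """Match raw (lat, lon) points against a list of known places.

    `places` is a list of (name, lat, lon) tuples. Returns the names of
    places within `radius` degrees of any point in `trace`, showing how
    a bare location log can reveal sensitive visits.
    """
    visits = []
    for lat, lon in trace:
        for name, plat, plon in places:
            near = abs(lat - plat) <= radius and abs(lon - plon) <= radius
            if near and name not in visits:
                visits.append(name)
    return visits

# Invented places and an invented location trace:
places = [("psychiatrist", 51.4475, 5.4850), ("supermarket", 51.4410, 5.4780)]
trace = [(51.4476, 5.4851), (51.4300, 5.5000)]
print(infer_visits(trace, places))  # ['psychiatrist']
```

Even this crude matching recovers a sensitive visit from two raw coordinates, which is precisely the concern raised above.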


Besides these personal concerns that come from location tracking, there are also commercial concerns. The company that collects the data might use the location data of the car to infer personal information about the users, and use this information for marketing purposes. We already know that this is possible, since it often happens with mobile phone location tracking: if a mobile phone user visits a store that sells some product, Google might use this data to send personalized advertisements to the user. The same could happen with self-driving cars.
According to a paper by Boeglin (2016), whether a vehicle is likely to impose on its passengers' privacy can largely be reduced to whether or not that vehicle is communicative. A communicative vehicle relays vehicle information to third parties or receives information from external sources, and a vehicle that is more communicative will be more likely to collect information. Communicative vehicles could take a number of forms, so it is hard to gauge how severe the associated privacy risks will be. One kind of communicative self-driving car is a car that exchanges data between itself and other self-driving cars; both cars can use this data for risk mitigation or crash avoidance. Wireless networks are particularly vulnerable, according to Boeglin (2016). When self-driving cars become more prevalent, they might also be able to communicate with roads or road infrastructure (traffic lights or road sensors) to exchange data that will make both parties more effective. As a result, the traffic authority (e.g. the municipality) will also have access to the records of each self-driving car. Whether people will accept this remains to be seen, and not a lot of research has been done on this subject. One paper that does explore the general public's opinion on privacy in self-driving cars finds that a majority of people would want to opt out of identifiable data collection, and that secondary uses such as recognition, identification, and tracking of individuals were associated with low likelihood ratings and high discomfort (Bloom et al., 2017).
Not all self-driving cars currently in development are communicative, partly because the infrastructure to support such cars does not exist yet. Privacy risks for non-communicative cars are less prevalent, but not nonexistent. Location tracking will always be an issue, and uncommunicative self-driving cars will still be heavily reliant on sensory data in order to get to the desired destination. This sensory data might still be hacked, but hacking threatens privacy in almost any connected system; self-driving cars are hardly a special case in that regard.
It is largely unclear how users will react to the potential risks to their privacy, since this is a newly emerging technology, and issues such as safety, decision-making and autonomy are usually more pressing. We expect that people will not rate privacy as a large concern, and will instead be more concerned with the aforementioned issues. This is especially the case for uncommunicative self-driving cars, which are currently more prevalent than communicative ones. We also expect that people largely think of uncommunicative rather than communicative self-driving cars, since communicative cars are a step further into the future. This probably lowers the perceived level of privacy risk among users even more.


== Perspective of private end-user  ==


The revolutionary change that self-driving cars could bring about would affect many areas of life. Apart from improving safety, efficiency and general mobility, it would change current infrastructure and the relationship between humans and machines (Silberg et al., 2012). This section focuses primarily on the user’s attitude towards self-driving cars, specifically perceived benefits and concerns.
 
According to the National Highway Traffic Safety Administration, cars are currently at ‘level 3 automation’, in which new cars have automated features but still require an alert driver to intervene when necessary. ‘Level 4 automation’ would mean that a driver is no longer expected to intervene (Cox, 2016). Before this level can be reached, the general public needs to feel comfortable with letting go of the steering wheel.
 
'''General attitude'''
 
A study by König & Neumayr (2017) showed that older people are generally more worried about self-driving cars. It also showed that women have more concerns than men, and that rural citizens are less interested in self-driving cars than urban citizens (König & Neumayr, 2017). Surprisingly, people who used their car more often seemed less open to the idea of a self-driving car, possibly because the change to self-driving cars would be too radical for them. Furthermore, the most common desire is the ability to manually take control of the car: it allows people to still enjoy the pleasures of manual driving without losing their sense of freedom (Rupp & King, 2010).
 
Another interesting finding by König & Neumayr (2017) was that people without a car, as well as people who already had a car with more advanced automated features, showed a more positive attitude towards self-driving cars, possibly because the former see it as an opportunity to take part in traffic while the latter are more familiar with the technology (König & Neumayr, 2017). Lee et al. (2017) also found that people without a driver’s licence were more likely to use a self-driving car (Lee et al., 2017).
 
'''Benefits and concerns'''
 
It is well known that many car crashes are due to human error. The World Health Organization (2016) reported that road traffic injuries are the leading cause of death among people between the ages of 15 and 29 (World Health Organization, 2016). Raue et al. (2019) argue that removing human error from driving is one of the biggest potential benefits of self-driving cars. They also pose that driverless cars could potentially decrease congestion, increase mobility for non-drivers and create more efficient use of commuting time. Next to that, there are environmental benefits: when vehicles no longer need to be built with tank-like safety, they are lighter and consume less fuel (Bamonte, 2013; Parida et al., 2018; Raue et al., 2019).
 
König & Neumayr (2017) used a survey to judge people’s attitudes towards potential benefits and concerns. They found that people mostly value the fact that a self-driving car could solve the transport issues older and disabled people face. This is in accordance with Cox (2016) and Parida et al. (2018), who state that the driverless car has the potential to expand opportunity and can improve the lives of disabled people and others who are unable to drive (Cox, 2016; Parida et al., 2018). From the survey, König & Neumayr (2017) also found that people value being able to engage in other things than driving. Participants did not feel that self-driving cars would give them social recognition, nor that they would yield shorter travel times (König & Neumayr, 2017).
 
On the other hand, there are also some concerns indicated by König & Neumayr (2017). Their participants were mostly concerned with legal issues, followed by concerns about hackers. Lee et al. (2017) also found that especially older adults are concerned with self-driving cars being more expensive. Surprisingly, across all sub-groups people did not trust the functioning of the technology (König & Neumayr, 2017; Raue et al., 2019).
 
'''Sharing cars'''
 
While many people look positively towards the implementation of self-driving cars, fewer people are willing to buy one. Many people do not want to invest more money in self-driving cars than they currently do in conventional cars (Schoettle & Sivak, 2014). Therefore, a car-sharing scheme (e.g. a whole fleet provided by a mobility service company, or a ride-sharing scheme) is an option to make self-driving cars more popular. This way people would not have to spend a large sum of money, and they could gradually learn to trust the technology by first using the shared self-driving cars (König & Neumayr, 2017). According to Cox (2016), this is not necessarily true: since corporate mobility companies would then provide the cars, they have to cover costs such as vehicle operation, which will increase the fees for the user (Cox, 2016).
 
So how would it work when automated vehicles are used as shared vehicles? Cox (2016) assumes that companies will provide cars the same way they do now, renting them out short-term or long-term. Especially in large metropolitan areas, automated vehicles could substantially shorten a trip or solve current transportation problems (Cox, 2016; Parida et al., 2018). While cars are being shared, private ownership would still be possible, and people would be able to rent out their own cars short-term.
 
One option of sharing cars is to let people share a single ride. This could decrease the number of cars in an urban area and address issues like congestion, pollution and the difficulty of finding a parking spot (Parida et al., 2018). However, there are certain issues with ridesharing. Because not every person starts and stops in the same place, trips could actually take longer, making ridesharing less attractive; lowering its price might not even be enough to attract travellers. Ridesharing also raises another important question: do people want to share a car with strangers? As stated by Cox (2016), personal security concerns will probably only increase, and people will therefore not be willing to share a ride with someone they do not know.
 
An important notion is that vehicles are parked on average more than ninety percent of the time (Burgess, 2012). A driverless car fleet provided by a mobility company could therefore reduce the number of cars in a densely packed metropolitan city. However, these cars would be less attractive to users living in a more rural area, or to people who need to travel outside the urban area (Cox, 2016).
 
In the present day, many people use transit (e.g. train, metro, bus) in metropolitan areas, though this is not the fastest possible commute. Owen and Levinson (2014) found that many jobs can be reached in about half the time by car compared to transit. This is mostly because of the “last mile” problem: many destinations are beyond walking distance of a transit stop (Owen & Levinson, 2014). Driverless cars can be used to overcome this “last mile” problem by stationing them at transit stops. However, a fleet of driverless cars can have two consequences for transit. On the one hand, it can cause users to abandon transit because of the improved travel times and door-to-door access. On the other hand, many transit riders have a low income and will probably not be able to pay for a driverless-car alternative (Cox, 2016). Moreover, if the fares of driverless cars are too low, this might reduce the attractiveness of transit even further, causing people to use the driverless vehicle for the entire trip (Cox, 2016).
 
'''Acceptance'''
 
Many studies have delved into technology acceptance across various domains, and many different ways to determine the acceptance of self-driving cars are mentioned. Lee et al. (2017) found that across all ages, perceived usefulness, affordability, social support, lifestyle fit and conceptual compatibility are significant determinants (Lee et al., 2017; Raue et al., 2019). Raue et al. (2019) found that people’s risk and benefit perceptions, as well as their trust in the technology, relate to the acceptance of self-driving cars (Raue et al., 2019). According to Rogers (1995), the following factors increase the probability of wide-spread adoption of an innovation: relative advantage, compatibility (a steering wheel with a disengage button), trialability (test-drives), observability (car-sharing fleets), and complexity (a gradual introduction to automation) (König & Neumayr, 2017; Rogers, 1995).
 
As found by Lee et al. (2017), older adults are possibly not yet ready to let go of the steering wheel. They found that older generations have a lower overall interest and different behavioural intentions to use. However, people with more experience with technology seemed to be more accepting (Lee et al., 2017). Other studies did find that older adults are more likely to accept new in-vehicle technologies (Son, Park, & Park, 2015; Yannis, Antoniou, Vardaki, & Kanellaidis, 2010). Lee et al. (2017) also found that across all ages, people would be more likely to use a self-driving car if they were no longer able to drive themselves due to aging or illness (Lee et al., 2017).
As for the general public, Raue et al. (2019) looked into common psychological theories to assess people’s willingness to accept the self-driving car. They found that people who are familiar with actions or activities often perceive them to be less risky, and that people’s level of knowledge about a technology can affect how they understand its risks and benefits (Hengstler, Enkel, & Duelli, 2016; Raue et al., 2019). In that sense, affect is used as a decision heuristic (i.e. a mental shortcut) in which people rely on the positive or negative feelings associated with a risk (Visschers & Siegrist, 2018). Because negative emotions weigh more heavily than positive emotions, and people are more likely to recall a negative event, negative affect may lead people to judge self-driving cars as higher risk and lower benefit. This negative affect can be caused by many things, for example the loss of control from removing the steering wheel, or knowledge of accidents involving self-driving cars (Raue et al., 2019). Parida et al. (2018) stress the importance of public attitude and user acceptance of self-driving cars, as global market acceptance heavily relies on it (Parida et al., 2018).
 
= Method =
 
'''Research design'''
 
For this questionnaire, a non-probability convenience sampling method was applied that leveraged the group’s broad networks. Even though convenience sampling means the sample is not representative, it was a feasible way to reach the relevant audience and to collect first evidence. As the questionnaire was aimed at the general public, no strict geographical scope was applied, in order to reach as many different people as possible; this allows for first indications of drivers’ attitudes towards self-driving vehicles that are not tied to a particular region. The survey was conducted in the Netherlands.
 
 
'''Data collection'''
 
Data was collected over a one-week time frame in March 2021 using an online questionnaire built with Microsoft Forms (see appendix), a web-based survey tool. This method was chosen for several reasons: the assessed information was widely available among the public; due to Covid-19, an online approach made it easier to reach people while ensuring physical distancing; and by not requiring an interviewer to be present, it reduced potential interviewer bias as well as cost and time. Microsoft Forms was used because it offers a safe environment and meets EU privacy standards.
 
Respondents were reached by sending out emails and private messages on social media (e.g. WhatsApp), including both a personalized invitation letter, explicitly stating self-driving vehicles as the topic of the research, and a direct link to the online questionnaire. A consent form was included on the cover page of the questionnaire, where respondents were assured of anonymity and confidentiality. Given the study’s exploratory nature, reaching a large number of respondents was prioritized, with a target of at least 100. Completed surveys were eventually received from 115 respondents.
 
 
'''Measures'''
 
In the questionnaire, several relevant factors related to self-driving vehicles were examined. The main topics addressed in the questionnaire concerned general knowledge and were derived from our hypotheses, namely:
 
- Familiarity with self-driving vehicles
 
- Expected benefits of self-driving vehicles
 
- Concerns about different implementations of self-driving vehicles
 
- Favored ethical settings in self-driving vehicles
 
- Acceptance of legal responsibility in unavoidable crashes with self-driving vehicles
 
 
''Personal car use and demographics''


In the first part of the questionnaire, participants were asked whether they have a driver's license. Additionally, respondents were asked how often they drive a car, with the answer options ‘(almost) every day’, ‘weekly’, ‘monthly’, ‘annually’ and ‘never’. Furthermore, demographic questions regarding age and education were asked.


''Familiarity with self-driving vehicles''


Participants’ existing knowledge about self-driving vehicles was assessed. Respondents were presented with a set of rating questions using an even, four-point numerical Likert scale ranging from ‘unfamiliar’ (1) to ‘familiar’ (4).


''Expected benefits of self-driving vehicles''


Participants were further asked to rate their agreement with statements reflecting presumed benefits of the use of self-driving vehicles. To allow for a ‘neutral’ opinion, the statements were combined with a 5-point scale ranging from ‘very unlikely’ (1) to ‘very likely’ (5). A 5-point Likert scale was used because in forced-choice formats, such as a 4-point Likert scale, choices are contaminated by random guesses.


''Concerns about different implementations of self-driving vehicles''


After the expected benefits of self-driving vehicles, respondents were asked to rate their concerns regarding self-driving vehicles on a 4-point Likert scale ranging from ‘not concerned’ (1) to ‘very concerned’ (4).


''Favored ethical settings in self-driving vehicles''


The ethical setting in which participants would prefer to see self-driving vehicles on the road was assessed by having respondents rank 5 options from first to last choice. Furthermore, statements regarding ethical settings in self-driving vehicles were assessed with a 5-point Likert scale, to allow a neutral opinion, ranging from ‘strongly disagree’ (1) to ‘strongly agree’ (5).


''Acceptance of legal responsibility in unavoidable crashes with self-driving vehicles''


Lastly, participants were asked to rate their agreement with statements about legal responsibility in unavoidable crashes with self-driving vehicles, again with a 5-point Likert scale, to allow a neutral opinion, ranging from ‘very unlikely’ (1) to ‘very likely’ (5).




The full text of the questionnaire is included in the appendix.


= Results =




Completed surveys were received from 115 respondents. First, demographic questions were asked. Question 2, about the respondent's age, received 104 responses. Since this was an open question, appropriate intervals were constructed. The youngest respondent is 17 years old and the oldest is 80. Half of the respondents are between 17 and 30 years old, and the other half between 42 and 80; there were no respondents between the ages of 30 and 42. Since this points to an obvious dichotomy, the following two intervals are used:
 
- 51.3% <31
 
- 39.1% >41
 
- 9.6% no answer
 
 
Question 3, about the respondent's education, received 115 responses. 2.6% of the respondents have no education or incomplete primary education, 6.1% have a high school diploma, 7.8% are currently studying at MBO level, 32.2% are currently studying at HBO or WO level and do not have a diploma yet, 29.6% have an HBO or WO Bachelor diploma, 20.0% have an HBO or WO Master diploma and 1.7% have a PhD. Question 4, about having a driver's license, received 114 responses: 89.5% of the respondents have a driving license and 10.5% do not. Question 5, about the regularity of car use, received 114 responses. Of the respondents, 28.1% use their car (nearly) every day, 43.0% weekly, 24.5% monthly, 4.4% annually and 0.0% never.
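As a side note on how these shares are computed: each percentage is simply a category count divided by the number of valid responses. A minimal Python sketch, where the raw counts for the age intervals (59 / 45 / 11) are reconstructed from the reported percentages and are therefore illustrative rather than taken from the actual dataset:

```python
# Hypothetical reconstruction of the age-interval tally; the counts
# 59 / 45 / 11 are inferred from the reported percentages (51.3 / 39.1 / 9.6),
# not read from the real survey data.
def percentage_distribution(counts):
    """Return each category's share of the total, rounded to one decimal."""
    total = sum(counts.values())
    return {label: round(100 * n / total, 1) for label, n in counts.items()}

age_groups = {"<31": 59, ">41": 45, "no answer": 11}
print(percentage_distribution(age_groups))
# {'<31': 51.3, '>41': 39.1, 'no answer': 9.6}
```

The same helper reproduces every distribution reported in this section once the per-option counts are known.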
 
 
The question about how familiar respondents are with self-driving vehicles, question 6, received 114 responses. 22.8% of the respondents are unfamiliar with self-driving vehicles, 16.7% are somewhat unfamiliar, 42.1% are somewhat familiar and 18.4% are familiar.
Question 7, about the benefits of using a self-driving car, received 112 responses, in which 1 respondent did not respond to subquestions 1, 2 and 7, 1 respondent to subquestion 2, and 1 respondent to subquestions 3 up to 7. For all sub-questions of question 7, the possible answers are very unlikely, somewhat unlikely, no opinion/neutral, somewhat likely and very likely. The following percentages per answer occurred, as shown in figure 1 in the result section of the appendix:
 
- Fewer accidents: 0.9%, 14.0%, 4.4%, 54.4%, 26.3%
 
- Decreased severity of accidents: 8%, 19.5%, 16.8%, 45.1%, 10.6%
 
- Fewer traffic jams: 2.6%, 11.4%, 7%, 39.5%, 39.5%
 
- Shorter travel-length: 7.9%, 26.3%, 29.8%, 23.7%, 12.3%
 
- Lower vehicle emission: 1.8%, 12.3%, 19.3%, 36.0%, 30.7%
 
- Better fuel-savings: 0.9%, 7.9%, 6.1%, 38.6%, 46.5%
 
- Lower insurance rates: 5.3%, 13.3%, 26.5%, 37.2%, 17.7%
 
 
For question 8, about concerns related to self-driving vehicles, the first 39 responses were omitted and in total 76 valid responses were received. 1 respondent did not respond to subquestion 3, 1 to subquestion 4, 1 to subquestion 6, 3 to subquestion 12, 1 to subquestions 2 up to 6, 8 up to 10 and 12, 1 to subquestions 1 up to 12 and 1 to subquestions 2 up to 12. For all subquestions of question 8, the possible answers are not concerned, slightly concerned, concerned and very concerned. The following percentages per answer occurred, as shown in figure 2 in the appendix:
 
- Driving in a vehicle with autonomous technology: 32.0%, 48.0%, 14.7%, 5.3%
 
- Safety-consequences of device-malfunction or system failure:  9.6%, 43.8%, 41.5%, 15.1%
 
- Legal liability for drivers/owners: 13.9%, 43.1%, 36.1%, 6.9%
 
- System security (against hackers): 12.5%, 33.3%, 40.3%, 13.9%
 
- Data privacy (location and destination): 31.5%, 36.9%, 17.8%, 13.7%
 
- Interaction with non-self driving vehicles: 26.4%, 22.2%, 38.9%, 12.5%
 
- Interaction with pedestrians or cyclists: 13.5%, 39.2%, 33.8%, 13.5%
 
- Learning to use self-driving cars: 64.4%, 24.6%, 9.6%, 1.4%
 
- System performance under bad weather conditions: 36.1%, 50.0%, 9.7%, 4.2%
 
- Confused self-driving vehicles in unpredictable conditions: 8.2%, 43.9%, 35.6%, 12.3%
 
- Driving ability of self-driving vehicles compared to humans: 46.0%, 39.2%, 13.5%, 1.3%
 
- Driving in a vehicle without a human able to intervene: 12.9%, 15.7%, 44.3%, 27.1%
 
 
Question 9, about the ethical settings preferred in self-driving vehicles on the road, received 115 responses. This question asked respondents to rank 5 options by decreasing preference, from favorite to least favorite, as shown in figure 3 in the appendix. The percentages below run from favorite to least favorite:
 
- Option 1, the self-driving car should always choose to do the least amount of damage to the least amount of people and to minimize overall harm: 59.1%, 26.1%, 10.4%, 2.6%, 1.7%
 
- Option 2, the car should not be allowed to make an explicit choice between human lives and therefore should not intervene in the case of an unavoidable accident, resulting in a random victim: 16.5%, 25.2%, 13.0%, 24.3%, 20.9%
 
- Option 3, the car should always prioritize the lives and health of the passengers above those of bystanders: 13.0%, 15.7%, 29.6%, 27.8%, 13.9%
 
- Option 4, the car should always prioritize the lives and health of the bystanders above those of the passengers: 7.0%, 23.5%, 24.3%, 27.8%, 17.4%
 
- Option 5, the choice that the car makes should be based on what the majority of road-users want: 4.3%, 9.6%, 22.6%, 17.4%, 46.1%
 
Question 10, about different issues regarding implementing ethical settings in self-driving vehicles, received 112 responses. 1 respondent did not respond to subquestions 2 and 3, and 2 respondents did not respond to subquestion 3. The possible answers are strongly disagree, disagree, no opinion/neutral, agree and strongly agree, as shown in figure 4 in the appendix:
 
- Self-driving vehicles must not be sold with an ethical setting that the user can adjust; instead, this setting must be determined by, for example, the government or the manufacturer: 0.9%, 7.1%, 13.3%, 33.6%, 45.1%
 
- I would rather buy a self-driving car with an adjustable ethical setting than a self-driving car with an unadjustable ethical setting: 30.4%, 33.0%, 18.8%, 14.3%, 3.6%
 
- When I use a self-driving car, the specific ethical setting would be important to me: 8.1%, 13.5%, 23.4%, 37.8%, 16.2%
 
 
Question 11, about liability and responsibility in crashes with self-driving cars, received 110 responses. 2 respondents did not respond to subquestion 2, 1 to subquestion 4, 1 to subquestion 5 and 1 person did not respond to subquestions 2, 3, 4 and 5. Blank answers are not included in the following percentages. For all subquestions of question 11, the possible answers are very unlikely, somewhat unlikely, no opinion/neutral, somewhat likely and very likely. The following percentages per answer occurred, as shown in figure 5 in the appendix:
 
- Manufacturers are fully liable when a self-driving car causes an accident, even if this discourages them from producing self-driving cars: 2.6%, 26.1%, 8.7%, 38.3%, 24.3%
 
- Manufacturers are partially liable, in order to make them produce self-driving cars while encouraging them to correct errors: 2.7%, 9.7%, 13.3%, 52.2%, 22.1%
 
- I am liable when my self-driving car causes an accident, even if I cannot intervene (fully autonomous vehicle): 49.1%, 27.2%, 9.6%, 11.4%, 2.6%
 
- I am liable when my self-driving car causes an accident, because I have the possibility to intervene (semi-autonomous vehicle): 2.7%, 9.7%, 12.4%, 46.0%, 29.2%
 
- Everyone with a self-driving car is liable when a self-driving car causes an accident, by means of mandatory insurance or tax: 3.6%, 17.0%, 32.1%, 28.6%, 18.8%
 
 
When the answers are grouped by age, as mentioned in question 8, 46.43% of the respondents below 31 are not concerned about driving in a self-driving vehicle, against only 23.08% of the respondents above 41. This is shown in table 1 in the results appendix. Also, only 21.43% of the respondents below 31, against 42.1% of those above 41, are concerned or very concerned about data privacy, as shown in table 2 in the appendix. When the answers are grouped by car usage, 26.92% of the respondents who use a car every day, but only 16.67% of those who use a car monthly, are concerned or very concerned about driving with autonomous technology, as shown in table 3 in the appendix. Additionally, as shown in table 4 in the appendix, 43.75% of the respondents who use a car every day, but only 21.88% of those who use a car monthly, strongly disagree with an adjustable ethical setting in self-driving cars. Furthermore, 88.09% of the respondents who rate themselves somewhat familiar or familiar with self-driving vehicles, against only 33.97% of those who rate themselves somewhat unfamiliar or unfamiliar, are not concerned about driving in self-driving cars, as shown in table 5 in the appendix.
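These grouped comparisons boil down to within-group percentages: for each subgroup, count how many respondents chose a given answer and divide by the subgroup size. A minimal sketch, using made-up example records (the field names and data below are illustrative, not the actual survey responses):

```python
from collections import defaultdict

def within_group_share(records, group_key, answer_key, answers):
    """Per group: percentage of respondents whose answer is in `answers`."""
    totals = defaultdict(int)  # respondents per group
    hits = defaultdict(int)    # respondents per group matching `answers`
    for r in records:
        g = r[group_key]
        totals[g] += 1
        if r[answer_key] in answers:
            hits[g] += 1
    return {g: round(100 * hits[g] / totals[g], 2) for g in totals}

# Illustrative records only; not the real dataset.
records = [
    {"age_group": "<31", "concern": "not concerned"},
    {"age_group": "<31", "concern": "slightly concerned"},
    {"age_group": ">41", "concern": "concerned"},
    {"age_group": ">41", "concern": "not concerned"},
]
print(within_group_share(records, "age_group", "concern", {"not concerned"}))
# {'<31': 50.0, '>41': 50.0}
```

Passing a set like `{"concerned", "very concerned"}` as `answers` gives the combined shares used in the comparisons above.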
 
= Discussion and conclusion =
 
Overall, we can conclude that people believe in the advantages of self-driving vehicles and have a positive attitude towards them, especially when they are more familiar with self-driving vehicles, which is in line with the literature. Respondents who drive more often are more concerned about driving with autonomous technology and are less open to the benefits, a finding supported by König & Neumayr (2017).
 
It can also be concluded that self-driving vehicles will be much more accepted when an option to intervene is implemented, which is in line with findings from Rupp & King (2010), who stated that people do not want to lose their sense of freedom. Additionally, system security has to be reliable for self-driving vehicles to be accepted: although many people see the advantage of fewer accidents, a system security breach is feared. Also, more than half of the respondents are concerned about self-driving vehicles getting confused in unpredictable situations. In the future, this aspect may improve with developments in artificial intelligence.
 
Older people are more concerned about driving in a self-driving vehicle and about the driving ability of the self-driving car compared to human driving ability, which is in line with other surveys pointing out that the elderly see fewer advantages than younger people (Lee et al., 2017). This might be because older people are more used to conventional cars and less used to automated cars, artificial intelligence and technology. Also, younger people are less worried about data privacy. Still, more than 50% of all respondents are worried or very worried about data privacy, which does not support our hypothesis that people would express little concern for privacy. Existing privacy laws should be adapted to this new technology, and new laws should be implemented to increase acceptance of self-driving cars. When comparing education levels, higher-educated people are less concerned about driving in a self-driving car, about legal liability and about the driving ability of the self-driving car compared to human driving ability, while lower-educated people are less worried about the interaction with conventional cars and with pedestrians or cyclists.
 
As for ethical settings, respondents prefer self-driving cars to always choose to do the least amount of damage to the least amount of people and to minimize overall harm, a setting that corresponds to utilitarianism. This matches results from similar surveys on ethical settings in self-driving cars from the perspective of the private end-user, such as Bonnefon et al. (2016). In that survey, 76% of respondents (n=2000) preferred self-driving cars to sacrifice one passenger rather than kill ten bystanders, which shows a clear preference for a utilitarian setting. However, from similar literature it can be concluded that people prefer to buy cars that give preferential treatment to themselves (Bonnefon et al., 2016; Nyholm, 2018).
 
Respondents preferred the contractualism setting, where the choice the car makes is based on what the majority of road users want, the least. There is no specific preference among the other three settings. The deontological setting, in which the car does not choose between human lives but lets the outcome be determined randomly, is both the second most and the second least preferred setting. Since people do not like non-human entities such as technology making life-or-death decisions, respondents probably appreciated this setting because randomness carries an element of fairness.
 
There was no clear preference for the ethical settings in which the car would always prioritize the life and health of bystanders over that of the occupants (virtue ethics) or always prioritize the occupants over bystanders (egoism). Respondents wanted the ethical setting to be fixed by the manufacturer, although the type of setting remains important. If the manufacturer carries this responsibility, all self-driving vehicles would be programmed with the same ethical setting, which makes behaviour on the road more predictable and safer. Other research also shows that people do not want an 'ethical knob' (Li et al., 2016). Moreover, 78.7% of the respondents agree that self-driving vehicles must not be sold with an ethical setting that the user can adjust, and respondents who drive more often (strongly) disagree with an adjustable ethical setting.
 
Furthermore, 74.3% of the respondents would use a self-driving car when manufacturers are partially liable, so that manufacturers keep producing self-driving cars while being encouraged to correct errors. However, respondents could be biased by the formulation of the answers: the partial-liability answer includes the positively formulated word 'encourage', whereas the full-liability answer includes the negatively formulated word 'discourage'. 75.2% of the respondents would use a self-driving car when they are liable themselves as long as they can intervene, which is the highest response. Again, implementing an option to intervene can be advised. Respondents who drive more often are less likely to use autonomous vehicles if they are liable themselves.
 
The overall discussion about self-driving vehicles, however, should come to an end. The technology is almost ready, and the cars should be introduced as soon as possible for the sake of road safety. The minimal damage done by self-driving vehicles is calculated based on what the technology can currently do. Developments in artificial intelligence are not yet far enough to distinguish between an 80-year-old man and a 5-year-old child; such specific distinctions cannot be programmed into an artificial intelligence system. Helmet or no helmet, jeans or motorcycle clothing: it is not recognizable for AI. And how often does a driver of a conventional car actually have to choose between an 80-year-old man and a 5-year-old child? People do not know what they would decide themselves, so artificial intelligence cannot know either. The chance that such a choice really has to be made is so small that these examples are almost purely hypothetical. Self-driving vehicles remove human error, which makes them a good innovation, so why are people so focused on ethical issues that hinder their acceptance? These ethical dilemmas greatly inhibit the innovation of self-driving vehicles, and the discussion has been going on for more than thirty years now. The best option is to implement a random choice in accidents, or to save the driver so that the car itself suffers as little damage as possible: this is the easiest to introduce and closest to how people drive now. Also, the introduction of self-driving vehicles should happen uniformly, because a mix of self-driving vehicles and conventional cars on the road is problematic.
 
== Survey limitations ==
 
The majority of our respondents are highly educated. We have too few lower-educated respondents to compare between levels of education; we can only compare with confidence between age groups, frequency of car use and familiarity with self-driving vehicles. Comparisons between levels of education are made nonetheless, but they are not very reliable. The absence of respondents between 31 and 40 years old can negatively affect the accuracy of our survey, because the opinion of this age group might differ from the other two age groups. Smaller, more specific age groups would also have worked better if there had been more respondents: currently the elderly are in the same category as, for example, people in their forties. Also, just 10.5% of the respondents did not have a driver's license. People without a driver's license may have different opinions, because self-driving cars would make car use more accessible to them; this group is underrepresented. 22.8% of the respondents indicate that they are unfamiliar with self-driving vehicles. If they picked this answer accurately, it means that they have never heard of self-driving vehicles and are unaware of what a self-driving car is and of its possible advantages or disadvantages. This is almost a quarter of the respondents, and they might have given somewhat random answers. No one responded that they never use a car, and few use one only on a yearly basis, which is why these answers were grouped together with monthly use.
 
A few issues could have had an impact on the outcome of our survey. A bias in the results could be due to participant bias: the tendency of people to give the answer that is socially desirable, or desirable to the experimenter. Although the survey was anonymous, this could influence the answers; especially for the ethical settings, people could feel obliged to choose the answer that would be most accepted and politically correct. Additionally, some questions were not answered by some respondents. These missing answers were treated as if they did not exist. This could have been avoided by making the questions compulsory, so that the form can only be submitted when every question is answered. Percentages were calculated from the number of answers to a specific question, not from the total number of respondents; differences in the total number of answers per question can reduce the accuracy of the research. Also, there are great differences in the completion time of the survey, ranging from 2 minutes and 10 seconds to 14 minutes and 45 seconds. Two minutes seems too short to fill in the survey seriously, so some questions may have been answered carelessly. There is also a major problem with survey question 8: when the form was first opened, question 8 included five possible answers: 'not worried', 'somewhat worried', 'neutral', 'worried' and 'very worried'. After 39 respondents had already submitted the form, the 'neutral' option was removed from the list because it did not fit the scale: neutral can be interpreted as 'not worried' and does not belong between 'somewhat worried' and 'worried'. The answers of these first 39 respondents were omitted from the results, because the confusing scale may have negatively affected the accuracy of their answers.
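The denominator point can be made concrete with a small sketch (illustrative only, not the analysis code actually used): each percentage is computed over the answers given to that particular question, so questions with skipped answers have different denominators.

```python
# Illustrative sketch of the per-question percentage calculation.
# Skipped answers (None) are excluded, so the denominator is the
# number of answers to this question, not the number of respondents.
def answer_percentages(answers):
    """answers: list of responses to one question; None marks a skipped question."""
    given = [a for a in answers if a is not None]
    counts = {}
    for a in given:
        counts[a] = counts.get(a, 0) + 1
    return {option: 100 * n / len(given) for option, n in counts.items()}

# Example: 10 respondents, of whom 2 skipped this question.
responses = ["worried"] * 4 + ["not worried"] * 4 + [None, None]
print(answer_percentages(responses))  # {'worried': 50.0, 'not worried': 50.0}
```

With 10 respondents but 8 answers, each group of 4 counts as 50% rather than 40%, which is exactly the difference between the two possible denominators discussed above.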
 
== Future work ==
 
In future research this same set-up might be repeated, but with proper subgroups: we lacked a substantial number of older adults, lower-educated people, and people without a driver's licence. Future research could also delve more into the opinions of other stakeholders, such as manufacturers or the government. It would also be wise to repeat this research on a large scale with government support, so that it might influence and speed up the introduction and acceptance of self-driving cars. Most research on this topic, including our own, is exploratory in nature, mainly because the technology is quite new and not yet readily available. Now that self-driving technology is becoming more and more a reality, it is time for large-scale non-exploratory research. Almost none of the references listed give concrete recommendations for implementing a specific ethical theory, or for promoting acceptance. If the technology is to be accepted by the broader public, such recommendations, argued from an academic perspective, are a step, or even a big leap, in the right direction.
 
= References =
 
Abraham, H., Lee, C., Brady, S., Fitzgerald, C., Mehler, B., Reimer, B. & Coughlin, J.F. (2016). Autonomous Vehicles, Trust, and Driving Alternatives: A survey of consumer preferences. MIT AgeLab. Retrieved from https://bestride.com/wp-content/uploads/2016/05/MIT-NEMPA-White-Paper-2016-05-30-final.pdf
 
Adee, S. (2016, September 21). Germany to create world's first highway code for driverless cars. Newscientist. https://www.newscientist.com/article/mg23130923-200-germany-to-create-worlds-first-highway-code-for-driverless-cars/
 
Amoozadeh, M., Raghuramu, A., Chuah, C. N., Ghosal, D., Zhang, H. M., Rowe, J., & Levitt, K. (2015). Security vulnerabilities of connected vehicle streams and their impact on cooperative driving. IEEE Communications Magazine, 53(6), 126–132. https://doi.org/10.1109/mcom.2015.7120028
 
Bamonte, T. J. (2013). Autonomous Vehicles - Drivers for Change. Retrieved March 23, 2021, from https://www.roadsbridges.com/sites/rb/files/05_autonomous vehicles.pdf
 
Boeglin, J. (2015). The costs of self-driving cars: reconciling freedom and privacy with tort liability in autonomous vehicle regulation. Yale JL & Tech., 17, 171.

Bonnefon, J.F., Shariff, A. & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573–1576. https://doi.org/10.1126/science.aaf2654

Bloom, C., Tan, J., Ramjohn, J., & Bauer, L. (2017). Self-driving cars and data collection: Privacy perceptions of networked autonomous vehicles. In Thirteenth Symposium on Usable Privacy and Security (SOUPS 2017) (pp. 357–375).

Burgess, S. (2012, June 23). Parking: It’s What Your Car Does 90 Percent of the Time. Autoblog. Retrieved from https://www.autoblog.com/2012/06/23/parking-its-what-your-car-does-90-percent-of-the-time/?guccounter=1


Cox, W. (2016). Driverless Cars and the City: Sharing Cars, Not Rides. Cityscape: A Journal of Policy Development and Research, 18(3). Retrieved from http://www.newgeography.com/content/003899-plan-bay-area-telling-people-what-do


Douma, F., & Palodichuk, S. A. (2012). Criminal Liability Issues Created by Autonomous Vehicles. Santa Clara Law Review, 52(4), 1157–1169. Retrieved from http://digitalcommons.law.scu.edu/lawreview


Driver, J. (2014). The History of Utilitarianism (Stanford Encyclopedia of Philosophy). Retrieved April 7, 2021, from https://plato.stanford.edu/entries/utilitarianism-history/

Duranton, G. (2016). Transitioning to Driverless Cars. Cityscape, 18(3), 193-196. Retrieved February 7, 2021, from http://www.jstor.org/stable/26328282


Elbanhawi, M., Simic, M., & Jazar, R. (2015). In the passenger seat: investigating ride comfort measures in autonomous cars. IEEE Intelligent transportation systems magazine, 7(3), 4-17.


Hartwich, F., Beggiato, M., & Krems, J. F. (2018). Driving comfort, enjoyment and acceptance of automated driving – effects of drivers’ age and driving style familiarity. Ergonomics, 61(8), 1017–1032. https://doi.org/10.1080/00140139.2018.1441448

Hengstler, M., Enkel, E., & Duelli, S. (2016). Applied artificial intelligence and trust-The case of autonomous vehicles and medical assistance devices. Technological Forecasting and Social Change, 105, 105–120. https://doi.org/10.1016/j.techfore.2015.12.014

Hevelke, A., & Nida-Rümelin, J. (2015). Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis. Science and Engineering Ethics, 21(3), 619–630. https://doi.org/10.1007/s11948-014-9565-5

Howard, D. (2013). Robots on the Road: The Moral Imperative of the Driverless Car. Retrieved March 7, 2021, from Science Matters website: http://donhoward-blog.nd.edu/2013/11/07/robots-on-the-road-the-moral-imperative-of-the-driverless-car/#.U1oq-1ffKZ1

Husak, D. (2004). Vehicles and Crashes: Why is this Moral Issue Overlooked? Social Theory and Practice, 30(3), 351–370. Retrieved from https://www.jstor.org/stable/23562447?seq=1

Jiang, J.J., Muhanna, W.A., & Klein, G. (2000). User resistance and strategies for promoting acceptance across system types. Information & Management, 37(1), 25–36.

König, M., & Neumayr, L. (2017). Users’ resistance towards radical innovations: The case of the self-driving car. Transportation Research Part F: Traffic Psychology and Behaviour, 44, 42–52. doi:10.1016/j.trf.2016.10.013

Lee, J. D. & See, K. A. (2004). Trust in Automation: Designing for Appropriate Reliance. Human Factors: The Journal of the Human Factors and Ergonomics Society, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392

Lee, C., Ward, C., Raue, M., D’Ambrosio, L., & Coughlin, J. F. (2017). Age differences in acceptance of self-driving cars: A survey of perceptions and attitudes. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 10297 LNCS, 3–13. https://doi.org/10.1007/978-3-319-58530-7_1

Li, J., Zhao, X., Cho, M., Ju, W., & Malle, B. (2016). From trolley to autonomous vehicle: Perceptions of responsibility and moral norms in traffic incidents with self-driving. Society of Automotive Engineers World Congress.

Lin, P. (2016). Why Ethics Matters for Autonomous Cars. doi:10.1007/978-3-662-48847-8_4.

Liu, P., Wang, L., & Vincent, C. (2020). Self-driving vehicles against human drivers: Equal safety is far from enough. Journal of Experimental Psychology: Applied, 26(4), 692–704.
Nagel, T. (1982). Moral Luck. Oxford University Press.

Nyholm, S. & Smids, J. (2016). The Ethics of Accident-Algorithms for Self-Driving Cars: an Applied Trolley Problem? Ethical Theory and Moral Practice, 1275–1289.

Nyholm, S. (2018). The ethics of crashes with self‐driving cars: A roadmap, II. Philosophy Compass. 13:e12506. https://doi.org/10.1111/phc3.12506

Marchant, G. E., & Lindor, R. A. (2012). The Coming Collision Between Autonomous Vehicles and the Liability System. Santa Clara Law Review, 52(4). Retrieved from http://digitalcommons.law.scu.edu/lawreview

McManus, R., & Rutchick, A. (2018). Autonomous vehicles and the attribution of moral responsibility. Social Psychological and Personality Science, 1–8.

Millar, J. (2014). Technology as moral proxy: Autonomy and paternalism by design. IEEE Ethics in Engineering, Science and Technology Proceedings, IEEE Xplore. https://doi.org/10.1109/ETHICS.2014.6893388

Millard-Ball, A. (2016). Pedestrians, Autonomous Vehicles, and Cities. Journal of Planning Education and Research, 38(1), 6–12. https://doi.org/10.1177/0739456x16675674

Minch, R. P. (2004). Privacy Issues in Location-Aware Mobile Devices. doi:10.1109/HICSS.2004.1265320.

Mobility, public transport and road safety. (n.d.). Retrieved from Government of the Netherlands: https://www.government.nl/topics/mobility-public-transport-and-road-safety/self-driving-vehicles

Oliveira, L., Proctor, K., Burns, C. G., & Birrell, S. (2019). Driving Style: How Should an Automated Vehicle Behave? Information, 10(6), 219. MDPI AG. Retrieved from http://dx.doi.org/10.3390/info1006021

Owen, A., & Levinson, D. (2014). Access Across America: Transit 2014, Final Report. Minneapolis, MN.

Parida, S., Franz, M., Abanteriba, S., & Mallavarapu, S. (2018). Autonomous Driving Cars: Future Prospects, Obstacles, User Acceptance and Public Opinion. Advances in Intelligent Systems and Computing, 786, 318–328. https://doi.org/10.1007/978-3-319-93885-1_29

Parkinson, S., Ward, P., Wilson, K. & Miller, J. (2017). "Cyber Threats Facing Autonomous and Connected Vehicles: Future Challenges." IEEE Transactions on Intelligent Transportation Systems, 18(11), pp. 2898-2915. doi: 10.1109/TITS.2017.2665968.

Pöllänen, E., Read, G. J. M., Lane, B. R., Thompson, J., & Salmon, P. M. (2020). Who is to blame for crashes involving autonomous vehicles? Exploring blame attribution across the road transport system. Ergonomics, 63(5), 525–537. https://doi.org/10.1080/00140139.2020.1744064

Raue, M., D’Ambrosio, L. A., Ward, C., Lee, C., Jacquillat, C., & Coughlin, J. F. (2019). The Influence of Feelings While Driving Regular Cars on the Perception and Acceptance of Self-Driving Cars. Risk Analysis, 39(2), 358–374. https://doi.org/10.1111/risa.13267


Rogers, E. M. (1995). Diffusion of Innovations (4th ed.). Retrieved from https://books.google.nl/books?hl=nl&lr=&id=v1ii4QsB7jIC&oi=fnd&pg=PR15&dq=Rogers,+E.+M.+(1995).+Diffusion+of+innovations.+New+York.&ots=DMTurPTs7S&sig=gXeTkHXQsnxXXpy5dprofoJMhRQ#v=onepage&q=Rogers%2C

Rupp, J. D., & King, A. G. (2010). Autonomous Driving - A Practical Roadmap.

Sandberg, A., & Bradshaw‐Martin, H. (2013). What do cars think of trolley problems: Ethics for autonomous cars? In J. Romportl et al. (Eds.), Beyond AI: Artificial Golem Intelligence, Conference Proceedings. Retrieved from https://www.beyondai.zcu.cz/files/BAI2013_proceedings.pdf

Sarcinelli, R., Guidolini, R., Cardoso, V., Paixão, T., Berriel, R., Azevedo, P., De Souza, A., Badue, C., & Oliveira-Santos, T. (2019). Handling pedestrians in self-driving cars using image tracking and alternative path generation with Frenét frames. Computers & Graphics, 84, 173–184. https://doi.org/10.1016/j.cag.2019.08.004

Schoettle, B. & Sivak, M. (2014). A Survey of Public Opinion about Autonomous and Self-Driving Vehicles in the U.S., the U.K., and Australia. Michigan: The University of Michigan. https://deepblue.lib.umich.edu/bitstream/handle/2027.42/108384/103024.pdf?sequence=1&isAllowed=y

Schoettle, B., & Sivak, M. (2014). Public opinion about self-driving vehicles in China, India, Japan, the U.S. and Australia. Retrieved from http://www.umich.edu/~umtriswt

Shladover, S. (2016). The truth about "self-driving" cars. Scientific American, 314(6), 52–57. doi:10.2307/26046990

Silberg, G., Wallace, R., Matuszak, G., Plessers, J., Brower, C., & Subramanian, D. (2012). Self-driving cars: The next revolution. KPMG LLP & Center of Automotive Research.


Son, J., Park, M., & Park, B. B. (2015). The effect of age, gender and roadway environment on the acceptance and effectiveness of Advanced Driver Assistance Systems. Transportation Research Part F: Traffic Psychology and Behaviour, 31, 12–24. https://doi.org/10.1016/j.trf.2015.03.009


Steg, L. (2005). Car use: Lust and must. Instrumental, symbolic and affective motives for car use. Transportation Research part A: Policy and Practice, 39(2), 147-162

Straub, J., McMillan, J., Yaniero, B., Schumacher, M., Almosalami, A., Boatey, K., & Hartman, J. (2017). CyberSecurity considerations for an interconnected self-driving car system of systems. 2017 12th System of Systems Engineering Conference, SoSE 2017. https://doi.org/10.1109/SYSOSE.2017.7994973

Teoh, E. R. & Kidd, D. G. (2017). Rage against the machine? Google’s self-driving cars versus human drivers. Journal of Safety Research, 63, 57–60. https://doi.org/10.1016/j.jsr.2017.08.008

Visschers, V. H. M., & Siegrist, M. (2018). Differences in risk perception between hazards and between individuals. In Psychological Perspectives on Risk and Risk Analysis: Theory, Models, and Applications (pp. 63–80). https://doi.org/10.1007/978-3-319-92478-6_3

Wagner M. & Koopman P. (2015) A Philosophy for Developing Trust in Self-driving Cars. In: Meyer G., Beiker S. (eds) Road Vehicle Automation 2. Lecture Notes in Mobility. Springer, Cham. https://doi.org/10.1007/978-3-319-19078-5_14

Wakabayashi, D. (2018). Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam. The New York Times, Technology. https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html

World Health Organization. (2016). Road traffic injuries.

Yannis, G., Antoniou, C., Vardaki, S., & Kanellaidis, G. (2010). Older Drivers’ Perception and Acceptance of In-Vehicle Devices for Traffic Safety and Traffic Efficiency. Journal of Transportation Engineering, 136(5), 472–479. https://doi.org/10.1061/(ASCE)TE.1943-5436.0000063

= Appendix =

=== '''Survey: Acceptance of fully self-driving cars''' ===

'''Consent Form'''

Consent to research participation for the study 'Acceptance of fully self-driving cars'.
This document provides you with information about the study 'Acceptance of fully self-driving cars'.

Voordat het experiment begint is het belangrijk dat u kennis neemt van de werkwijze die bij dit experiment gevolgd wordt en dat u instemt met vrijwillige deelname. Leest u dit document a.u.b. aandachtig door.


Enthusiasts live more often in urban areas because they are more familiar with driving in congested traffic and therefore feel a greater need for more efficient driving methods. Although older people might eventually be unable to use conventional driving methods, they are still unwilling to accept SDCs. These differences must be studied so that conclusions can be drawn about how to implement SDCs in such a way that more people will enjoy them. For example, manual driving options could be kept, so that sceptics do not lose the joy of driving. For all results, see the paper.
''Purpose and benefit of the experiment''


The goal of this study is to measure which relevant factors contribute to the acceptance of the fully self-driving car for private use.
The study is carried out by the students Laura Smulders, Sam Blauwhof, Joris van Aalst, Roel van Gool and Roxane Wijnen of Eindhoven University of Technology, under the supervision of dr. ir. M.J.G. van de Molengraft.


'''Machine learning, social learning and the governance of self-driving cars'''
''Procedure''


Developers of SDCs should aim to make them safer. That is hard, because innovation is highly uncertain: you do not know exactly what your final product will be, so it is hard to know exactly what needs to be safe, and for what purpose. The final design can differ from the current design, so you end up testing a different thing. The focus should be on social learning: the system needs to learn from society, and society needs to learn from the system. Much can also be learned from historical cases. The algorithmic architecture of the programming begins with if-else rules, but real traffic situations are too complex for this approach alone; the system should also learn from vast real-world datasets. Problems can arise because regulations are not necessarily based on real-world needs and may be arbitrary. Developers are not even capable of seeing how the system is learning from the data, so explicit problems need to be defined beforehand. 'Self-driving' and 'autonomous' cars are misnomers, because they are never truly autonomous: they are driven by social goals, and technology can never have a will of its own. People must be made aware of the limits. This is why the German government asked Tesla to rename the Autopilot function after failures: it is dangerous for people to think the system is completely safe. Tesla never connected the failures to its own shortcomings, but it did install technological alternatives when it noticed some components were not good enough. Autonomous cars are not as independent as people tend to believe. They should be well trained, and it would therefore be positive to democratize the learning, so that every company can maximize the outcome and the safety.
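The layered approach described above (explicit if-else rules as auditable boundaries, combined with a component that learns from data) can be sketched roughly as follows. All thresholds and the stand-in "learned" policy are invented for illustration and do not come from any real system:

```python
# Sketch of a hybrid architecture: hand-written if-else rules act as
# behavioral boundaries around a data-driven policy. All thresholds and
# the stand-in "learned" policy are illustrative only.

def learned_policy(distance_m: float, speed_ms: float) -> str:
    """Stand-in for a component trained on vast real-world datasets."""
    # Faked here with a simple time-to-collision heuristic.
    return "brake" if distance_m / max(speed_ms, 0.1) < 2.0 else "cruise"

def decide(distance_m: float, speed_ms: float) -> str:
    # Rule layer checked first: explicit, auditable boundaries.
    if distance_m < 5.0:
        return "emergency_brake"
    action = learned_policy(distance_m, speed_ms)
    # The rule layer can also veto unsafe learned outputs.
    if action == "cruise" and distance_m < 15.0:
        return "brake"
    return action

print(decide(3.0, 20.0))    # rule fires before the model: emergency_brake
print(decide(100.0, 20.0))  # learned policy decides: cruise
```

The point of the sketch is structural: the learned component handles the open-ended real world, while the hand-written rules remain in place as explicit behavioral limits.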
You complete this study online in your web browser. You will be asked a number of questions about the following relevant factors: user perspective, safety, ethical setting, and responsibility. Some additional demographic questions will also be asked.


''Duration''


'''A Philosophy for Developing Trust in Self-driving Cars'''
The study takes approximately 5-10 minutes.


Cars are becoming more automated, and this will reduce accident rates; automation can eliminate accidents caused by inattentive drivers. However, humans react far better to situations they have not been explicitly trained for. The world is an unstructured environment, and even thousands of test miles cannot eliminate some failures. Inductive inference, for example machine learning, is crucial in building solid software: a computer can learn for itself what the clearest features of pedestrians are and how to react to them.
''Voluntary participation''


Situations that occur only rarely are hard for programmers to take into account, and they are not easy to learn through experience either. According to Popper, a theory is only meaningful when it is falsifiable, because only one negative example is needed to falsify it. According to the author, a single accident likewise makes the safety case more meaningful. No confirmatory tests should be executed; rather, the goal should be a negative test result, so that we know what to improve. Field testing costs too much money to run for a long time, and simulation testing does not fit either, because one will never simulate situations one does not expect to take place. Fuzz testing is a well-fitting alternative, but it is not very efficient, because it uses random values, a great part of which are not very interesting to test. The Ballista project uses dictionaries of interesting values to test and is therefore more likely to find big vulnerabilities. The conclusion is that the tester should aim to find flaws, instead of gathering never-ending evidence that the system works at all times.
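The contrast between random fuzzing and Ballista-style dictionary testing can be shown on a toy example. The function under test and the "interesting value" dictionary below are invented for this sketch; the point is that deliberately chosen boundary values expose flaws that random values in a plausible range are unlikely to hit:

```python
import math
import random

def braking_distance(speed_ms: float) -> float:
    """Toy function under test: it never validates its input."""
    return speed_ms ** 2 / (2 * 7.5)  # assumes ~7.5 m/s^2 deceleration

def is_flaw(speed: float) -> bool:
    """Flag nonsensical or crashing results."""
    try:
        d = braking_distance(speed)
        return not (0 <= d < 1e6) or math.isnan(d)
    except OverflowError:
        return True

# Random fuzzing: 10,000 plausible speeds find nothing.
random.seed(0)
fuzz_hits = sum(is_flaw(random.uniform(0, 60)) for _ in range(10_000))

# Ballista-style dictionary of deliberately interesting values.
dictionary = [0.0, -1.0, float("inf"), float("nan"), 1e308]
dict_hits = sum(is_flaw(v) for v in dictionary)

print(fuzz_hits)  # 0
print(dict_hits)  # 3 (inf, nan and the huge value all expose missing checks)
```

Five hand-picked values find flaws that ten thousand random ones miss, which is the efficiency argument made above.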
Your participation is entirely voluntary. You may refuse to take part in the study without giving reasons, and you may stop your participation at any moment by closing the browser. You may also refuse afterwards (within 24 hours) to let your data be used for the study. None of this will ever have negative consequences for you.


''Confidentiality''


'''The ethics of crashes with self‐driving cars: A roadmap, I'''
We do not share personal information about you with people outside the research team. The information collected in this research project is used for writing scientific publications and is reported only at group level. Everything is fully anonymous and nothing can be traced back to you. Only the researchers know your identity, and that information is stored carefully under lock and key.


Self‐driving cars hold out the promise of being much safer than regular cars. Yet they cannot be 100% safe. Accordingly, they need to be programmed for how to deal with crash scenarios. Should cars be programmed to always prioritize their owners, to minimize harm, or to respond to crashes on the basis of some other type of principle? The article first discusses whether everyone should have the same “ethics settings.” Next, the oft‐made analogy with the trolley problem is examined. Then follows an assessment of recent empirical work on lay‐people's attitudes about crash algorithms relevant to the ethical issue of crash optimization. Finally, the article discusses what traditional ethical theories such as utilitarianism, Kantianism, virtue ethics, and contractualism imply about how cars should handle crash scenarios.
''Further information''


It might seem like a good idea to always hand over control to a human driver in any accident scenario. However, typical human reaction‐times are too slow for this to always be a good idea (Hevelke & Nida‐Rümelin, 2015). Jason Millar argues that a person's car should function as a “proxy” for their ethical outlook. People should therefore be able to choose their own ethics settings (Millar, 2014; see also Sandberg & Bradshaw‐Martin, 2013). Similarly, Giuseppe Contissa and colleagues argue that self‐driving cars should be equipped with an “ethical knob,” so that whoever is currently using the car can set it to their preferred settings (Contissa, Lagioia, & Sartor, 2017). Jan Gogoll and Julian Müller, in contrast, argue that we all have self‐interested reasons to want everyone's cars to be programmed according to the same settings (Gogoll & Müller, 2017). One advantage of giving people a certain degree of choice here is that this might make it easier to hold them responsible for any bad outcomes that crashes involving their vehicles might give rise to (Sandberg & Bradshaw‐Martin, 2013; cf. Lin, 2014).
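The "ethical knob" idea can be sketched as a user-selectable harm-scoring rule. The setting names, scoring functions and harm counts below are invented for the example; the cited paper only proposes the general idea of a user-set preference:

```python
# Sketch of an "ethical knob": the user selects how the car weighs harm
# to its passengers against harm to others in an unavoidable crash.
# Setting names, scoring rules and numbers are illustrative only.

SETTINGS = {
    "egoistic":   lambda passengers, others: passengers,           # protect passengers
    "impartial":  lambda passengers, others: passengers + others,  # minimize total harm
    "altruistic": lambda passengers, others: others,               # protect others
}

def choose_trajectory(knob: str, options: list) -> dict:
    """Pick the crash option with the lowest harm score under the chosen setting."""
    score = SETTINGS[knob]
    return min(options, key=lambda o: score(o["passengers_harmed"], o["others_harmed"]))

options = [
    {"name": "swerve",   "passengers_harmed": 2, "others_harmed": 0},
    {"name": "straight", "passengers_harmed": 0, "others_harmed": 1},
]
print(choose_trajectory("egoistic", options)["name"])    # straight
print(choose_trajectory("altruistic", options)["name"])  # swerve
```

Gogoll and Müller's objection maps directly onto this sketch: they would mandate a single, shared entry of the SETTINGS table for everyone rather than a per-user knob.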
If you would like further information about this study, or in case of complaints, you can contact Roel van Gool (roel.vangool@gmail.com).


One of the questions this raises is whether the vast literature on the trolley problem might be a useful source of ideas about how to deal with the ethics of crashing self‐driving cars. Together with Jilles Smids, I have put forward three reasons for being skeptical about relying very heavily on the trolley problem literature here (Nyholm & Smids, 2016). Firstly, in the trolley literature, we are typically asked to imagine that the only morally relevant factors are a very small set of factors; any bigger and more complex sets of considerations are imagined away. Secondly, in most trolley discussions, we are asked to set all questions of moral and legal responsibility aside, and to focus only on the choice between the one and the five. In actual traffic ethics, we cannot ignore questions about responsibility. Thirdly, in trolley discussions, a fully deterministic scenario is imagined: it is assumed that we know with certainty what the outcomes of our available choices would be. In contrast, when we are prospectively programming self‐driving cars for how to deal with accident scenarios, we do not know what scenarios they will face. We must make risk‐assessments (Nyholm & Smids, 2016).
''Consent to research participation''
Empirical ethics: surveyed lay-people tend to approve of cars programmed to minimize overall harm. However, when surveyed about what kinds of cars they themselves would want to use, people tend to favor cars that would save them in an accident scenario. People appear to have inconsistent or paradoxical attitudes: many want others to have harm‐minimizing cars, while themselves wanting cars that would favor them.


“Top‐down” approach: we can consider what utilitarians (or consequentialists more broadly), Kantians (or deontologists more broadly), virtue ethicists, or contractualists would recommend regarding this topic. Utilitarian ethics is about maximizing overall happiness while minimizing overall suffering. Kantian ethics is about adopting a set of basic principles (“maxims”) fit to serve as universal laws, in accordance with which all are treated as ends‐in‐themselves and never as mere means. Virtue ethics is about cultivating and then fully realizing a set of basic virtues and excellences. Contractualist ethics is about formulating guidelines people would be willing to adopt as a shared set of rules, based on nonmoral or self‐interested reasons, in a hypothetical scenario where they would be making an unforced agreement about how to live together. A utilitarian would be mindful of the fact that people might be scared of taking rides in “utilitarian” cars, instead preferring cars programmed to prioritize their passengers. The lesson from Kantian ethics might be that we should choose rules we would be willing to have as universal laws applying equally to all, so as to make everything fair and not give some people an unjustified advantage in crash‐scenarios. It is hard to come up with any virtue ethical ideas about how self‐driving cars should crash (cf. Gurney, 2016), but virtue ethics might help when we think about the ethics of automated driving more generally. Perhaps a lesson from a virtue ethical perspective is that we should try to design and program cars in ways that help to make people act carefully and responsibly when they use self‐driving cars.
By clicking 'Next' below, you indicate that you have understood this document and the procedure, and that you voluntarily consent to participate in this study by the above-mentioned students of Eindhoven University of Technology.




'''The Ethics of Accident-Algorithms for Self-Driving Cars: an Applied Trolley Problem?'''
'''Demographics'''


We identify three important ways in which the ethics of accident-algorithms for self-driving cars and the philosophy of the trolley problem differ from each other. These concern: (i) the basic decision-making situation faced by those who decide how self-driving cars should be programmed to deal with accidents; (ii) moral and legal responsibility; and (iii) decision-making in the face of risks and uncertainty.
What is your age? (Open question)


According to Frances Kamm, the basic philosophical problem is this: why are certain people, using certain methods, morally permitted to kill a smaller number of people to save a greater number, whereas others, using other methods, are not morally permitted to kill the same smaller number to save the same greater number of people? (Kamm 2015)
What is your gender? (Male, Female, Other)
The morally relevant decisions are prospective decisions, or contingency-planning, on the part of human beings. In contrast, in the trolley cases, a person is imagined to be in the situation as it is happening, making a split-second decision. This is unlike the prospective decision-making, or contingency-planning, we need to engage in when we think about how autonomous cars should be programmed to respond to the different types of scenarios we think may arise. The decision-making about self-driving cars is more realistically represented as being made by multiple stakeholders – for example, ordinary citizens, lawyers, ethicists, engineers, risk-assessment experts, car manufacturers, etc. These stakeholders need to negotiate a mutually agreed-upon solution. In the one case, the morally relevant decision-making is done by multiple stakeholders, who are making a prospective decision about how a certain kind of technology should be programmed to respond to situations it might encounter, and there are no limits on what considerations, or how many considerations, might be brought to bear on this decision. In the other case, the morally relevant decision-making is done by a single agent who is responding to the immediate situation he or she is facing, and only a very limited number of considerations are taken into account.


Responsibility: Suppose, for example, there is a collision between an autonomous car and a conventional car, and though nobody dies, people in both cars are seriously injured. This will surely not only be followed by legal proceedings. It will also naturally – and sensibly – lead to a debate about who is morally responsible for what occurred. Forward-looking responsibility is the responsibility that people can have to try to shape what happens in the near or distant future in certain ways. Backward-looking responsibility is the responsibility that people can have for what has happened in the past, either because of what they have done or what they have allowed to happen (Van de Poel 2011). Applied to risk-management and the choice of accident-algorithms for self-driving cars, both kinds of responsibility are highly relevant.


Uncertainties: the self-driving car cannot acquire certain knowledge about the truck's trajectory, its speed at the time of collision, or its actual weight. Second, focusing on the self-driving car itself: in order to calculate the optimal trajectory, the self-driving car needs (among other things) perfect knowledge of the state of the road, since any slipperiness of the road limits its maximal deceleration. Finally, turning to the elderly pedestrian, we can again easily identify a number of sources of uncertainty, even when using facial recognition software.
What is your highest completed level of education?


- No education / incomplete primary education


'''Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis'''
- Secondary school diploma


Autonomous cars raise legal as well as moral questions. Patrick Lin is concerned that any security gain will constitute a trade-off with human lives. The second question is whether it would be morally acceptable to place liability on the user, based on a duty to pay attention to the road and traffic and to intervene when necessary to avoid accidents. This should depend on whether or not the driver would ever have had a chance to intervene.
- Secondary vocational education (MBO)
In this article, two options are discussed: a driver with a duty to intervene, or a driver with no duty (and thus no control). For the first option: if the driver never had a real chance of intervening, he should not be held responsible. However, this holds only for the new cars, and they would still not be accessible to the blind, etc.
For the second option, where the driver has no control, it makes more sense to hold users accountable collectively, for example through some kind of tax or insurance.
Manufacturers should not be freed of their liability completely (take the Ford Pinto case as an example).


- Higher professional or university education without a degree (HBO/WO)


'''Ethical decision making during automated vehicle crashes'''
- Bachelor's degree (HBO/WO)


Three arguments were made in this paper: automated vehicles will almost certainly crash, even in ideal conditions; an automated vehicle’s decisions preceding certain crashes will have a moral component; and there is no obvious way to effectively encode human morality in software.
- Master's degree (HBO/WO)
A three-phase strategy for developing and regulating moral behavior in automated vehicles was proposed, to be implemented as technology progresses. The first phase is a rationalistic moral system for automated vehicles that will take action to minimize the impact of a crash based on generally agreed upon principles, e.g. injuries are preferable to fatalities. The second phase introduces machine learning techniques to study human decisions across a range of real-world and simulated crash scenarios to develop similar values. The rules from the first approach remain in place as behavioral boundaries. The final phase requires an automated vehicle to express its decisions using natural language, so that its highly complex and potentially incomprehensible-to-humans logic may be understood and corrected.
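The first, rationalistic phase can be sketched as a lexicographic ranking of predicted outcomes, encoding agreed principles such as "injuries are preferable to fatalities". The outcome categories and figures are invented for the illustration:

```python
# Sketch of phase 1: a rule-based system ranks predicted crash outcomes
# lexicographically (fatalities, then injuries, then property damage),
# so any number of injuries is preferred over a single fatality.
# The outcome figures are illustrative only.

def severity_key(outcome: dict) -> tuple:
    # Tuples compare element by element, giving the lexicographic order.
    return (outcome["fatalities"], outcome["injuries"], outcome["property_damage"])

def least_harmful(outcomes: list) -> dict:
    return min(outcomes, key=severity_key)

outcomes = [
    {"action": "brake",  "fatalities": 1, "injuries": 0, "property_damage": 0},
    {"action": "swerve", "fatalities": 0, "injuries": 3, "property_damage": 2},
]
print(least_harmful(outcomes)["action"])  # swerve
```

In the paper's later phases, a learned component would refine choices within these rankings, while the phase-1 ordering stays in place as a behavioral boundary.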


- Doctorate (PhD)


'''The social dilemma of autonomous vehicles'''


When it becomes possible to program decision-making based on moral principles into machines, will self-interest or the public good predominate? In a series of surveys, Bonnefon et al. found that even though participants approve of autonomous vehicles that might sacrifice passengers to save others, respondents would prefer not to ride in such vehicles (see the Perspective by Greene). Respondents would also not approve regulations mandating self-sacrifice, and such regulations would make them less willing to buy an autonomous vehicle.
Do you have a car driving licence? (Yes/No)


How often do you use a car?


'''The truth about ‘self-driving’ cars'''
- (Almost) every day
They are coming, but not in the way you may have been led to think. Self-driving cars have many issues: taking safe turns, handling changing road surfaces, coping with snow and ice, and avoiding traffic cops, crossing guards and emergency vehicles. And automatic stopping for pedestrians may make people rather walk or take the subway.
We have very unrealistic expectations of self-driving cars. They will not happen the way you have been told.


[[File:Sam.png|400 px|]]
- Weekly


We are currently only arriving at level 3 cars. The CEO of Nissan said fully automated (level 5) cars would be on the road by 2020. This is not true; level 4 cars may arrive in the next decade. Defining automated driving is much more complex than we think. Despite the popular perception, human drivers are remarkably capable of avoiding crashes. Mind how often your laptop freezes or is slow: comparable software faults will inevitably lead to crashes, so there is a major software problem.
- Monthly


Software on aircraft is much less complex, since aircraft have to deal with fewer obstacles and other vehicles. The testing of automated cars will also pose many problems: statistically, a lot of people will have to be subjected to crashes over a long period of time. There is also a boundary money-wise, since the cars must stay affordable for the public.
- Yearly
Some people think AI will give us self-driving cars. The problem with that is that it is non-deterministic: two cars with the same assembly may, after a year, have automation systems with different behaviour. It is out of our control.


According to the writer, fully automated cars will not be here until 2075. Level 3 cars have the problem of the driver zoning out. This problem is so hard that some car manufacturers will not even try level 3, so outside of traffic-jam assistants, level 3 will probably never happen. Level 4 will happen eventually, but only on certain parts of roads and in certain weather conditions. These scenarios might not sound as futuristic as having your own personal electronic chauffeur, but they have the benefit of being possible, and soon.
- Never




'''Transitioning to driverless cars'''
'''User perspective'''


Despite some nuances, the future looks mostly bright. The questions are how to get there, and what the transition to a full system of driverless cars will look like. A lot of the discussion so far has focused on insurance and ethical issues. Who is responsible in case of accidents? If the computer has to choose a victim in a collision, who will it be, its own passenger or a passenger in another car? These questions are interesting, but it is hard to imagine they will be major stumbling blocks. New technologies have brought new risks for many years, and ways have been found to spread those risks and define new forms of protection and liability. The ethical question probably makes for interesting debates in an introduction to ethics class at a university, but it is unlikely to have much practical relevance.
How familiar are you with self-driving cars?
Driverless cars will be much safer than cars are now.


A good case can be made that the key transitional problems will be instead about the political economy of the regulation of driverless cars and the cohabitation between driverless cars and cars driven by human beings.
''Unfamiliar, Somewhat unfamiliar, Somewhat familiar, Familiar''
For car producers or would-be car producers, two strategies are possible. The first is incremental and consists of making cars gradually less reliant on drivers; that has been the strategy of most incumbent car producers. The incremental strategy presents one major problem, however: partially driverless cars may be safer, but the true time-saving benefits of driverless cars will occur only when cars become completely driverless. In this scenario, the transition is likely to be extremely long, and it is unclear how the last step, getting rid of the wheel, will take place.


The alternative strategy is rupture: the direct development of cars without a steering wheel. That is the Google, Inc. strategy. It is an appealing but difficult proposition on several counts, as it will require maximum software sophistication right from the start. If anything, processes will get easier as more cars become driverless, but some technical issues seem extremely tricky to resolve.
Incumbent car manufacturers that are betting on incremental change, not on cars without wheels right from the start, will probably do everything they can to prevent fully driverless cars from being able to operate.


Realizing that its radical innovation will be a hard sell, Google appears to want to make it even more radical. If Google cars cannot operate in existing cities, perhaps new cities need to be created for them. That probably sounds like a mad idea to many, but history teaches us that it may not be as crazy as it sounds. What was possibly the first suburb of America, the Main Line of Philadelphia, Pennsylvania, was developed by rail entrepreneurs who realized that developing suburbs was much more profitable than operating railways.
How likely do you think it is that the following benefits will occur with the use of fully self-driving cars?


''Very unlikely, Somewhat unlikely, No opinion/neutral, Somewhat likely, Very likely''


'''Driverless Cars and the City: Sharing Cars, Not Rides'''
- Fewer accidents


The world of driverless cars heralds revolutionary changes, but for cities (metropolitan areas) the process will be evolutionary. No “Big Bang” will happen, but it will slowly evolve. Driverless cars will not significantly impact urban form, but will expand opportunity and quality of life for the disabled and other people who are unable to drive.
- Reduced severity of accidents


'''Who’s at the wheel: Driverless cars and transport policy'''
- Less traffic congestion
Many of the claims for the benefits of driverless technologies rely on the complete transformation of the existing vehicle fleet. But the transition will not be smooth or uniform: winners and losers in the competition between the different interest groups will depend on many factors.


Freeways are likely to be the first spaces in which the new vehicles will be able to operate. In any case, problems of congestion and competition for space at any popular destination will not be resolved. The ambition is to allow cars, bikes and pedestrians to share road space much more safely than they do today, with the effect that more people will choose not to drive. But, if a driverless car or bus will never hit a jaywalker, what will stop pedestrians and cyclists from simply using the street as they please?
- Shorter journeys


Some analysts are even predicting that the new vehicles will be slower than conventional driving, partly because the current balance of fear will be upset. While this might be attractive to cyclists, will it affect the marketability of Google’s new products? With huge reserves of cash and consequent lobbying power, Google and its ilk will be in a strong position to demand concessions from governments and road authorities. You can just imagine the pitch: we can save you billions on public transport operations, but we need fences to keep bikes and pedestrians out of the way of our vehicles in busy urban centres. Lost in the enthusiasm for the new, is the simple reality of the limited availability of urban space. New technologies of driverless trains may reduce costs and allow us to improve the quality of the service, but only if that is the focus of investment and innovation.
- Lower vehicle emissions


I would urge readers of ReNew to turn their minds to the real alternative technologies we need in urban transport. Rather than follow the individualist model which directs our attention to the technology of the vehicle, let’s turn our attention to the ‘technology of the network’. How can we build on the insights of the Europeans and Canadians and use the potentials of IT and electronics to build better collective transport systems that connect all of us to the life of the city without consuming all the space we need to live and grow.
- Better fuel economy


- Lower insurance rates


'''Driverless Highways: Creating Cars That Talk to the Roads'''


The art of road building has been improving since the Roman Empire, yet today's highways remain little more than dumb surfaces, with no data flowing between vehicles and the road.
'''Safety'''
China already restricts the number of vehicles that can be licensed in Shanghai and Beijing. Going driverless brings some exciting new options.
Driverless cars will be a very disruptive technology. To compensate for the loss of a driver, vehicles will need to become more aware of their surroundings. With cameras, the car-to-road relationship becomes symbiotic, far different from the human-to-road relationship, which is largely emotion-based. An intelligent car coupled with an intelligent road is a powerful combination.
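The car-road data flow envisioned here can be sketched as a hypothetical message exchange. The message fields and the advisory rule are invented for the example; real vehicle-to-infrastructure standards define their own formats:

```python
from dataclasses import dataclass

# Hypothetical sketch of road-to-vehicle messaging: an "intelligent road"
# broadcasts local conditions and the car adapts. All field names and the
# advisory rule are invented for illustration.

@dataclass
class RoadMessage:
    segment_id: str
    speed_limit_ms: float
    surface_friction: float  # 1.0 = dry asphalt, lower = more slippery

def advised_speed(msg: RoadMessage) -> float:
    # Scale the posted limit down on low-friction surfaces.
    return msg.speed_limit_ms * min(1.0, msg.surface_friction)

icy = RoadMessage("A2-km14", speed_limit_ms=33.3, surface_friction=0.4)
print(round(advised_speed(icy), 2))  # well below the nominal limit on ice
```

Even a feed this simple goes beyond what a "dumb surface" offers a human driver, which is the gap the chapter is pointing at.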


- Lane compression
How concerned are you about the following issues related to fully self-driving vehicles?


- Distance compression
''(Not concerned, Somewhat concerned, Neutral, Concerned, Very concerned)''


- Time compression
- Driving in a vehicle with self-driving technology


On-demand transportation: all car parts and components need to be designed to be more durable and longer-lasting. The focus shifts from driver to rider: fancier dashboards, movies, music and massage interfaces. China does not need more cars; it needs more transportation.
- Safety consequences of equipment or system failure


Conclusion:
- Legal liability of drivers/owners in the event of accidents
We all love to drive, but humans are the inconsistent variable in this demanding area of responsibility. Driving requires constant vigilance, constant alertness, and constant involvement. Once we take the driver out of the equation, however, we solve far more problems than the wasted time and energy needed to pilot the vehicle. But vehicle design is only part of the equation: without reimagining the way we design and maintain highways, driverless cars will achieve only a fraction of their true potential.


- System security (against hackers)


'''Users’ resistance towards radical innovations: The case of the self-driving car'''
- Data privacy (location and destination)


The advent of self-driving cars could eliminate the driver from the driving equation, with the potential to substantially improve safety, time and fuel efficiency, as well as mobility in general. The introduction of such a radically new technology is surrounded by a high degree of uncertainty, and possibly not all stakeholders would welcome the change. As a result, the widespread acceptance, and hence adoption, of this new technology is far from certain, and will thus be analyzed comprehensively in this paper. Given that it will be the end-consumers (the actual drivers) who eventually decide whether self-driving cars successfully materialize on the mass market, the lack of wider empirical evidence for the user perspective forms the rationale for our research.
- Interaction with non-self-driving vehicles


User resistance to change has been found to be a crucial cause for many implementation problems. The assumption that a possibly disruptive innovation such as the self-driving car could lead to major resistance on behalf of the public is based on the fact that people regularly react with caution and wariness to ‘new things’ and ‘change’ or, in extreme cases, even fight them.
- Interaction with pedestrians and cyclists


Possible causes of resistance: Regarding the desired level of automation, Khan, Bacchus, and Erwin (2012, p. 88) hypothesize that “it is likely that a significant percentage of drivers may not be comfortable with full autonomous driving.”
- Learning to use self-driving vehicles


1.  People might experience driving as “adventurous, thrilling and pleasurable” (Steg, 2005, p. 148). Mokhtarian and Salomon (2001, p. 695) argue that travel “is not only derived demand”, but may be “desired for its own sake”. While self-driving cars might pose significant advantages for many segments of the population, driving enthusiasts might not be among the people adopting this new technology.
- System performance in bad weather


2.  Similarly, analyzing reasons why people do not use public transportation, Böhm et al. (2006, p. 4) make a distinction between “moving” and “being moved”, highlighting the latter as “dependent”. This poses the question whether self-driving cars could be seen as providing the ultimate level of autonomy, as people are free to engage in any activity once relieved from the task of driving or, psychologically, making people dependent on technology.
- Self-driving vehicles confused by unpredictable situations


3.  Further, as people regularly view their cars as source of power and similar attributes, “it is uncertain whether this close identification of personal autonomy with a person’s vehicle may be different with regard to use of autonomous vehicles” (Glancy, 2012, p. 1188).
- Driving ability of the self-driving vehicle compared to human driving ability


4.  Other users might resist self-driving technology not because they value the driving task but because they simply do not trust “a machine making decisions for them” (Rupp & King, 2010, p. 3).
- Riding in a vehicle in which the driver cannot intervene


5.  There are also privacy issues.


6.  Another potential cause for barriers towards self-driving technology is the risk of a “misbehaving computer system” (Douma & Palodichuk, 2012, p. 1164). With autonomous vehicles, criminals or terrorists might be able to hack into and use their cars for illegal purposes such as drug trafficking or, even worse, terroristic attacks (Douma & Palodichuk, 2012).
'''Ethical setting'''


7.  Further, the unavoidable rate of failure (or crashes), no matter how small, could foster initial mistrust, especially as people tend to underestimate the safety of technology while putting excessive trust in human capabilities such as their own driving skills.
In some unavoidable accidents, the car will have to choose between different human lives, for example between those of the driver and those of pedestrians. It would even be possible to add a setting to a self-driving car that determines which choice the car should make in the event of an accident. The questions below concern this ethical setting.
 
Results: This is an exploratory study, since scientific research about self-driving cars is still in its infancy.
With which ethical setting would you most prefer to see self-driving cars on the road? Rank from favorite (1) to least favorite (5):
A non-probability convenience sampling method was applied. Data were collected over a two-week time frame in July 2015 using a quantitative self-completion online questionnaire.
 
Discussion: There were considerable differences between sub-groups, with older respondents being more worried about self-driving cars than younger respondents, females having more concerns than males, and rural respondents valuing self-driving cars less than urban participants. Surprisingly, people who used a car more often tended to be less open. Correspondingly, and across all sub-groups, the most pronounced desire of respondents was the possibility to manually take over control of the driving task whenever wanted, which entails the necessity to keep the steering wheel. It is thus seen as crucial to include an overriding function in the initial versions of self-driving cars. It stood out that the more participants knew about self-driving cars, the more positive their attitude towards these vehicles tended to be. Thus, a lack of knowledge about the functioning of the product will most likely lead to non-adoption.
1. The car should always prioritize the life and health of the occupant(s) over those of bystander(s).
 
2. The car should always prioritize the life and health of the bystander(s) over those of the occupant(s).


3. The car should always choose to inflict the least amount of harm on the fewest people, whether they are bystanders or occupants.


'''Acceptance of Self-driving Cars'''
4. The car should not make an explicit choice between human lives, and should therefore not intervene in an unavoidable accident. As a consequence, the victim is effectively random.


One study (Payre, Cestac, & Delhomme, 2014) reported that driving while impaired from alcohol, drugs, or medications was a major dimension of acceptance of self-driving vehicles, and other studies have suggested that people expect to be able to engage in a wide variety of secondary tasks in self-driving cars (Kyriakidis, Happee, & De Winter, 2014; Pettersson & Karlsson, 2015).
5. The choice the car makes should be based on what the majority of road users want.
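The ranking question above yields ordinal preference data rather than a single score. A minimal sketch of how such rankings could be aggregated, assuming a simple Borda count and invented respondent data (not actual survey results):

```python
# Hypothetical aggregation of the ranking question above via a Borda count:
# rank 1 (favorite) earns 4 points, rank 5 (least favorite) earns 0.

SETTINGS = [
    "prioritize occupants",   # option 1
    "prioritize bystanders",  # option 2
    "minimize total harm",    # option 3
    "no intervention",        # option 4
    "majority preference",    # option 5
]

def borda_scores(rankings):
    """rankings: list of permutations of option indices, favorite first."""
    scores = {name: 0 for name in SETTINGS}
    for ranking in rankings:
        for position, option in enumerate(ranking):
            scores[SETTINGS[option]] += len(SETTINGS) - 1 - position
    return scores

# Three made-up respondents, each ranking options 0-4 favorite-first.
example = [
    [2, 0, 4, 3, 1],
    [2, 4, 0, 1, 3],
    [0, 2, 4, 3, 1],
]
scores = borda_scores(example)
winner = max(scores, key=scores.get)  # the collectively preferred setting
```

A Borda count is only one possible aggregation; a plurality count over first choices would be an equally simple alternative.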


These emerging expectations may reflect overconfidence in our ability to automate the driving task. The implementation of autonomous vehicles faces considerable unresolved challenges. Unless automation of driving can be implemented with perfect or near-perfect reliability—an outcome that seems implausible, especially during anticipated transitional phases of deployment during which self-driving cars will share roads with traditional vehicles (Sivak & Schoettle, 2015)—the human likely will retain a supervisory role during automated driving. Human operators of autonomous vehicles seem to be in danger of being allocated an especially mundane function: to continuously maintain awareness of the driving scenario in anticipation of very infrequent occasions when human intervention will be necessary. Even if appropriate interfaces can be designed to keep drivers in the loop, it remains unclear whether consumers would accept an automated vehicle that could perform all driving tasks, did perform most driving tasks, yet demanded a high amount of monitoring workload.


Highly idealized portrayals have begun to foster expectations that self-driving cars will require little or no human intervention and will create a windfall of work, leisure, or social time during transit. Initial deployment of self-driving cars could be slowed or harmed if the technology is received with disappointment. Trust in automation is influenced by expectations and attitudes that develop before a person uses a system (Hoff & Bashir, 2015), thus it will be important to understand acceptance before the arrival of self-driving cars on markets (see Payre et al., 2014). To the extent that idealized portrayals of vehicle automation already have begun to influence acceptance, they may also be encouraging unrealistic expectations about automation performance that could be counterproductive to acceptance in the long run.
''Note to ourselves: from top to bottom they represent the following ethical theories: egoism, virtue ethics (kinda), utilitarianism, deontology, contractualism.''


In this experiment, an online sample of participants read either a realistic or an idealized description of a close friend or family member’s experiences during the first six months of ownership of a self-driving car. The realistic vignette emphasized that the driver felt the need to monitor the vehicle during automated operations and occasionally needed to resume manual control to prevent accidents. The idealistic scenario described a vehicle with perfect reliability that did not require human monitoring or intervention and had won the driver’s trust. A novel, 24-item scale assessed acceptance of self-driving cars in both vignette conditions and a control condition. The idealized portrayal was hypothesized to increase overall acceptance of self-driving cars.


Participants completed an instrument created for this experiment, the Self-driving Car Acceptance Scale (SCAS). The SCAS featured 24 statements written to assess the extent to which participants were accepting of self-driving cars. Responses were made on a 7-point Likert scale with the anchors “strongly disagree” and “strongly agree.”
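Scoring an instrument like this usually amounts to reverse-coding negatively worded items and averaging. A minimal sketch, with hypothetical items and reverse-keyed positions rather than the actual SCAS items:

```python
# Hypothetical scoring of a Likert-scale acceptance instrument such as the SCAS.
# Responses are coded 1 ("strongly disagree") to 7 ("strongly agree");
# negatively worded items are reverse-coded before averaging.

REVERSE_KEYED = {2, 5}  # hypothetical indices of negatively worded items

def acceptance_score(responses, scale_max=7):
    """Mean acceptance score for one participant's list of 1..scale_max responses."""
    adjusted = [
        (scale_max + 1 - r) if i in REVERSE_KEYED else r
        for i, r in enumerate(responses)
    ]
    return sum(adjusted) / len(adjusted)

participant = [6, 7, 2, 5, 6, 1, 7, 6]  # 8 made-up items for brevity
score = acceptance_score(participant)
```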
To what extent do you agree with the following statements:
People may be more accepting of self-driving cars under idealized rather than (arguably more) realistic scenarios during the initial deployment of the technology. The effect of the idealized depiction was small, but it suggested that idealized descriptions may be able to affect acceptance of self-driving cars before people interact with them.


''Strongly disagree, Disagree, No opinion/neutral, Agree, Strongly agree''


'''Self-Driving Car Acceptance and the Role of Ethics'''
- Self-driving cars should not be sold with a user-adjustable ethical setting. Instead, this setting should be determined by, for example, the government or the manufacturer.


- I would rather buy a self-driving car with an adjustable ethical setting than a self-driving car with a fixed ethical setting.
Research question: In the scope of unavoidable accidents, what is the effect of different ethical frameworks governing self-driving car decision-making on their acceptance? To exemplify the impact of ethics on the acceptance of self-driving cars, one has to consider the situation of an imminent fatal accident involving pedestrians and car passengers. One could argue that innocent pedestrians ought to be spared, and hence the car passengers should bear the risk of being fatally injured. This would most probably be seen positively by the majority of the people in a city, especially the non-drivers. However, the question that is raised is whether anyone would then buy such a car knowing s/he is in high danger; probably not. Subsequently, that may result in a decrease in the sales of self-driving cars, and they will never reach a critical mass. Hence, the envisioned benefits coupled with their existence (e.g., overall reduction of accidents) would also not materialize as expected.


The ethics embedded in the decision-making of a self-driving car, especially in the case of unavoidable accidents, would most probably impact their acceptance by the public. Also, the nature of the ethics, i.e., the ethical framework utilized, may play a role, something that has not been sufficiently investigated.
- If I were to use a self-driving car, the specific ethical setting would matter to me.
In this work quantitative positivist research is carried out, and the empirical data is collected via a questionnaire. With respect to the process followed, first, the ethical frameworks are selected and described. Ethical frameworks are posed in the unavoidable accident context and a model that hypothesizes their link to the acceptance of self-driving cars is proposed. Subsequently, a survey with questions that capture the identified factors (ethical frameworks) is constructed and empirical data is collected. The sampling frame is general; the initial scope is university students (at Master’s level) as they represent a good mix of technology savviness and will be able to easily understand the context in which self-driving cars will have to operate. The following frameworks were selected as representative: Utilitarianism, Deontology, Relativism, Absolutism (monism), and Pluralism.
Utilitarianism is a normative ethical framework that considers as the best action, the one that maximizes a utility function by considering the positive and negative consequences of the choices pertaining to the decision.


Deontology is a normative ethical framework which holds that there are rules with an absolute quality to them, meaning that they cannot be overridden. As such, deontologists reject the idea that what matters are the consequences of an action, and hold instead that what matters is the kind of action taken.
Ethical Relativism is a meta-ethical framework where it is argued that “all norms, values, and approaches are valid only relative to (i.e., within the domain of) a given culture or group of people”. Hence, in this framework, it is proposed that a society’s practices can be judged only by its own moral standards.
Ethical absolutism or ethical monism is a meta-ethical framework that stands at the antipodal point from ethical relativism. This framework, also referred to as the “doctrine of unity”, can be described as follows: “There are universally valid moral rules, norms, beliefs, practices, etc. [. . . that] define what is right and good for all at all times and in all places – those that differ are wrong”.


Ethical pluralism is a meta-ethical framework that rejects absolutism (that there is only one correct moral truth) and relativism (that there is no correct moral truth) as unsatisfactory and proposes that there is a plurality of moral truths. It is sometimes referred to as “doctrine of multiplicity”. The ethical pluralist argues that indeed there are universal values (as indicated in absolutism) however, instead of considering that there is only a single set always applicable, it considers that there are many which can be interpreted, understood and applied in diverse contexts (as indicated in ethical relativism).
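The practical difference between the two normative frameworks above can be made concrete with a toy decision rule; the scenario, harm values, and rules below are invented for illustration, not taken from the surveyed paper:

```python
# Toy comparison of two decision policies in an unavoidable-accident scenario.
# Each option maps to the expected harm it causes; all values are invented.

options = {
    "swerve into barrier": {"occupants": 3, "pedestrians": 0},
    "brake in lane":       {"occupants": 0, "pedestrians": 2},
}

def utilitarian_choice(options):
    """Pick the action minimizing total expected harm (maximizing utility)."""
    return min(options, key=lambda a: sum(options[a].values()))

def deontological_choice(options, protected="pedestrians"):
    """Apply an absolute rule: never actively endanger the protected group."""
    permitted = [a for a in options if options[a][protected] == 0]
    return permitted[0] if permitted else None

u = utilitarian_choice(options)    # minimizes summed harm
d = deontological_choice(options)  # obeys the rule regardless of total harm
```

With these invented numbers the two policies disagree: the utilitarian rule accepts the smaller total harm, while the rule-based policy spares the protected group even at greater total cost, which is exactly the kind of divergence the survey questions probe.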
'''Responsibility'''


A closer look at Utilitarianism results shown in Figure 3 reveals that most people consider that an assessment of some kind ought to be done by the self-driving car and be integrated into its decision algorithms.
To what extent would you use the fully self-driving vehicle given the following statements?


Deontology implies that there is an expectation that the self-driving cars carry out their duties with good intentions independent of consequences. As seen in Figure 4, the prevalent view is that cars should treat all people on an equal basis (hence not assigning values to individual people as utilitarianism suggests), as well as trying to protect the innocent pedestrians.
''Very unlikely, Somewhat unlikely, No opinion/neutral, Somewhat likely, Very likely''


Absolutism (monism) propagates the existence of global moral values, norms, beliefs, and practices that are praised by those who agree and condemned by those who disagree. Such views propagate group beliefs and may create tensions in society, as shown in the wide spread of replies to question A4 in Figure 5, on whether life is sacred and whether knowingly killing people by a machine would be acceptable. As shown in Figure 5 there is a strong positioning that the car should have such ethics, and take life-and-death decisions independently of whether its owner agrees. This has several implications, as it would mean that self-driving cars would behave differently than their owners might wish, and raises concerns about whether cars that do so would actually be bought by people who disagree with their car’s decisions in critical situations.
1. Manufacturers are fully liable if a self-driving car causes an accident, even if this discourages them from producing self-driving cars.


Relativism affirms tolerance and is bound to culture, time, and society, which may ease the acceptance of decisions taken by self-driving cars in critical situations. As shown in Figure 6, people consider that the self-driving car ought to take such ethics into account in its decisions. Such considerations may reflect the diversity of cultures and philosophies found in the world, but may also create “deadlocks” where specific decisions of the self-driving car cannot be praised or condemned.
2. Manufacturers are partially liable, so that they do produce, but are always encouraged to correct errors.


Pluralism, propagating the plurality of moral truths, provides a balance among a highly heterogeneous world, tolerance, and basic human values such as human rights. Hence, ethical differences may be approached at a global scale. This is also reflected in the views captured in Figure 7, where a mix of aspects is shown, e.g., the owner’s or society’s moral views should be considered, while law and global ethical values ought also to be respected. Therefore, the pluralism framework is seen as a good candidate for decision-making in self-driving cars. However, due to the multiple perspectives that need to be incorporated, it is also highly complex, and hence not easy to realize.
3. I am personally liable if my self-driving car causes an accident, even though I cannot intervene myself (fully autonomous car).


Finally, the survey also measured some aspects of self-driving car acceptance, as shown in Figure 8, from which it is evident that there is a need for ethics to be embedded in self-driving cars. People seem to trust self-driving cars, and therefore they would opt to buy them once they are available, and may prefer them over normal (non-self-driving) ones. Overall there is a very strong view that society needs self-driving cars, as their benefits for a safer and more inclusive society cannot be overlooked.
4. I am personally liable if my self-driving car causes an accident, because I have the ability to intervene (semi-autonomous car).
The overall strong support for all frameworks means that there is no clear suggestion, at least from this research, that there should be a preference for a specific framework in self-driving cars, and no one-size-fits-all solution can be proposed. On the contrary, since all of them seem to have an impact, different parts of society and people may have different needs and preferences. One thing is clear: the ethical frameworks considered in this research need to be investigated in depth, not only qualitatively, but also with mass-scale quantitative surveys as part of the overall research priorities set for AI.


Future directions: It is high time to investigate in detail the ethical angle of issues that pertain to the acceptance of self-driving cars, especially from the diverse viewpoints of the multiple stakeholders involved in their lifecycle. As such, an intersectional analysis pertaining to law, society, economy, culture, etc. may be the proper way to move forward and tackle the issues raised in this work.
5. Everyone who owns a self-driving car is liable if a self-driving car causes an accident, through a mandatory insurance or tax.


Some challenges are:
- Will people adjust their road behaviour because of reliance on automation?


- If the ethics of the car conflict with the ethics of the buyer, will they actually buy/use the car?
For the full questionnaire, click the link:
https://forms.office.com/Pages/DesignPage.aspx?fragment=FormId%3DR_J9zM5gD0qddXBM9g78ZIQEJ0K6qk1Epl7wQE_GwFJUQzRFUEg3RFVEMFVFVDY4NFVMQVJaRUgxQi4u%26Token%3D7a2197128d054f1d9d81e3056e2eafde


- Is there bias in learning algorithms for self-driving cars, especially in regard to ethics?
=== Results ===


- Should all cars have the same ethical setting?
In this appendix only the tables that have the most relevant and obvious results are listed for clarity. For the full list of tables grouped per demographic subcategory, see: https://docs.google.com/document/d/1NU_mgnyudpMwVYj0RVxQWUABamYle2OQiNXvVHWjn-g/edit?usp=sharing


- How do we stop hackers from creating their own preferential ethical setting?


- How do we prevent a situation where a more expensive car comes with better ethical software?
''Table 1:''


- How do we tackle privacy concerns?
[[File:Concern.png|500 px|]]


- Who is liable for the ethical decisions of the car?


- How would two cars with different ethical settings negotiate their outcome?
''Table 2:''


[[File:data.png|500 px|]]


'''Differences in Acceptance of Self-driving Cars: A Survey of Perceptions and Attitudes'''


Introduction: There is a significant body of research around technology acceptance across various domains. Numerous studies have built on to earlier models such as the Technology Acceptance Model (TAM) [1] and the Diffusion of Innovations Theory [2]. In TAM, perceived usefulness and perceived ease-of-use are main factors that affect a user’s attitudes toward using technology, which then influences the user’s behavioral intentions and actual usage, as illustrated in Fig. 1. In the Diffusion of Innovations Theory, five characteristics – relative advantage, compatibility, complexity, trialability and observability – are the key factors that underlie adoption.
''Table 3:''


Age-related changes in physical and cognitive capabilities, however, can lead to declines in mobility and driving abilities [14, 15], leading many older adults to stop driving altogether. For this reason, they may be the primary beneficiaries of self-driving cars. Older adults, however, have knowledge of and experiences with technology that may differ from younger generations, which may cause them to perceive and accept self-driving cars differently.
[[File: car usage.png|500 px|]]


While research on technology adoption and transportation safety has begun to explore determinants of acceptance and age effects with regards to new automotive technologies, how different generations perceive and accept self-driving cars is not yet fully understood. In this study, a large-scale survey was conducted to investigate older adults’ perceptions of and attitudes toward self-driving cars, and how their perspectives differ from other generations.


Results: The following factors were significant predictors of self-driving car acceptance: perceived usefulness, affordability, social support, lifestyle fit and conceptual compatibility. Across ages, those who perceived self-driving cars to be more practical, affordable, accepted by peers, and compatible with their lifestyles and conceptual mental models were more interested in getting and using them. Furthermore, attitudinal interest in self-driving cars strongly predicted behavioral intentions to use them.
''Table 4:''


Age was negatively associated with perceptions, attitudes and behavioral intentions toward the acceptance and use of self-driving cars. Older participants perceived self-driving cars as significantly less useful and more difficult to use compared to younger participants. Older adults were also more likely to think that self-driving cars would be more expensive and more difficult to find where to purchase or access. Older adults indicated that they believed self-driving cars were less likely to be backed up with technical support, less likely to provide emotional benefits, less likely to be approved by their peers, less reliable, less likely to work with other technologies they have, and less likely to fit with their lifestyles and mental models, compared to younger participants. Strong inverse relationships with age were also found for overall level of interest in using a self-driving car and likelihood of purchasing one in the future, indicating that older adults are currently less interested in self-driving cars and less likely to use one when it becomes available. Millennials were most favorable toward the use of self-driving cars. The silent generation (born before 1945) said they were not likely to consider using a self-driving car in any case.
[[File:adjustable setting.png|500 px|]]


Across ages, however, participants indicated that they would be more likely to use a self-driving car if they were no longer able to drive and less likely to use one if they were capable of driving.


In addition to age, experience with technology in general was strongly associated with self-driving car acceptance. Participants who self-reported greater experience with technology in general and higher confidence in the use of new technologies were significantly more interested in self-driving cars and more likely to purchase one in the future. Those who self-reported being more knowledgeable of new technologies were significantly more likely to purchase a self-driving car in the future if they were no longer able to drive. The findings suggest that while self-driving car acceptance varies across generations, as shown in Table 4, age may have an indirect effect on acceptance through experience with technology in general. Additionally, current drivers and non-drivers showed minor differences in their attitudes toward using self-driving cars. Participants who did not have a valid driver’s license were significantly more likely to be interested in using a self-driving car than those who currently had a valid driver’s license. No significant interaction effects were observed between age and possession of a driver’s license.
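Associations like these are typically estimated by regressing interest or intention on the candidate predictors. A minimal sketch on simulated data (the variable names, effect sizes, and model are illustrative assumptions, not the study's actual analysis):

```python
import numpy as np

# Simulated survey data: does intention to use relate to perceived usefulness,
# lifestyle fit, and age? (Illustrative only; not the study's data.)
rng = np.random.default_rng(0)
n = 200
usefulness = rng.uniform(1, 7, n)
lifestyle_fit = rng.uniform(1, 7, n)
age = rng.uniform(18, 80, n)

# Generate intention with positive effects of the perception factors and a
# negative effect of age, mirroring the reported direction of the findings.
intention = (1.0 + 0.6 * usefulness + 0.4 * lifestyle_fit - 0.03 * age
             + rng.normal(0, 0.5, n))

# Ordinary least squares: solve X @ beta ~= intention.
X = np.column_stack([np.ones(n), usefulness, lifestyle_fit, age])
beta, *_ = np.linalg.lstsq(X, intention, rcond=None)
# beta[1] and beta[2] recover positive effects, beta[3] a negative one.
```

In practice a study like this would also report standard errors and p-values (e.g., via a statistics package), but the fitted signs already illustrate how "significant predictors" are read off such a model.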
''Table 5:''


[[File:familiar.png|500 px|]]


'''The Influence of Feelings While Driving Regular Cars on the Perception and Acceptance of Self-Driving Cars'''


Introduction: Negative emotions that driving may engender in some people have also been found to be connected to a greater likelihood of crashes. Removing human error from driving is one of the greatest potential benefits of self-driving cars, as driver error could be directly or indirectly responsible for as many as 94% of all traffic accidents. The rapidly growing population of older drivers may especially benefit from self-driving cars.
''Figure 1:''


Previous work has found that people’s risk and benefit perceptions as well as trust in the technology are related to its acceptance. In this research, we specifically examine how people’s feelings around driving traditional cars may affect their perceptions of risk and benefit of and trust in self-driving cars. Further, we investigate how these feelings, perceptions, and trust in turn influence people’s acceptance of the technology.
[[File:Screenshot_10.png|500 px]]


Laypeople often evaluate the risks and benefits of new technologies differently than experts, and their perceptions of risk are also shaped by their perceptions of benefits the technology may offer. For laypeople, risk perception tends to decrease when benefit perception increases, and vice versa.
The characteristics of the technology itself can be captured by two orthogonal dimensions: dread risk and unknown risk. Dread risks include people’s perceptions of the potential for lack of control, catastrophic outcomes, and fatalities. Unknown risks include perceived newness, lack of scientific knowledge, unobservable consequences, and delay of effects. Individual-level factors that affect people’s perceptions of risk include knowledge and affective associations. People’s levels of knowledge about a technology should affect the extent to which they understand both its risks and benefits.


As noted above, affect, in the form of a subtle feeling of positivity or negativity, can serve as a decision heuristic that people use in situations of uncertainty and limited knowledge, known as the affect heuristic. The basis of these feelings is often prior experiences or thoughts related to the decision at hand, but it could also be a less relevant emotional state such as current mood.
''Figure 2:''


The nature or valence of the affect plays a role in how it is weighed in judgments. In particular, people tend to attend to or weight negative information or emotions more heavily than positive ones when making evaluations. The affect heuristic also serves as one explanation for the inverse relationship between risk and benefit assessments. If people’s emotional responses are more positive, they tend to judge risks to be lower and benefits to be higher; the more negative people’s affective reactions are, the more likely they are to judge risks to be higher and benefits to be lower. People may be particularly more likely to rely on their affective reactions as a common source to generate both their risk and benefit evaluations when they lack expertise within a given domain.
[[File:Screenshot_6.png|500 px]]


The affect heuristic suggests that affect shapes people’s willingness to adopt new technologies to the extent that the technology is novel, its performance is uncertain, and its impacts are unknown. Perceived usefulness or the perceived potential benefits has been shown in some empirical work to be a more significant factor in explaining adoption than ease of use. Other factors that have been identified as significant for understanding technology adoption include the relevance of people’s previous experiences (including with the technology) and system reliability—the ability of the system to work without failure. Emotion is also a factor. Further, individual characteristics such as age, gender, lifestyle, and comfort levels with different technologies may also affect people’s willingness to adopt new technologies.


Studies have found that people’s degree of acceptance varies by individual characteristics, with younger, male, or more tech savvy people generally more interested in using self-driving cars than older, female, or less tech savvy people. People’s hesitations around the acceptance of automated vehicle technologies may also be tied to their feelings around driving itself, and many people report driving to be positive for them. For example, in a study that compared all levels of automation (from manual [fully human controlled] to fully automated), participants found manual driving the most enjoyable. Yielding control was a major barrier to adoption of self-driving cars among regular commuters (Howard & Dai, 2013). Because self-driving cars represent a fundamental change in the driving task, people’s current feelings about driving traditional vehicles may shape how they assess changes or alternatives to it.
''Figure 3:''


The present study focuses on how feelings experienced while driving influence risk and benefit perceptions as well as trust in self-driving cars and how, in turn, these perceptions affect the acceptance of these vehicles. We approached this question in an exploratory manner and formulated the following research question: How do feelings related to human-operated driving influence risk and benefit perceptions of, as well as trust in, self-driving cars?
[[File:Screenshot_7.png|500 px]]
Note: For participant details and the exact questions, see the paper itself.


Results: Higher risk perception was predicted by less experience with vehicle automation technologies, higher levels of positive affect (control), higher levels of negative affect experienced while driving, and being female. Higher benefit perception was related to having fewer years as a driver, greater self-reported knowledge of self-driving cars, more experience with vehicle automation technologies, lower levels of positive affect (control), higher levels of positive affect (enjoyment), higher levels of negative affect, and being male. Trust in self-driving cars was related to having fewer years as a driver, greater self-reported knowledge of self-driving cars, more experience with advanced vehicle technologies, no knowledge of any accidents involving a self-driving car, positive affect (enjoyment) experienced while driving, and being male.
''Figure 4:''


As for interest in using a self-driving car, risk perception, benefit perception and trust were all significant predictors, but benefit perception had the largest effect size among the three.
[[File:Screenshot_8.png|500 px]]


Discussion: Our results indicate that feelings experienced while driving regular cars inform people’s risk and benefit perceptions of as well as their trust in self-driving cars. We asked about people’s affective experiences driving traditional vehicles—not self-driving cars; nevertheless, people’s feelings about the more familiar driving of current vehicles carried over to their assessments of self-driving cars. Also, one’s attitudes about the status quo should inform perceptions of change to it.
People who experienced high levels of negative affect had both higher risk and higher benefit perceptions of self-driving cars. This is contrary to what we would expect from research on the affect heuristic. Because positive affect is associated with more automatic processing, people who have more positive associations with driving may also be less inclined to deliberate about potential risks associated with self-driving cars.


Our results further underscore the significance of benefit perception for understanding technology acceptance. As self-driving cars are still more conceptual than tangible, their usefulness may not be obvious to many, but so too may the risks of such technologies not be fully understood. As the technology continues to mature and becomes more widely adopted, it may be especially important to communicate to the public about its benefits and risks, so that communities can make better decisions about how they want to use and interact with the technology.
''Figure 5:''


[[File:Screenshot_9.png|500 px]]


Note that these figures have translated text in them. The original questions and responses are in Dutch and have been translated into English for the purposes of this report.


= Planning =


{| border=1 style="border-collapse: collapse; width: 80%; height: 14em;"
|-
! Week 5
| Send out survey || Contact professors|| Switch subtopics || Update the wiki-page || ''' Contact made '''
|-
|-
! Week 6
! Week 6
| Analysing survey || Make final report || Write conclusion/recommendation || Update the wiki-page || ''' Final report finished '''
| Analysing survey || Make final report || Write conclusion/discussion survey || Update the wiki-page || ''' Survey finished '''
|-
|-
! Week 7
! Week 7
| Begin filming the presentation || Edit the film for demonstration ||  || Update the wiki-page || ''' Film for demonstration finsihed '''
| Finish final report || Start making the presentation/powerpoint ||  || Update the wiki-page || ''' Presentation finished '''
|-
|-
! Week 8
! Week 8
| Peer review || Last preparations for demonstration || || Finalize the wiki-page || ''' Presentation/demonstration '''
| Peer review || Last preparations for presentation || Finish final report || Finalize the wiki-page || ''' Presentation and final report finished '''
|}
|}


Line 524: Line 792:
|-
|-
| Laura Smulders
| Laura Smulders
| 7
| 8.5
| Meetings [3h], Starting lecture [1h], Research
| Meetings [3h], Starting lecture [1h], Research [1h], 5 relevant references [2h], Start/discuss problem statement & objectives [1.5h]
|-
|-
| Sam Blauwhof
| Sam Blauwhof
| 7
| 8.5
| Meetings [3h], Starting lecture [1h], Research
| Meetings [3h], Starting lecture [1h], Research [1h], 5 relevant references [2h], Start/discuss Approach, Milestones and deliverables [1.5h]
|-
|-
| Joris van Aalst
| Joris van Aalst
| 8.5
| 9
| Meetings [3h], Starting lecture [1h], Research
| Meetings [3h], Starting lecture [1h], Research [2h], 5 relevant references [2h], Start/discuss User part [1h]
|-
|-
| Roel van Gool
| Roel van Gool
| 8
| 8
| Meetings [3h], Starting lecture [1h], Research
| Meetings [3h], Starting lecture [1h], Research [1.5h], 5 relevant references [2h], Check references [0.5h]
|-
|-
| Roxane Wijnen
| Roxane Wijnen
| 7
| 8
| Meetings [3h], Starting lecture [1h], Research
| Meetings [3h], Starting lecture [1h], Research [1h], 5 relevant references [2h], Start/discuss user requirements [1h]
|}
|}


=== Week 2 ===
=== Week 2 ===
Line 552: Line 819:
|-
|-
| Laura Smulders
| Laura Smulders
| 10
| 7
| Meetings [4h],
| Meetings [3h], Summarize 5 relevant articles [4h]
|-
|-
| Sam Blauwhof
| Sam Blauwhof
| 10
| 7.5
| Meetings [4h],  
| Meetings [3h], Summarize 5 relevant articles [4.5h]
|-
|-
| Joris van Aalst
| Joris van Aalst
| 7.5
| 8
| Meetings [4h],  
| Meetings [3h], Summarize 5 relevant articles [5h]
|-
|-
| Roel van Gool
| Roel van Gool
| 9
| 8
| Meetings [4h],  
| Meetings [3h], Summarize 5 relevant articles [5h]
|-
|-
| Roxane Wijnen
| Roxane Wijnen
|  
| 7.5
| Meetings [4h],
| Meetings [3h], Summarize 5 relevant articles [4.5h]
|}
|}


=== Week 3 ===
=== Week 3 ===
Line 580: Line 846:
|-
|-
| Laura Smulders
| Laura Smulders
|  
| 7
| Meetings [3h], Problem statement [2.5h], Update Wiki [1h]
| Meetings [3h], Problem statement [3h], Update Wiki [1h]
|-
|-
| Sam Blauwhof
| Sam Blauwhof
|  
| 7.5
| Meetings [3h], Safety - traffic behaviour [3.5h]
| Meetings [3h], Safety - traffic behaviour [4.5h]
|-
|-
| Joris van Aalst
| Joris van Aalst
|  
| 7.5
| Meetings [3h], Perspective of private end-user [3h]
| Meetings [3h], Perspective of private end-user [4.5h]
|-
|-
| Roel van Gool
| Roel van Gool
|  
| 8
| Meetings [3h], Ethical theories [4h]
| Meetings [3h], Ethical theories [5h]
|-
|-
| Roxane Wijnen
| Roxane Wijnen
|  
| 7
| Meetings [3h], Responsibility [2.5h]
| Meetings [3h], Responsibility [4h]
|}
|}


Line 607: Line 873:
|-
|-
| Laura Smulders
| Laura Smulders
| 11
| 12.5
| General meetings [2h], Meeting with Sam & Roel [2.5h], Update Wiki [1h], Hypothesis [2h], Literature study [3.5h]
| General meetings [2h], Meeting with Sam & Roel [2.5h], Update Wiki [1h], Hypothesis [2h], Planning [1.5h], Literature study [3.5h]
|-
|-
| Sam Blauwhof
| Sam Blauwhof
Line 634: Line 900:
|-
|-
| Laura Smulders
| Laura Smulders
| 10
| 11
| Meetings [4h],  
| General meetings [2h], Meeting with Sam & Roel [2h], Survey with Roxane & Roel [2.5h], Define relevant factors & Literature study [2h], Update Wiki & Planning [0.5h], Finish survey feedback Raymond Cuijpers [2h]
|-
|-
| Sam Blauwhof
| Sam Blauwhof
| 10
| 8
| Meetings [4h],
| General meetings [2h], Meeting with Laura & Roel [2h], Literature study [4h]
|-
|-
| Joris van Aalst
| Joris van Aalst
| 7.5
| 7
| Meetings [4h],  
| General meetings [2h], Meeting with Roxane [1.5h], Literature study [3.5h]
|-
|-
| Roel van Gool
| Roel van Gool
| 9
| 12
| Meetings [4h],  
| General meetings [2h], Meeting with Laura & Sam [2h], Survey with Roxane & Laura [2.5h], Contact with Raymond Cuijpers [0.5h], Literature study [3h], Finish survey feedback Raymond Cuijpers [2h]
|-
|-
| Roxane Wijnen
| Roxane Wijnen
|  
| 9
| Meetings [4h],
| General meetings [2h], Meeting with Joris [1.5h], Survey with Laura & Roel [2.5h], Literature study [3h]
|}
|}


=== Week 6 ===
=== Week 6 ===
Line 662: Line 927:
|-
|-
| Laura Smulders
| Laura Smulders
| 10
| 9
| Meetings [4h],  
| General meetings [2h], Review Responsibility [2h], Meeting with Roxane [1h], Methods survey [3h], Update Wiki [0.5h], Update planning [0.5h]
|-
|-
| Sam Blauwhof
| Sam Blauwhof
| 10
| 9.5
| Meetings [4h],  
| General meetings [2h], Review Ethical theories [2.5h], Meeting with Roel [1.5h], Meeting with Joris [1h], Introduction survey [2.5h]
|-
|-
| Joris van Aalst
| Joris van Aalst
| 7.5
| 11
| Meetings [4h],  
| General meetings [2h], Review Safety [2h], Meeting with Sam [1h], Research statistics [1.5h], Results survey [4.5h]
|-
|-
| Roel van Gool
| Roel van Gool
| 9
| 12.5
| Meetings [4h],  
| General meetings [2h], Privacy [5h], Meeting with Sam [1h], Results survey [4.5h]
|-
|-
| Roxane Wijnen
| Roxane Wijnen
|  
| 11.5
| Meetings [4h],
| General meetings [2h], Review Perspective of private end-user [6h], Meeting with Laura [1h], Meeting with Joris [1h], Research statistics [1.5h]
|}
|}


=== Week 7 ===
=== Week 7 ===
Line 690: Line 954:
|-
|-
| Laura Smulders
| Laura Smulders
| 10
| 7.5
| Meetings [4h],  
| General meetings [3h], Slides presentation [3h], Meeting with Lambèr Royakkers [1h], Update planning [0.5h]
|-
|-
| Sam Blauwhof
| Sam Blauwhof
| 10
| 8
| Meetings [4h],  
| General meetings [3h], Presentation preparation [3h], Meeting with Lambèr Royakkers [1h], Introduction survey [1h]
|-
|-
| Joris van Aalst
| Joris van Aalst
| 7.5
| 8
| Meetings [4h],  
| General meetings [3h], Discussion [4h], Meeting with Lambèr Royakkers [1h]
|-
|-
| Roel van Gool
| Roel van Gool
| 9
| 8
| Meetings [4h],  
| General meetings [3h], Results [5h]
|-
|-
| Roxane Wijnen
| Roxane Wijnen
|  
| 8
| Meetings [4h],
| General meetings [3h], Presentation preparation [3h], Review End-user perspective [2h]
|}
|}


=== Week 8 ===
=== Week 8 ===
Line 718: Line 981:
|-
|-
| Laura Smulders
| Laura Smulders
| 10
| 5.5
| Meetings [4h],  
| General meetings [2h], Discussion [3h], Updating planning [0.5h]
|-
|-
| Sam Blauwhof
| Sam Blauwhof
| 10
| 5
| Meetings [4h],  
| General meetings [2h], Discussion [3h]
|-
|-
| Joris van Aalst
| Joris van Aalst
| 7.5
| 5
| Meetings [4h],  
| General meetings [2h], Discussion [3h]
|-
|-
| Roel van Gool
| Roel van Gool
| 9
| 6
| Meetings [4h],  
| General meetings [2h], Discussion [3h], Ethics [1h]
|-
|-
| Roxane Wijnen
| Roxane Wijnen
|  
| 6
| Meetings [4h],
| General meetings [2h], Last check of spelling and references [3h], Discussion future work [1h]
|}
|}

Latest revision as of 13:47, 8 April 2021


A problem closely associated with the morality of self-driving cars is the trolley problem. For example, in the case of an unavoidable accident where the car has to choose between crashing into a child or into a wall, harming its four passengers, which should the car choose? Once this choice is made, there is also the question of who is morally responsible for the harm caused by self-driving cars. Suppose, for example, that there is an accident between an autonomous car and a conventional car. This will not only be followed by legal proceedings; it will also lead to a debate about who is morally responsible for what happened (Nyholm & Smids, 2016).

A lot of uncertainty is involved in the decision-making process of self-driving cars. First, the self-driving car cannot acquire certain knowledge about the trajectories of other road vehicles, their speed at the time of collision, or their actual weight. Second, focusing on the self-driving car itself, in order to calculate the optimal trajectory the car needs perfect knowledge of the state of the road, since any slipperiness limits its maximal deceleration. Finally, if we turn to the case of an elderly pedestrian in the trolley problem, we can again easily identify a number of sources of uncertainty. Using facial recognition software, the self-driving car can perhaps estimate the pedestrian's age with some degree of precision and confidence, but it can merely guess their actual state of health (Nyholm & Smids, 2016).

The decision-making about self-driving cars is realistically represented as being made by multiple stakeholders: ordinary citizens, lawyers, ethicists, engineers, risk assessment experts, car manufacturers, governments, etc. These stakeholders need to negotiate a mutually agreed-upon solution (Nyholm & Smids, 2016). Whatever that solution turns out to be, all parties will have to account for the general acceptance of their implemented solution if they wish self-driving cars to be successfully deployed. This report will focus on the relevant factors that contribute to the acceptance of self-driving cars, with the main focus on the private end-user. Among other things, it takes into account some ethical theories which could serve as a guideline for the decisions the car has to make: utilitarianism, Kantianism, virtue ethics, deontology, ethical pluralism, ethical absolutism and ethical relativism. Aside from ethical theories, other influences on acceptance will also be treated in this report.

= State-of-the-art/Hypothesis =

The developments and advances in the technology of autonomous vehicles have recently brought self-driving vehicles to the forefront of public interest and discussion. In response to the rapid technological progress of self-driving cars, governments have already begun to develop strategies to address the challenges that may result from the introduction of self-driving cars (Schoettle & Sivak, 2014). The Dutch national government aims to take the lead in these developments and prepare the Netherlands for their implementation. The Ministry of Infrastructure and the Environment has opened the public roads to large-scale tests with self-driving passenger cars and trucks. The Dutch cabinet has adopted a bill which in the near future will make it possible to conduct experiments with self-driving cars without a driver being physically present in the vehicle (mobility, public transport and road safety, etc.).

The end-consumers (the actual drivers) will eventually decide whether self-driving cars successfully materialize on the mass market. However, the perspective of the end-user is not often taken into account, and this general lack of research is the reason this research is conducted. Therefore, our research question is: "What are the relevant factors that contribute to the acceptance of self-driving cars for the private end-user?" User resistance to change has been found to be an important cause of many implementation problems (Jiang, Muhanna, & Klein, 2000), so it is likely that the introduction of the self-driving car will not be trivial, as people may be resistant to accepting the new technology. A significant percentage of drivers may not be comfortable with fully autonomous driving, as people might experience driving as adventurous, thrilling and pleasurable (Steg, 2005). There is also the question whether self-driving cars, which make people dependent on the technology, can really be seen as providing the ultimate level of autonomy. The fact that self-driving cars can be tracked continuously could furthermore lead to privacy issues. Another potential barrier towards self-driving cars is the risk of a 'misbehaving computer system': criminals or terrorists might be able to hack into autonomous vehicles and use them for illegal purposes. Furthermore, the unavoidable rate of failures and crashes could lead to mistrust, especially as people tend to underestimate the safety of technology while putting excessive trust in human capabilities such as their own driving skills (König & Neumayr, 2017).

In several recent surveys on the topic of self-driving vehicles, the public has expressed some concern regarding owning or using vehicles with this technology. In a survey of public opinion on autonomous and self-driving vehicles in the U.S., the U.K., and Australia, the majority of respondents had previously heard of self-driving vehicles, had a positive initial opinion of the technology, and had high expectations about its benefits (Schoettle & Sivak, 2014). However, the majority of respondents expressed high levels of concern about riding in self-driving cars, about security issues related to them, and about self-driving cars not performing as well as actual drivers. Respondents also expressed high levels of concern about vehicles without driver controls (Schoettle & Sivak, 2014). In the survey "Users' resistance towards radical innovations: The case of the self-driving car", people who used a car more often tended to be less open to the benefits of self-driving cars. The most pronounced desire of respondents was the possibility to manually take over control of the car whenever wanted. This indicates that drivers want to decide when to switch to self-driving mode and want the option to resume control in situations where they do not trust the technology. The most severe concern involving the car and the technology itself was the fear of possible attacks by hackers (König & Neumayr, 2017).

This report will focus on the relevant factors that contribute to the acceptance of self-driving cars for the private end-user. A survey was conducted to get more insight into the private end-user of self-driving cars. Based on the literature research and this survey, the relevant factors treated are the ethical theories, moral and legal responsibility, safety, privacy, and the perspective of the private end-user.

= Relevant factors =

== Ethical theories ==

A key feature of self-driving cars is that the decision-making process is taken away from the person in the driver's seat and instead bestowed upon the car itself. From this drastic change several ethical dilemmas emerge, one of which is essentially an adapted version of the trolley problem. When an unavoidable collision occurs, it is important to define the desired behaviour of the self-driving car. It might be the case that in such a scenario, the car has to choose whether to prioritize the life and health of its passengers or of the people outside the vehicle. In real life such cases are relatively rare (Nyholm, 2018; Lin, 2016), but the ethical theory underlying that decision will possibly have an impact on the acceptance of the technology. In a scenario where some moral reasoning is required to produce the best outcome for all parties involved, the self-driving car effectively decides who might live and who might die. Given that cars do not seem capable of moral reasoning, programmers must choose for them the right ethical setting on which to base such decisions. However, ethical decisions are not often clear-cut. Imagine driving at high speed in a self-driving car, and the car in front comes to a sudden halt. The self-driving car can either brake hard as well, possibly harming the passengers, or it can swerve into a motorcyclist, possibly harming them. This scenario can be regarded as an adapted version of the trolley problem. One could argue that since the motorcyclist is not at fault, the self-driving car should prioritize their safety; after all, the passengers made the decision to enter the car, putting at least some responsibility on them. On the other hand, people who might buy the self-driving car will expect not to be put in avoidable danger.
No matter the choice of the car, and the underlying ethical theory that it is (possibly) based on, the behaviour and decision-making of the car has a better chance of being socially accepted if it can be morally justified. Therefore, this section first highlights some possible ethical theories, and then discusses some relevant aspects that surround the implementation of any of them.


'''Ethical theories under consideration'''

Although there are not many actions a car could take in the above-described scenario, there are many ethical theories that can help inform such a decision. The most prominent ethical theories that might prima facie be useful are utilitarianism, deontology, virtue ethics, contractualism, and egoism; these are the theories treated in this section. Utilitarianism considers the consequences of actions, as opposed to the actions themselves. This means that the correct moral decision or action in any scenario is the one that produces the most good. Although "good" is a subjective term, in most versions of utilitarianism it usually refers to the net increase in happiness or welfare for all associated parties (Driver, 2014). Circumstances and the intrinsic nature of an action are not taken into account, as opposed to deontology. Deontology does not judge the morality of an action based on its consequences, but on the action itself. It posits that moral actions are those taken on the grounds of a set of pre-determined rules, which hold universally and absolutely. This means that for a deontologist, some actions are wrong or right no matter their outcome.

The third major normative ethical theory is virtue ethics. Virtue ethics emphasizes the virtues, or moral character, as opposed to rules or consequences. Virtues are seen as positive or "good" character traits; examples of such traits are courage and modesty. A moral person should perform actions which realize these traits, and therefore moral actions are those which cause a person's virtues to be realized.

Other than the three major classical normative theories, there are two more prima facie relevant theories, the first of which is egoism. Normative egoism posits that the only morally right actions are those that maximize the individual's self-interest. An egoist only considers the benefits and detriments other people experience insofar as those experiences influence the egoist's own self-interest. Although it may not seem like it, egoism is very similar to utilitarianism, except that utilitarianism focuses on the maximum happiness of all people involved, while egoism focuses only on the maximum happiness of the individual.

The last ethical theory that can be applied to the adapted trolley problem is (social) contractualism. Contractualism does not make any claims about the inherent morality of actions, but rather posits that a moral action is one that is mutually agreed upon by all parties affected by it. What this agreement should look like exactly differs per version of contractualism: some versions require unanimous consent, while others require a simple or a supermajority. A good action is therefore one that can be justified to the other relevant parties, and a wrong action is one that cannot.


'''Ethical theories applied to the adapted trolley problem'''

First, let us apply utilitarianism to the adapted trolley problem. On a micro level, a self-driving car with a utilitarian ethical setting would first want to minimize the number of deaths, and then minimize the total number of severe injuries sustained by all people affected by a collision. This seems simple enough, but there are, among others, two issues with this implementation of a utilitarian setting. If the technology is so advanced that it can target people based on whether they are, for instance, wearing a helmet, then it would be safer for the car to collide with a biker wearing a helmet rather than with one who is not, assuming all else is equal. Now the biker with a helmet is targeted, even though they are the one putting in effort to be safe. This is unfair, and if it is implemented, some people may stop taking safety measures seriously in order not to be targeted by a utilitarian self-driving car. This would ultimately reduce the overall safety on the road, which is exactly the opposite of what a utilitarian wants. Note that it is unclear whether such recognition technology will even be deployed in self-driving cars, and therefore the question arises whether this is a relevant problem at all. This report does not make claims about the likelihood that such technology will be implemented, but instead assumes it is possible in order to make (ethical) claims on the subject. In reality the technology might not be so precise, but it is better to be prepared for the case that it is.
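The micro-level utilitarian rule just described (first minimize deaths, then minimize severe injuries) can be sketched as a lexicographic comparison. The option names and casualty estimates below are hypothetical, purely to illustrate the rule:

```python
def utilitarian_choice(options):
    """Pick the option with the fewest expected deaths; break ties by
    the fewest expected severe injuries (lexicographic tuple ordering)."""
    return min(options, key=lambda o: (o["deaths"], o["injuries"]))

# Hypothetical unavoidable-collision options for the scenario in the text:
options = [
    {"name": "brake hard",        "deaths": 0, "injuries": 2},
    {"name": "swerve into rider", "deaths": 1, "injuries": 0},
]
print(utilitarian_choice(options)["name"])  # 0 deaths beats 1, despite more injuries
```

Because Python compares tuples element by element, the injury count only matters when the death counts are equal, which is exactly the priority order the utilitarian setting prescribes.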

The second problem is that although people want other road users in self-driving cars to adopt a utilitarian setting, they themselves would rather buy cars that give preferential treatment to passengers (Nyholm, 2018; Bonnefon et al., 2016). "In other words, even though participants still agreed that utilitarian AVs were the most moral, they preferred the self-protective model for themselves" (Bonnefon et al., 2016). Therefore, if self-driving cars are only sold with a utilitarian ethical setting, fewer people might be inclined to buy them, again reducing the overall safety on the road.

There are multiple possible counters to these two issues that a "true" utilitarian might propose. To counter the first problem, the utilitarian would simply not program the car to make a distinction between people who do and do not wear a helmet. A distinction would also not be made in similar scenarios, since this solution is not only relevant to cases where helmets are involved. Of course, there are also scenarios where the safer of two options should be chosen by the self-driving car, assuming the same number of people are at risk in both options. The difference between a valid safe choice and an invalid safe choice is that some safety measures are explicitly taken (such as the decision to put on a helmet), while others are more a byproduct of another decision (such as riding a bus versus driving a car). Riding a bus might be safer than driving a car, but most people who are passengers in a bus did not choose to be for safety reasons: they might not have a car, or they ride the bus out of concern about climate change. Since people in this scenario did not choose to ride a bus for safety reasons, it is likely they will also not stop riding the bus because of a slightly increased chance of being hit by a self-driving car. Of course, this is only a thought experiment, but if it also holds true in practice, then a utilitarian would find it acceptable for the self-driving car to choose the safer option in the bus-versus-car scenario, whereas in the helmet-versus-no-helmet scenario the utilitarian would not.

To counter the second problem, the "true" utilitarian would ultimately want to reduce death and/or harm by reducing the number of traffic accidents. If in practice that means that a significant number of people will not buy a self-driving car with a utilitarian setting, then the utilitarian would rather have self-driving cars sold with an egoistic setting that gives passengers preferential treatment. This way, even though any individual accident involving a self-driving car will be deadlier than with a utilitarian setting, accidents will decrease overall, since more self-driving cars will be on the road.

There are more problems with a utilitarian approach to self-driving cars, unrelated to the two micro-versus-macro utilitarian problems just treated. One of these has to do with discrimination. In an unavoidable-collision scenario where the self-driving car has to hit either an adult man or a child, the adult has a greater chance of survival. Is the car therefore justified in choosing the man? A utilitarian would say the car is indeed justified, unless this decision turns out to drive consumers away from purchasing and using self-driving cars. Prima facie this does not seem to be the case, but as far as we could find, there is no major literature on this topic that gives any definitive or exploratory answer. Mercedes did announce that their self-driving cars would prioritize passengers over bystanders, but this was met with heavy backlash, causing Mercedes to retract the statement (Nyholm, 2018). Some countries, such as Germany, have already made this type of discrimination based on age, gender or type of road user illegal (Adee, 2016). Once again, as in the helmet example, it might be the case that self-driving cars will not be equipped with such precise recognition software, in which case the above-described ethical problem is not relevant. Still, it is good to be prepared for the case that it is.

A deontological ethical setting would not allow a choice to be made that explicitly harms or kills a person, no matter the potential number of lives saved. Therefore, when faced with an unavoidable (possibly) deadly collision, the car would simply not make a decision at all, and events would play out "naturally". In essence, this makes the actual "chosen" collision somewhat random. As in the original trolley problem, the moral entity, in this case the car (or more accurately, the programmer who programs the ethics into the car), would simply not intervene at all. Deontologists are of the opinion that there is a difference between doing and allowing harm, and by not letting the car intervene in an unavoidable accident, both the passengers and the programmers are absolved of any moral responsibility. Some people might be happy with such a setting, since many people could not fathom being (morally) responsible for the deaths of others. By entering a self-driving car with a utilitarian ethical setting, the passengers cannot be absolved of some moral responsibility in the case of an accident, since they made a conscious decision to buy a car that has been programmed to make explicit decisions. The same cannot be said of passengers who enter a deontological self-driving car. Prima facie it seems likely that people do not want to be morally responsible in the case of an accident, and implementing a deontological ethical setting might therefore help acceptance of the technology.

A virtue-ethics response to the adapted trolley problem is very hard to come up with. An ethical setting based on virtue ethics would want the car to make a decision that improves the virtues of the moral entity, so the decision the car makes depends on which virtue we would want to improve. Take for instance bravery: one could posit that it is brave to take on danger to yourself if it means that other people will be safer for it. If we assume the moral entities to be the passengers, then the self-driving car would always choose to put the passengers in danger, since this would improve their bravery. There are two problems with this approach. Firstly, it is hard to optimize any decision the car makes, since it is impossible to find a decision that always improves on all virtues; moreover, what are those virtues in the first place? Is it, for instance, virtuous to sacrifice yourself if you leave behind a family? Secondly, since the car is not actually a moral agent, whose virtues should the car's decision improve: the programmers' or the passengers'? This is unclear. If the programmers' virtues should be improved, then it seems prima facie extremely unlikely that people would be willing to buy cars that might sacrifice them to improve the virtues of a programmer they never met. If the passengers' virtues should be improved, then people might be slightly more sympathetic, but even then, most people presumably do not want to sacrifice their lives to improve upon an abstract notion of virtue and morality.

If we take the perspective of self-driving car buyers and users, the ethical-egoist response is to prioritize the lives of the passengers above all else. As stated in the utilitarian part of this section, people who buy and use the car seem to prefer a self-driving car that always puts their own lives above others'. This setting could also possibly be regarded as the setting of a "true" utilitarian. There is another possible benefit to this ethical setting, namely that egoistic cars are more predictable. If self-driving cars become very prevalent, any self-driving car must always account for the decisions other self-driving cars are making. Therefore, if all self-driving cars prioritize themselves, their road behaviour becomes more predictable to other self-driving cars. However, this argument is theoretical in nature, and there are some game theorists who do not agree (Tay, 2000). The moral argument against ethical egoism is that it seems, and indeed is, incredibly selfish: an ethical egoist might sacrifice hundreds of lives to save themselves. However, a "true" ethical egoist is not always extremely selfish, since extremely selfish behaviour is not tolerated by others. A "true" ethical egoist would therefore also consider the feelings of other people, since their thoughts and decisions may influence the reward the egoist may get out of any given situation. In the case of unavoidable (deadly) accidents, however, an egoist who values their own life above all else will not care about the feelings of others, since there can be nothing more important, now or in the future, than their own life.

Up until now we have considered only the perspective of buyers and users of self-driving cars, but the actual moral agent is the programmer (or the collection of people in the company that employs the programmer). Their egoist response would depend on how often they plan to use the self-driving car for which they design the software. If they do not plan to use it at all, then the ethical-egoist response of the programmer would be to implement a utilitarian ethical setting, since the programmer will on average be safer. If they do plan to use the self-driving car a lot, then the ethical-egoist response is to implement an ethical setting that prioritizes the passenger. However, since this report is mostly concerned with the perspective of the end-user, this perspective is ultimately not very relevant here.

A contractualist ethical setting is one that is agreed upon by all relevant parties. Unanimous consent seems impossible to get, so in practice this would probably be a simple democratic vote, in which an arrangement of ethical settings, or a combination of ethical settings, is proposed. Each possibly affected person can vote on these settings, and the democratic winner(s) will be implemented. The tough question is: who is affected by the decisions of a self-driving car? Self-driving cars can potentially drive across whole continents; from Portugal to China, or from Canada to Argentina. If the decisions of these self-driving cars can influence events in multiple countries, should people in all these countries be part of the decision-making process? If so, should there be a global vote on the specific ethical settings that can be implemented? Or, if the vote is held nationally, does that mean that the ethical setting of a car must be changed when the self-driving car enters a country where citizens voted for a different ethical setting? In practice this seems very difficult to implement. If any of these contractualist ethical settings are practically possible, then this setting almost completely solves the responsibility aspect of self-driving cars: if all relevant parties can vote, then society as a whole can be held ethically and legally responsible. Since responsibility might be one of the factors that contribute to the acceptance of self-driving cars, having a realistic solution to the issue of responsibility will likely positively impact public perception of self-driving cars.

A further question is whether the user should be allowed to decide the ethical setting, and whether all cars should have the same setting. It is clear that there is no ethical setting that is perfect for every scenario. For various reasons, some authors advocate for people being able to choose their own ethical settings. One can imagine an "ethical knob" with different programmable ethical settings, for example on a scale from altruistic to egoistic, with an impartial setting in the middle. There could even be a deontological setting that does not intervene in unavoidable accidents. There are several reasons to implement such an ethical knob. People might want to be able to buy cars that mirror their own moral mindset. Millar (2014) observes that self-driving cars can be regarded as moral proxies, which implement moral choices. Implementing a moral knob also makes it easier to assign responsibility to someone in the case of an unavoidable accident (Sandberg & Bradshaw‐Martin, 2013; Lin, 2014), since the passengers of the car have explicitly chosen the decision of the car. This might impact acceptance of the technology both positively and negatively. Prima facie, it seems that people who want to buy self-driving cars might want to be able to choose their own ethical setting, but it is unknown whether people would still want this if it makes them legally and/or morally responsible. Also, other road users might not accept self-driving car passengers choosing their own ethical setting, since passengers are likely to choose an egoistic setting, which negatively impacts the road experience of everyone else. This is especially true if the car is equipped with an "extremely egoistic" setting in which the life of the passenger is valued considerably more than other people's lives. It seems likely that people will not accept a self-driving car making such decisions, so perhaps manufacturers will limit how far the ethical knob can be turned towards egoism.

Likely, these kinds of ethical settings would be very unpopular, perhaps even with people who might benefit from an extremely egoistic setting. Indeed, surveys have already shown that people generally want other self-driving cars to reduce overall harm (Bonnefon et al., 2016). An (extremely) egoistic ethical setting is the direct opposite of such a utilitarian setting.

The same can be said for an ethical knob that can not only be turned by the user to fit their moral convictions, but can even be modified to fit other kinds of preferences. An ethical knob that discriminates on gender or race might be technologically possible to build, but users should not be allowed to make their self-driving cars racist or sexist. Discrimination based on race or sex is illegal in many countries, so these ethical settings, if even possible to implement, will likely be outlawed anyway, as Germany has already done. A contractualist might propose a democratic vote to gauge which kinds of settings are regarded as unacceptable. The freedom of people to configure their own self-driving car would then be limited by the democratic choice of all relevant road users. Such an arrangement might prove to be an acceptable middle ground between no ethical knob and a completely customizable one. However, whether this would actually lead to greater acceptance of the technology than the other two options has not been settled or much explored in the academic literature.
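As a concrete, purely hypothetical sketch of what such a knob could mean in software, the trade-off can be modeled as a single weighting parameter between passenger harm and third-party harm. All names, harm values and the linear weighting below are invented for illustration; they do not come from any of the works cited here.

```python
# Hypothetical "ethical knob" sketch. The knob runs from -1.0 (fully
# altruistic: only third-party harm counts) through 0.0 (impartial)
# to +1.0 (fully egoistic: only passenger harm counts).

def harm_score(passenger_harm, third_party_harm, knob):
    """Weighted expected harm of one candidate manoeuvre (lower is better)."""
    passenger_weight = (1.0 + knob) / 2.0    # 0.0 .. 1.0
    third_party_weight = (1.0 - knob) / 2.0  # 1.0 .. 0.0
    return passenger_weight * passenger_harm + third_party_weight * third_party_harm

def choose_manoeuvre(options, knob):
    """Pick the candidate manoeuvre with the lowest weighted harm score."""
    return min(options, key=lambda o: harm_score(
        o["passenger_harm"], o["third_party_harm"], knob))

# Invented example: swerving protects the passenger but endangers bystanders.
options = [
    {"name": "swerve", "passenger_harm": 0.1, "third_party_harm": 0.8},
    {"name": "brake",  "passenger_harm": 0.6, "third_party_harm": 0.2},
]

print(choose_manoeuvre(options, 0.0)["name"])  # impartial setting -> "brake"
print(choose_manoeuvre(options, 1.0)["name"])  # egoistic setting -> "swerve"
```

In this framing, a manufacturer-imposed limit on egoism, as suggested above, would simply amount to clamping the range of knob values the user is allowed to select.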

= Responsibility =

Although automated vehicles seemed a distant future a mere twenty years ago, they are becoming a reality right now. For some years, companies such as Google have run trials with automated vehicles in actual traffic situations and have driven millions of kilometers autonomously. The technology is not yet mature, however: between December 2016 and November 2017, Waymo's self-driving cars drove about 350,000 miles and human drivers retook the wheel 63 times, an average of about 5,600 miles between disengagements. Uber has not been testing its self-driving cars in California long enough to be required to release its disengagement numbers (Wakabayashi, 2018). Though this research has been ground-breaking, there have also been some incidents in the past years. In 2016 a Tesla driver was killed while using the car’s autopilot because the vehicle failed to recognize a white truck (Yadron & Tynan, 2016). In 2018 a self-driving Volvo in Arizona collided with a pedestrian, who did not survive the accident; it is believed to be the first pedestrian death associated with self-driving technology. When an Uber self-driving car and a conventional vehicle collided in Tempe in March 2017, city police said that extra safety regulations were not necessary, as the conventional car was at fault, not the self-driving vehicle (Wakabayashi, 2018).

One very important factor in the development and sale of automated vehicles is the question of who is responsible when things go wrong. In this section we will look in detail at the factors involved and discuss some solutions. As brought up by Marchant and Lindor (2012), there are three questions that need to be analysed. Firstly, who will be liable in the case of an accident? Secondly, in determining who should be held responsible, how much weight should be given to the fact that autonomous vehicles are supposed to be safer than conventional vehicles? Lastly, will a higher percentage of crashes be attributed to a manufacturing ‘defect’, compared to crashes with conventional vehicles, where driver error is usually considered the cause (Marchant & Lindor, 2012)?

== Current legislation ==

If we look at how responsibility works for conventional vehicles, we find that responsibility is usually attributed to the driver due to failure to obey traffic regulations (Pöllänen, Read, Lane, Thompson, & Salmon, 2020). This can be as small and common as driving too fast or losing attention for a fraction of a moment, something nearly everyone is guilty of at some point. Usually this does not matter, but sometimes it leads to catastrophic results, and in that moment of misfortune the driver is still held responsible. As Nagel (1982) theorized, the difference between driving a little too fast and killing a child that crosses the street unexpectedly, and there being no child at all, is only bad luck. The consequences, however, are vast for the child, but also for the driver (Nagel, 1982). This reasoning could also be applied to automated vehicles: if an accident happens it is just bad luck for the driver, and he will without doubt be liable. However, given that this depends on luck, and that most autonomous vehicles allow restricted to no control, this is not considered a plausible option (Hevelke & Nida-Rümelin, 2015).

== Blame attribution ==

A couple of studies have shown that the level of control is crucial in blame attribution. McManus and Rutchick (2018) showed that people attribute less blame to a driver in a fully automated vehicle in comparison to a situation where the driver selected a different algorithm (e.g. to behave selfishly) or drove manually (McManus & Rutchick, 2018). Another study (Li, Zhao, Cho, Ju, & Malle, 2016) investigated blame attribution between the manufacturer, government agencies, the driver and pedestrians. They found that blame is reduced for drivers when the vehicle is fully autonomous, whereas the blame for the manufacturer or government agencies increased.

== The manufacturer ==

It would be obvious to say that the manufacturer of the car is responsible: they designed the car, so if it makes a mistake, they are to blame. However, there are different types of defects in the manufacturing process. Firstly, there is a defect in manufacturing itself, where the product did not end up as intended, even though the rules were followed with care. This error is very rare, since manufacturing these days is done with a very low error rate (Marchant & Lindor, 2012). A second type of defect lies in the instructions: when the manufacturer fails to adequately instruct and warn, this can result in a consumer defect. A third type of defect, and the most significant for autonomous vehicles, is that of design. This holds that the risks of harm could have been prevented or reduced with an alternative design (Marchant & Lindor, 2012).

The manufacturer could have known, or did know, beforehand about any flaw in the system that might cause the car to crash. If they then sold the car anyway, there is no question that they are responsible. However, holding the manufacturer responsible in every case would immensely discourage anyone from producing these autonomous cars. Especially with technology as complex as autonomous driving systems, it would be nearly impossible to make it flawless (Marchant & Lindor, 2012). In order to encourage manufacturers to build autonomous vehicles and still hold them responsible, a balance needs to be found between the two. This is necessary, because removing all liability would also have undesirable effects (Hevelke & Nida-Rümelin, 2015). In short, a way needs to be found to hold the manufacturer liable enough that they will keep improving their technology.

== Semi-autonomous vehicles ==

As stated above, there have been studies on blame attribution in fully autonomous vehicles, and in those with certain pre-selected algorithms. A semi-autonomous vehicle (with a duty to intervene) has not been discussed yet. A good analogy for a semi-autonomous vehicle is an auto-piloted airplane: the plane flies itself, though it is the responsibility of the pilot to intervene when something goes wrong (Marchant & Lindor, 2012). So it could be suggested that, in case of an accident, the driver of the vehicle should be held responsible. If the car is designed in such a way that the driver has the ability to take over and intervene, this could genuinely be used as an argument against the driver. There is, however, a question of what the utility of the automated vehicle is if it is designed like this. After all, when the driver has a duty to intervene, the vehicle can no longer be summoned when needed, and it can no longer be used as a safe ride home when drunk or tired (Howard, 2013). Still, as long as the vehicles reduce accidents overall, either arrangement would be a better option than using conventional vehicles (Hevelke & Nida-Rümelin, 2015). The accident rate might even drop further when the driver does have a duty to intervene, because the driver can step in when, for example, they see something the car does not. It would also mean a more gradual transition when introducing automated vehicles, instead of them suddenly being fully automatic.

On the other hand, asking the driver to intervene in a fully automated vehicle is questionable. It assumes that the driver can intervene at all times, which is not always the case due to human limitations in reaction time and danger anticipation (Hevelke & Nida-Rümelin, 2015). It would be difficult to recognize whether the automated vehicle will fail to respond correctly, and thus unclear when the driver needs to intervene. In such cases it would be unrealistic to expect the driver to predict a dangerous situation. Another problem may also arise: the driver might intervene when they should not have, resulting in an accident (Douma & Palodichuk, 2012). Moreover, as argued by Hevelke & Nida-Rümelin (2015), it seems unreasonable to ask a driver to pay attention all the time so as to be able to intervene, while an actual accident is quite rare. All in all, it would be unreasonable to put responsibility on a driver that did not – or could not – intervene.

== Shared liability ==

As previously discussed, the responsibility for an accident can be placed on the individual driving the autonomous vehicle, which for a number of reasons is not ideal. An alternative would be to create a shared liability. People who drive cars every day (especially when not necessary) take the risk of possibly causing an accident; they still make the choice to drive the car (Husak, 2004). This thinking can be extrapolated to the use of automated vehicles. If people choose to drive an automated vehicle, they in turn participate in the risk of an accident happening due to the autonomous vehicle. The responsibility for an accident is therefore shared with everyone else in the country who also uses automated vehicles. In that sense the driver did not do anything wrong and did not intervene too late; they simply shoulder the burden with everyone else. A system that could implement this line of thinking is the introduction of a tax or mandatory insurance (Hevelke & Nida-Rümelin, 2015).

So, there are a couple of options. The manufacturer can be fully responsible; however, this could bring autonomous vehicle manufacturing to a halt. On the other hand, it is desirable that the manufacturer carries some liability, so they keep investing in improving the vehicle. At the same time, giving the driver full responsibility only seems workable in the beginning phase of autonomous vehicles, when they are still in development and drivers really do have a duty to intervene. When the vehicles are more sophisticated and able to drive fully autonomously, the responsibility can be shared among all users through a tax or insurance.

= Safety =

One of the main factors deciding whether self-driving cars will be accepted is their safety. After all, who would leave their life in the hands of another entity, knowing it is not completely safe? Yet almost everyone gets into buses and planes without doubt or fear. Would we be able to do the same with self-driving cars? Cars have become more and more autonomous over the last decades. Furthermore, self-driving cars will operate in unstructured environments, which adds a lot of unexpected situations (Wagner et al., 2015).


== Traffic behavior ==

The car's safety will be determined by the way it is programmed to act in traffic. Will it stop for every pedestrian? If it does, pedestrians will know this and might cross roads wherever they want. Furthermore, will it adopt the driving style of humans? And how does the driving style of automated vehicles influence trust and acceptance?

According to research by Elbanhawi, Simic and Jazar (2015), two factors are relevant for driving comfort: naturalness and apparent safety. The relationship between these two factors can be seen as operating within so-called safety margins (Summala, 2007).

In one study, two different designs were presented to a group of participants. One was programmed to simulate a human driver, whilst the other communicated with its surroundings in such a way that it could drive without stopping or slowing down. The study showed no significant difference in trust between the two automated vehicles. It did show, however, that trust grew the longer the study continued (Oliveira et al., 2019). This suggests that driving style does not necessarily influence trust, but that the overall safety of the driving behavior determines it.

A driving style similar to that of humans may nevertheless benefit acceptance. For example, the car should be able to mimic a human driving the car (Elbanhawi, Simic & Jazar, 2015). This may reduce hesitation towards self-driving cars and lead to more people driving one (Hartwich, Beggiato & Krems, 2018). However, research conducted by Liu, Wang & Vincent (2020) concluded that people want self-driving vehicles to be at least four to five times as safe as human-driven vehicles. So, although people would like them to drive human-like, the risks should not be human-like. This could be explained by the fact that legal problems would be more complicated when an accident occurs, and that safety is a major advantage of self-driving cars. If people do not have that advantage, they may rather enjoy the pleasures of driving themselves.


== Errors ==

Despite what we might think, humans are quite capable of avoiding car crashes. Computers, on the other hand, inevitably crash; think of how often your laptop freezes. A response that comes a millisecond too late can have disastrous consequences, so software for self-driving vehicles must be built fundamentally differently. This is one of the major challenges currently holding back the development of fully automated cars. Automated air vehicles, by contrast, are already in use; however, software on automated aircraft is much less complex, since aircraft have to deal with fewer obstacles and almost no other vehicles (Shladover, 2016).


== Cybersecurity ==

The software driving fully autonomous vehicles will have more than a hundred million lines of code, making it impossible to predict all security problems. Windows 10 is made up of fifty million lines of code and has had plenty of bugs; doubling the amount of code will result in an even higher probability of unknown vulnerabilities (Parkinson et al., 2017). This complexity is partly due to the fact that self-driving cars have to be interconnected to make use of their most beneficial features: self-driving cars are much better able to react to each other and plan movements ahead if they receive data from other cars through a network. Straub et al. (2017) presented a plan to protect against attacks.

To let cars react to each other appropriately and efficiently, CACC (Cooperative Adaptive Cruise Control) can be used (Amoozadehi et al., 2015). This technology lets cars send information to other cars so that they can adapt to the movements and speed changes of those cars. As mentioned above, this technology comes with security risks. Multiple kinds of attacks exist: application layer attacks, network layer attacks, system level attacks and privacy leakage attacks. Application layer attacks can influence applications such as CACC beaconing; this could degrade the efficiency of cars reacting to each other, or messages could be falsified, which could result in rear-end collisions. Network layer attacks, such as DDoS attacks, could make the network unusable for cars, so that CACC does not work at all anymore. System level attacks, on the other hand, do not use CACC or vehicle-to-vehicle communication; these could be carried out when a person installs malicious software. Privacy leakage attacks, finally, are well known and topical: theft of data that should only be available to the user and perhaps the manufacturer (Amoozadehi et al., 2015).
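One standard defence against the message-falsification attacks described above is to authenticate each beacon, so that a receiver can detect tampering. The sketch below illustrates the idea with a shared-key HMAC; the field names and key handling are assumptions made purely for illustration, and real V2V deployments (e.g. under IEEE 1609.2) use certificate-based signatures rather than a single shared key.

```python
# Minimal sketch of authenticated CACC-style beacons (illustrative only).
import hmac
import hashlib
import json

SHARED_KEY = b"demo-key-for-illustration-only"

def make_beacon(vehicle_id, speed_mps, position, key=SHARED_KEY):
    """Serialize a beacon deterministically and attach an HMAC tag."""
    payload = json.dumps(
        {"id": vehicle_id, "speed": speed_mps, "pos": position},
        sort_keys=True,
    ).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_beacon(beacon, key=SHARED_KEY):
    """Recompute the tag; constant-time compare defeats timing attacks."""
    expected = hmac.new(key, beacon["payload"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, beacon["tag"])

beacon = make_beacon("car-42", 27.5, (51.44, 5.49))
assert verify_beacon(beacon)

# A falsified speed value (e.g. to trigger hard braking in the car behind)
# no longer matches the tag and is rejected by the receiver.
forged = {"payload": beacon["payload"].replace(b"27.5", b"0.1"),
          "tag": beacon["tag"]}
assert not verify_beacon(forged)
```

Authentication of this kind addresses falsified application-layer messages, but not network layer attacks such as DDoS, which require separate countermeasures.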


== Versus humans ==

Self-driving cars hold the potential of eliminating all accidents, or at least those caused by inattentive drivers (Wagner et al., 2015). Research done by Google suggests that the Google self-driving cars are safer than conventional human-driven vehicles. There is insufficient information to draw a firm conclusion on this, but the results lead us to believe that highly autonomous vehicles will be safer than humans in certain conditions. This does not mean that there will be no car crashes in the future, since these cars will keep being involved in crashes with human drivers (Teoh et al., 2017).


== The city ==

The city is probably one of the most complicated locations for a self-driving car to operate in. It is filled with vulnerable road users, such as pedestrians and cyclists, who are relatively hard to track. Therefore, freeways are likely to be the first spaces in which automated cars will be able to operate: a much more structured environment with simple rules and fewer unexpected situations. However, this will not solve the issue of traffic jams at popular destinations. Some might say the ambition is to allow cars, bikes and pedestrians to share road space much more safely, with the effect that more people will choose not to drive. An interesting question regarding this is raised by Duranton (2016): "If a driverless car or bus will never hit a jaywalker, what will stop pedestrians and cyclists from simply using the street as they please?" (Duranton, 2016). Image-tracking information could be used to predict the movements of a pedestrian or a cyclist, for example, so that a car does not have to stop for every pedestrian on the sidewalk (Sarcinelli et al., 2019). But this still does not fix the above-mentioned problem.

Millard-Ball (2016) suggests pedestrian supremacy in cities. He agrees that autonomous vehicles will drive cautiously and therefore slowly in cities. That’s why people will walk more often in cities, because it will become the faster alternative. Travelling between cities will be done by autonomous vehicles, but people will exit the vehicle on a peripheral part of town, before walking to the center. This is not necessarily negative, just a change of culture. Google acknowledges this problem and states that when Google cars cannot operate in existing cities, perhaps new cities need to be created. And the truth is, it has happened in the past. The first suburb of America was developed by rail entrepreneurs who realized that developing suburbs was much more profitable than operating railways (Cox, 2016).

We might also need to look at which alternative technologies we need in urban transport. Rather than developing individualist self-driving cars, we could look at the ‘technology of the network’: how can we connect more people without consuming the space we live in (Duranton, 2016)?


= Trust =

For decades, we have trusted the safe operation of automated mechanisms around and even inside us. However, in the last few years the autonomy of these mechanisms has drastically increased. As mentioned before, this brings along quite a few safety risks. Questions of whether to trust a new technology are often answered by testing (Wagner et al., 2015).

A survey has been conducted about trust in fully automated vehicles. Trust was defined as “the attitude that an agent will help achieve an individual’s goal in a situation characterized by uncertainty and vulnerability” (Lee & See, 2004). In this survey, sixty percent of the respondents reported having difficulty trusting automated vehicles. Trust in this context can be seen as the driver’s belief that the computer drives at least as well as a human.

Trust is not yet at the level required to fully implement these technologies. We know that trust can build up over time, and this also holds for trusting self-driving cars. Hesitation is greatest amongst the elderly, who are also the generation that stands to gain many of the benefits. The good news from this research is that half of the older adults reported that they are comfortable with the concept of tools that help the driver. The number of such tools can grow, whilst the driver/passenger gets used to the idea of a completely self-driving car (Abraham et al., 2016).

= Privacy =

Self-driving cars rely on an array of new technologies in order to traverse traffic. Some of these technologies have to take data from the environment and/or the people in the car, which can have a big effect on the privacy of both the users of the car and the people around it. Since fully autonomous cars are not yet on the market, and have not even been built yet, it is unclear how significant the privacy issues associated with self-driving cars might be. At minimum, the use of location-tracking data seems necessary for a self-driving car to function correctly (Boeglin, 2015). This kind of location tracking is already prevalent in mobile phones, and the privacy issues that accompany it are well known (Minch, 2004). In fact, car GPS systems already in use suffer from this problem: the car can save specific locations, has to plan routes based on its current location, and has to access current traffic data. Anyone who accessed this information would essentially obtain a record of a person's movements, and of the activities associated with the destinations. If one knows that the user of the self-driving car visited a psychiatrist, or an abortion clinic, then one can also make an educated guess about the things the user is going through in their life.

Besides these personal concerns arising from location tracking, there are also commercial concerns. The company that tracks location data might use the location data of the car to infer personal information about the users, and use this personal information for marketing purposes. We already know that this is possible, since it often happens with mobile phone locations: if a mobile phone user visits a store that sells some product, Google might use this data to send personalized advertisements to the user. The same could happen with self-driving cars.
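To make the inference risk concrete, the hypothetical sketch below shows how little processing is needed to turn a plain destination log, of the kind a navigation system might store, into sensitive conclusions about the user. The categories, log format and inference rules are invented for illustration.

```python
# Hypothetical mapping from visited-place categories to sensitive inferences.
SENSITIVE_CATEGORIES = {
    "psychiatrist": "mental health treatment",
    "abortion clinic": "pregnancy-related care",
    "place of worship": "religious affiliation",
}

def inferred_facts(trip_log):
    """Return the sorted sensitive inferences that follow from a trip log."""
    return sorted(
        {SENSITIVE_CATEGORIES[trip["category"]]
         for trip in trip_log
         if trip["category"] in SENSITIVE_CATEGORIES}
    )

# An invented destination log, as a car's navigation history might record it.
log = [
    {"category": "supermarket",  "when": "2020-03-02"},
    {"category": "psychiatrist", "when": "2020-03-03"},
    {"category": "psychiatrist", "when": "2020-03-10"},
]

print(inferred_facts(log))  # ['mental health treatment']
```

Real-world inference is statistical rather than rule-based, but the point stands: the raw location record alone is enough to support such guesses, which is why access to it matters.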

According to Boeglin (2016), whether a vehicle is likely to impose on its passengers' privacy can largely be reduced to whether or not that vehicle is communicative. A communicative vehicle relays vehicle information to third parties or receives information from external sources; the more communicative a vehicle is, the more information it is likely to collect. Communicative vehicles could take a number of forms, so it is hard to gauge how severe the associated privacy risks will be. One kind of communicative self-driving car is one that exchanges data with other self-driving cars, which both cars can use for risk mitigation or crash avoidance. Wireless networks are particularly vulnerable, according to Boeglin (2016). When self-driving cars become more prevalent, they might also communicate with roads or road infrastructure (traffic lights or road sensors) to exchange data that makes both parties more effective. As a result, the traffic authority (e.g. the municipality) will also have access to the records of each self-driving car. Whether people will accept this remains to be seen, and not a lot of research has been done on this subject. One paper that does explore the general public's opinion on privacy in self-driving cars finds that a majority of people would want to opt out of identifiable data collection, and that secondary uses such as recognition, identification, and tracking of individuals were associated with low likelihood ratings and high discomfort (Bloom et al., 2017).

Not all self-driving cars currently in development are communicative, partly because the infrastructure to support such cars does not yet exist. Privacy risks for non-communicative cars are less prevalent, but not nonexistent. Location tracking will always be an issue, and uncommunicative self-driving cars will still rely heavily on sensory data to get to the desired destination. This sensory data might still be hacked, but hacking threatens privacy in almost any data-driven system; self-driving cars are hardly a special case in that regard.

It is largely unclear how users will react to the potential risks to their privacy, since this is a newly emerging technology, and issues such as safety, decision-making and autonomy are usually more pressing issues. We expect that people will not rate privacy as a large concern, and instead will be more concerned with the aforementioned issues. This is especially the case when talking about uncommunicative self-driving cars, which seem to be more prevalent than communicative cars in today's world. We also expect that people largely think of uncommunicative self-driving cars instead of communicative self-driving cars, since communicative cars are a step further into the future than uncommunicative self-driving cars. This probably lowers the perceived level of risk associated with privacy issues among users even more.

= Perspective of private end-user =

The potentially revolutionary change that self-driving cars could bring about would affect many areas of life. Apart from improving safety, efficiency and general mobility, it would change current infrastructure and the relationship between humans and machines (Silberg et al., 2012). This section focuses primarily on the user’s attitude towards self-driving cars, specifically perceived benefits and concerns.

According to the National Highway Traffic Safety Administration, cars are currently at ‘level 3 automation’, in which new cars have automated features but still require an alert driver to intervene when necessary. ‘Level 4 automation’ would mean that a driver is no longer permitted to intervene (Cox, 2016). Before this level can be reached, the general public would need to feel comfortable letting go of the steering wheel.

== General attitude ==

Research by König & Neumayr (2017) showed that older people are generally more worried about self-driving cars. They also showed that females are more concerned than males, and that rural citizens are less interested in self-driving cars than urban citizens (König & Neumayr, 2017). Surprisingly, people who used their car more often seemed less open to the idea of a self-driving car, possibly because the change would be too radical for them. Furthermore, the most common desire is the ability to manually take control of the car when wanted: it allows people to still enjoy the pleasures of manual driving without losing their sense of freedom (Rupp & King, 2010).

Another interesting finding by König & Neumayr (2017) was that both people who had no car and people who already had a car with more advanced automated features showed a more positive attitude towards self-driving cars, possibly because people without a car see them as an opportunity to take part in traffic, while people with advanced cars are more familiar with the technology (König & Neumayr, 2017). Lee et al. (2017) also found that people without a driver’s licence were more likely to use a self-driving car (Lee et al., 2017).

== Benefits and concerns ==

It is common knowledge that many car crashes are due to human error. The World Health Organization (2016) reported that road traffic injuries are the leading cause of death among people between the ages of 15 and 29 (World Health Organization, 2016). Raue et al. (2019) argue that removing human error from driving is one of the biggest potential benefits of self-driving cars. They also pose that driverless cars could potentially decrease congestion, increase mobility for non-drivers and create more efficient use of commuting time. In addition, there are environmental benefits: when vehicles no longer need to be built with tank-like safety, they are lighter and consume less fuel (Bamonte, 2013; Parida et al., 2018; Raue et al., 2019).

König & Neumayr (2017) used a survey to gauge people’s attitudes towards potential benefits and concerns. They found that people mostly value the fact that a self-driving car could solve the transport issues older and disabled people face. This is in accordance with Cox (2016) and Parida et al. (2018), who stated that the driverless car has the potential to expand opportunity and improve the lives of disabled people and others who are unable to drive (Cox, 2016; Parida et al., 2018). From the survey, König & Neumayr (2017) also found that people value being able to engage in things other than driving. Participants did not feel that self-driving cars would give them social recognition, nor that they would yield shorter travel times (König & Neumayr, 2017).

On the other hand, there are also some concerns indicated by König & Neumayr (2017). Their participants were mostly concerned with legal issues, followed by concerns about hackers. Lee et al. (2017) also found that older adults in particular are concerned about self-driving cars being more expensive. Surprisingly, across all sub-groups people did not trust the functioning of the technology (König & Neumayr, 2017; Raue et al., 2019).

== Sharing cars ==

While many people look favourably on the implementation of self-driving cars, fewer people are willing to buy one. Many people do not want to invest more money in a self-driving car than they currently do in a conventional car (Schoettle & Sivak, 2014). Therefore, a car-sharing scheme (e.g. a whole fleet provided by a mobility service company, or a ride-sharing scheme) is an option to make self-driving cars more popular. This way people would not have to spend a large sum of money, and they could gradually learn to trust the technology by first using the shared self-driving cars (König & Neumayr, 2017). According to Cox (2016), this is not necessarily true: since corporate mobility companies would then provide the cars, they have to cover the costs of, for example, vehicle operation, which will increase the fees for the user (Cox, 2016).

So, how would it work when automated vehicles are used as shared vehicles? Cox (2016) assumes that companies will provide cars the same way they do now, renting them out short-term or long-term. Especially in large metropolitan areas, automated vehicles could substantially shorten a trip or solve current transportation problems (Cox, 2016; Parida et al., 2018). While cars are being shared, private ownership would still be possible, and people would be able to rent out their own personal cars short-term.

One option for sharing cars is to let people share a single ride. This could decrease the number of cars in an urban area and address issues like congestion, pollution and the difficulty of finding a parking spot (Parida et al., 2018). However, there are certain issues with ridesharing. Because not every person starts and stops in the same place, trips could actually increase in duration, making ridesharing less attractive. Lowering the price of ridesharing might not even be enough to attract travellers. Ridesharing also raises another important question: do people want to share a car with strangers? As stated by Cox (2016), personal security concerns will probably only increase, and people may therefore be unwilling to share a ride with someone they do not know.

An important point is that vehicles are parked on average more than ninety percent of the time (Burgess, 2012). A driverless car fleet provided by a mobility company could reduce the number of cars in a metropolitan city, since the urban area is densely packed. However, these cars would not be attractive to users living in more rural areas, or to people who need to travel outside the urban area (Cox, 2016).

At present, many people use transit (e.g. train, metro, bus) in metropolitan areas, though this is not the fastest possible commute. Owen and Levinson (2014) found that many jobs can be reached by car in about half the time it takes by transit. This is mostly because of the “last mile” problem: many destinations are beyond walking distance of a transit stop (Owen & Levinson, 2014). Driverless cars stationed at transit stops could be used to overcome this “last mile” problem. However, a fleet of driverless cars can have two consequences for transit. On the one hand, it can draw users away from transit because of the improved travel times and door-to-door access. On the other hand, many transit riders have a low income and will probably not be able to pay for a driverless car alternative (Cox, 2016). Moreover, if the fares of driverless cars are too low, this might reduce the attractiveness of transit even further, causing people to use the driverless vehicle for the entire trip (Cox, 2016).

== Acceptance ==

Many studies have delved into technology acceptance across various domains, and many different ways to determine the acceptance of self-driving cars are mentioned. Lee et al. (2017) found that across all ages, perceived usefulness, affordability, social support, lifestyle fit and conceptual compatibility are significant determinants (Lee et al., 2017; Raue et al., 2019). Raue et al. (2019) found that people’s risk and benefit perceptions, as well as trust in the technology, relate to the acceptance of self-driving cars (Raue et al., 2019). According to Rogers (1995), to increase the probability of widespread adoption of an innovation, the following factors need to be taken into account: relative advantage, compatibility (a steering wheel with a disengage button), trialability (test drives), observability (car-sharing fleets), and complexity (a gradual introduction to automation) (König & Neumayr, 2017; Rogers, 1995).

As found by Lee et al. (2017), older adults are possibly not yet ready to let go of the steering wheel. They found that older generations have a lower overall interest and different behavioural intentions to use. However, people with more experience with technology seemed to be more accepting (Lee et al., 2017). Other supporting studies did find that older adults are more likely to accept new in-vehicle technologies (Son, Park, & Park, 2015; Yannis, Antoniou, Vardaki, & Kanellaidis, 2010). However, Lee et al. (2017) also found that across all ages, people would be more likely to use a self-driving car if they were no longer able to drive themselves due to aging or illness (Lee et al., 2017). As for the general public, Raue et al. (2019) looked into common psychological theories to assess people’s willingness to accept the self-driving car. They found that people who are familiar with actions or activities often perceive them to be less risky, and that people’s level of knowledge about a certain technology can affect how they understand its risks and benefits (Hengstler, Enkel, & Duelli, 2016; Raue et al., 2019). In that sense, affect is used as a decision heuristic (i.e. a mental shortcut) in which people rely on the positive or negative feelings associated with a risk (Visschers & Siegrist, 2018). Because negative emotions weigh more heavily than positive emotions, and people are more likely to recall a negative event, negative affect may lead people to judge self-driving cars to be of higher risk and lower benefit. This negative affect can be caused by anything, for example the loss of control from removing the steering wheel, or knowledge of accidents involving self-driving cars (Raue et al., 2019). Parida et al. (2018) stress the importance of public attitude and user acceptance of self-driving cars, as global market acceptance heavily relies on it (Parida et al., 2018).

= Method =

== Research design ==

For this questionnaire, a non-probability convenience sampling method was applied that leveraged the group’s broad networks. Even though convenience sampling means that the sample is not representative, it was a feasible way to reach the relevant audience and to collect relevant data as first evidence. As the questionnaire was aimed at the general public, there was no strict geographical scope, in order to reach as many different people as possible. This allows for first indications of drivers’ attitudes towards self-driving vehicles that are not tied to a specific region, although the survey was conducted in the Netherlands.


== Data collection ==

Data was collected over a one-week time frame in March 2021 through an online questionnaire built with Microsoft Forms (see appendix), a web-based survey tool. This method was chosen for several reasons: the assessed information was widely available among the public; due to Covid-19, an online approach made it easier to reach people while ensuring physical distancing; and by not requiring an interviewer to be present, it reduced both potential bias and cost and time. Microsoft Forms was used because it offers a safe environment and meets EU privacy standards.

Respondents were reached by sending out emails and private messages on social media (e.g. WhatsApp), including both a personalized invitational letter, explicitly stating self-driving vehicles as the topic of the research, and a direct link to the online questionnaire. A consent form was included on the cover page of the questionnaire, where respondents were assured of anonymity and confidentiality. Given the study’s exploratory nature, reaching a large number of respondents was prioritized, with a target of at least 100. Completed surveys were eventually received from 115 respondents.


== Measures ==

In the questionnaire, several relevant factors related to self-driving vehicles were examined. The main topics addressed in the questionnaire concerned general knowledge and were taken from our hypotheses, namely:

- Familiarity with self-driving vehicles

- Expected benefits of self-driving vehicles

- Concerns about different implementations of self-driving vehicles

- Favored ethical settings in self-driving vehicles

- Acceptance of legal responsibility in unavoidable crashes with self-driving vehicles


=== Personal car use and demographics ===

In the first part of the questionnaire, participants were asked whether they have a driver's license. Additionally, respondents were asked how often they drive a car, with the answering options ‘(almost) every day’, ‘weekly’, ‘monthly’, ‘annually’ and ‘never’. Furthermore, demographic questions regarding age and education were asked.

=== Familiarity with self-driving vehicles ===

Participants’ existing knowledge about self-driving vehicles was assessed. Respondents were presented with a set of rating questions using an even-numbered, four-point Likert scale ranging from ‘unfamiliar’ (1) to ‘familiar’ (4).

=== Expected benefits of self-driving vehicles ===

Participants were further asked to rate their agreement with statements reflecting presumed benefits of the use of self-driving vehicles. To allow for a ‘neutral’ opinion, the statements were combined with a 5-point scale ranging from ‘very unlikely’ (1) to ‘very likely’ (5). A 5-point Likert scale was used here because in forced-choice designs with a 4-point scale, choices can be contaminated by random guesses.

=== Concerns about different implementations of self-driving vehicles ===

After the expected benefits of self-driving vehicles, respondents were asked to rate their concerns with statements regarding self-driving vehicles by using a 4-point Likert scale ranging from ‘not concerned’ (1) to ‘very concerned’ (4).

=== Favored ethical settings in self-driving vehicles ===

The ethical setting in which participants would prefer to see self-driving vehicles on the road was assessed by having respondents rank 5 options from first to last choice. Furthermore, statements regarding ethical settings in self-driving vehicles were assessed with a 5-point Likert scale, to allow for a neutral opinion, ranging from ‘strongly disagree’ (1) to ‘strongly agree’ (5).

=== Acceptance of legal responsibility in unavoidable crashes with self-driving vehicles ===

Lastly, participants were asked to rate their agreement with statements about legal responsibility in unavoidable crashes with self-driving vehicles, again on a 5-point Likert scale, to allow for a neutral opinion, ranging from ‘very unlikely’ (1) to ‘very likely’ (5).


The full text of the questionnaire is included in the appendix.

= Results =

Completed surveys were received from 115 respondents. First, demographic questions were asked. Question 2, about the respondent's age, received 104 responses. Since this was an open question, appropriate intervals were constructed. The youngest person who filled in the survey is 17 years old and the oldest is 80 years old. Half of the respondents are between 17 and 30 years old, and the other half between 42 and 80 years old. There were no respondents between the ages of 30 and 42. Since this points to an obvious dichotomy, the following two intervals are used:

- 51.3% <31

- 39.1% >41

- 9.6% no answer
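The grouping above amounts to binning each reported age and counting missing answers separately. A minimal sketch of that calculation, using hypothetical ages rather than the actual survey data (which is in the appendix):

```python
def age_distribution(ages, total_respondents):
    """Share of respondents per age interval, in percent of all respondents.

    ages: list of ints for respondents who answered the open age question;
    respondents who left it blank are counted as 'no answer'.
    """
    young = sum(1 for a in ages if a < 31)
    old = sum(1 for a in ages if a > 41)
    missing = total_respondents - len(ages)
    pct = lambda n: round(100 * n / total_respondents, 1)
    return {"<31": pct(young), ">41": pct(old), "no answer": pct(missing)}

# Hypothetical example: 8 answered ages out of 10 respondents
print(age_distribution([17, 22, 25, 29, 42, 55, 60, 80], 10))
# → {'<31': 40.0, '>41': 40.0, 'no answer': 20.0}
```

Note that, as in the survey, the percentages are taken over all respondents, so the 'no answer' share is part of the total.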


Question 3, about the respondent's education, received 115 responses. 2.6% of the respondents have no education or incomplete primary education, 6.1% have a high school diploma, 7.8% are currently studying at MBO level, 32.2% are currently studying at HBO or WO level and do not have a diploma yet, 29.6% have an HBO or WO Bachelor diploma, 20.0% have an HBO or WO Master diploma and 1.7% have a PhD. Question 4, about having a driver's license, received 114 responses: 89.5% of the respondents have a driving license and 10.5% do not. Question 5, about the regularity of car use, received 114 responses. Of the respondents, 28.1% use their car (nearly) every day, 43.0% weekly, 24.5% monthly, 4.4% annually and 0.0% never.


Question 6, about how familiar respondents are with self-driving vehicles, received 114 responses: 22.8% of the respondents are unfamiliar with self-driving vehicles, 16.7% somewhat unfamiliar, 42.1% somewhat familiar and 18.4% familiar. Question 7, about the benefits of using a self-driving car, received 112 responses, in which 1 respondent did not respond to subquestions 1, 2 and 7, 1 respondent to subquestion 2, and 1 respondent to subquestions 3 up to 7. For all sub-questions of question 7, the possible answers are very unlikely, somewhat unlikely, no opinion/neutral, somewhat likely and very likely. The following percentages per answer occurred, as shown in figure 1 in the result section in the appendix:

- Fewer accidents: 0.9%, 14.0%, 4.4%, 54.4%, 26.3%

- Decreased severity of accidents: 8%, 19.5%, 16.8%, 45.1%, 10.6%

- Fewer traffic jams: 2.6%, 11.4%, 7%, 39.5%, 39.5%

- Shorter travel-length: 7.9%, 26.3%, 29.8%, 23.7%, 12.3%

- Lower vehicle emission: 1.8%, 12.3%, 19.3%, 36.0%, 30.7%

- Better fuel-savings: 0.9%, 7.9%, 6.1%, 38.6%, 46.5%

- Lower insurance rates: 5.3%, 13.3%, 26.5%, 37.2%, 17.7%
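Percentage breakdowns like the ones above can be derived from the raw answers per subquestion, excluding blank responses from the denominator. A minimal sketch with hypothetical responses (answer codes 1–5 corresponding to ‘very unlikely’ through ‘very likely’; `None` marks a blank):

```python
from collections import Counter

def likert_percentages(responses, scale=(1, 2, 3, 4, 5)):
    """Share of each scale point among non-blank responses, in percent."""
    valid = [r for r in responses if r is not None]
    counts = Counter(valid)
    return [round(100 * counts[s] / len(valid), 1) for s in scale]

# Hypothetical subquestion with one blank answer (9 valid responses)
answers = [5, 4, 4, 3, 2, 4, 5, None, 4, 1]
print(likert_percentages(answers))  # → [11.1, 11.1, 11.1, 44.4, 22.2]
```

Because each scale point is rounded independently, the reported percentages may not sum to exactly 100, which also happens in the tables above.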


In question 8, about the concerns related to self-driving vehicles, the first 39 responses were omitted and in total 76 valid responses were received. 1 respondent did not respond to subquestion 3, 1 respondent to subquestion 4, 1 respondent to subquestion 6, 3 respondents to subquestion 12, 1 respondent to subquestions 2 up to 6, 8 up to 10 and 12, 1 respondent to subquestions 1 up to 12 and 1 respondent to subquestions 2 up to 12. For all the subquestions of question 8, the possible answers are not concerned, slightly concerned, concerned and very concerned. The following percentages per answer occurred, as shown in figure 2 in the appendix:

- Driving in a vehicle with autonomous technology: 32.0%, 48.0%, 14.7%, 5.3%

- Safety-consequences of device-malfunction or system failure: 9.6%, 43.8%, 41.5%, 15.1%

- Legal liability for drivers/owners: 13.9%, 43.1%, 36.1%, 6.9%

- System security (against hackers): 12.5%, 33.3%, 40.3%, 13.9%

- Data privacy (location and destination): 31.5%, 36.9%, 17.8%, 13.7%

- Interaction with non-self driving vehicles: 26.4%, 22.2%, 38.9%, 12.5%

- Interaction with pedestrians or cyclists: 13.5%, 39.2%, 33.8%, 13.5%

- Learning to use self-driving cars: 64.4%, 24.6%, 9.6%, 1.4%

- System performance under bad weather conditions: 36.1%, 50.0%, 9.7%, 4.2%

- Confused self-driving vehicles in unpredictable conditions: 8.2%, 43.9%, 35.6%, 12.3%

- Driving ability of self-driving vehicles compared to humans: 46.0%, 39.2%, 13.5%, 1.3%

- Driving in a vehicle without a human able to intervene: 12.9%, 15.7%, 44.3%, 27.1%


Question 9, about the ethical settings preferred in self-driving vehicles on the road, received 115 responses. This question asked respondents to rank 5 different options by decreasing preference, from favorite to least favorite, as shown in figure 3 in the appendix. The percentages below run from favorite to least favorite:

- Option 1, the car should always choose to do the least amount of damage to the least amount of people and to minimize overall harm: 59.1%, 26.1%, 10.4%, 2.6%, 1.7%.

- Option 2, the car should not be allowed to make an explicit choice between human lives and therefore should not be able to intervene in the case of an unavoidable accident, resulting in a random victim: 16.5%, 25.2%, 13.0%, 24.3%, 20.9%.

- Option 3, the car should always prioritize the lives and health of the passengers above those of bystanders: 13.0%, 15.7%, 29.6%, 27.8%, 13.9%.

- Option 4, the car should always prioritize the lives and health of the bystanders above those of the passengers: 7.0%, 23.5%, 24.3%, 27.8%, 17.4%.

- Option 5, the choice that the car makes should be based on what the majority of road-users want: 4.3%, 9.6%, 22.6%, 17.4%, 46.1%.

Question 10, about the different issues regarding implementing ethical settings in self-driving vehicles, received 112 responses. 1 respondent did not respond to subquestions 2 and 3, and 2 respondents did not respond to subquestion 3. The possible answers are strongly disagree, disagree, no opinion/neutral, agree and strongly agree, as shown in figure 4 in the appendix:

- Self-driving vehicles must not be sold with an ethical setting that the user can adjust; instead, this setting must be determined by, for example, the government or the manufacturer: 0.9%, 7.1%, 13.3%, 33.6%, 45.1%.

- I would rather buy a self-driving car with an adjustable ethical setting than a self-driving car with an unadjustable ethical setting: 30.4%, 33.0%, 18.8%, 14.3%, 3.6%.

- When I use a self-driving car, the specific ethical setting would be important to me: 8.1%, 13.5%, 23.4%, 37.8%, 16.2%.


Question 11, about liability and responsibility when crashing with self-driving cars, received 110 responses. 2 respondents did not respond to subquestion 2, 1 respondent to subquestion 4, 1 respondent to subquestion 5 and 1 person to subquestions 2, 3, 4 and 5. Blank answers are not included in the following percentages. For all subquestions in question 11, the possible answers are very unlikely, somewhat unlikely, no opinion/neutral, somewhat likely and very likely. The following percentages per answer occurred, as shown in figure 5 in the appendix:

- Manufacturers are fully liable when a self-driving car causes an accident, even if this discourages them from producing self-driving cars: 2.6%, 26.1%, 8.7%, 38.3%, 24.3%.

- Manufacturers are partially liable, in order to make them produce self-driving cars while encouraging them to correct errors: 2.7%, 9.7%, 13.3%, 52.2%, 22.1%.

- I am liable when my self-driving car causes an accident, even if I cannot intervene (fully autonomous vehicle): 49.1%, 27.2%, 9.6%, 11.4%, 2.6%.

- I am liable when my self-driving car causes an accident, because I have the possibility to intervene (semi-autonomous vehicle): 2.7%, 9.7%, 12.4%, 46.0%, 29.2%.

- Everyone with a self-driving car is liable when a self-driving car causes an accident, by means of mandatory insurance or tax: 3.6%, 17.0%, 32.1%, 28.6%, 18.8%.


When the answers are grouped by age, as mentioned in question 8, 46.43% of respondents below 31 are not concerned about driving in a self-driving vehicle, while only 23.08% of those above 41 are not concerned. This is shown in table 1 in the results appendix. Also, only 21.43% of respondents below 31, against 42.10% of those above 41, are concerned or very concerned about data privacy, as shown in table 2 in the appendix. When the answers are grouped by car usage, 26.92% of the respondents who use a car every day, but only 16.67% of those who use a car monthly, are concerned or very concerned about driving with autonomous technology. This is shown in table 3 in the appendix. Additionally, as shown in table 4 in the appendix, 43.75% of the respondents who use a car every day, but only 21.88% of those who use a car monthly, strongly disagree with an adjustable ethical setting in self-driving cars. Furthermore, 88.09% of the respondents who rate themselves somewhat familiar or familiar with self-driving vehicles, but only 33.97% of those who rate themselves somewhat unfamiliar or unfamiliar, are not concerned about driving in self-driving cars, as shown in table 5 in the appendix.
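These group comparisons are simple cross-tabulations: filter the respondents on one attribute and compute the share giving a particular answer. A minimal sketch with hypothetical records (the field names and values are illustrative, not the actual survey coding):

```python
def share_with_answer(records, group_key, group_value, answer_key, answers):
    """Percent of respondents in one group whose answer is in `answers`."""
    group = [r for r in records if r[group_key] == group_value]
    hits = sum(1 for r in group if r[answer_key] in answers)
    return round(100 * hits / len(group), 2)

# Hypothetical respondents, grouped by the two age intervals
records = [
    {"age_group": "<31", "driving_concern": "not concerned"},
    {"age_group": "<31", "driving_concern": "slightly concerned"},
    {"age_group": ">41", "driving_concern": "concerned"},
    {"age_group": ">41", "driving_concern": "not concerned"},
]
print(share_with_answer(records, "age_group", "<31",
                        "driving_concern", {"not concerned"}))  # → 50.0
```

Passing a set such as `{"concerned", "very concerned"}` as `answers` reproduces the combined categories used in the tables.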

= Discussion and conclusion =

Overall, we can conclude that people believe in the advantages and have a positive attitude towards self-driving vehicles, especially when they are more familiar with them, which is in line with the literature. Respondents who drive more often are more concerned about driving with autonomous technology and are less open to the benefits, a finding supported by König & Neumayr (2017).

It can also be concluded that self-driving vehicles will be much more widely accepted when the option to intervene is implemented, which is in line with findings from Rupp & King (2010), who stated that people do not want to lose their sense of freedom. Additionally, system security has to be reliable for the acceptance of self-driving vehicles: although many people see the advantage of fewer accidents, a security breach is feared. Also, more than half of the respondents are concerned about self-driving vehicles getting confused in unpredictable situations. In the future, this aspect will possibly improve with developments in artificial intelligence.

Older people are more concerned about driving in a self-driving vehicle and about the driving ability of the self-driving car compared to human driving ability, which is in line with other surveys pointing out that the elderly see fewer advantages than younger people (Lee et al., 2017). This might be because older people are more used to conventional cars and less used to automated cars, artificial intelligence and technology in general. Younger people are also less worried about data privacy. Still, more than 50% of all respondents are worried or very worried about data privacy, which does not support our hypothesis that people would express little concern for privacy. Existing privacy laws should be adapted to this new technology, and new laws should be implemented to increase the acceptance of self-driving cars. When comparing education levels, higher-educated people are less concerned about driving in a self-driving car, about legal liability, and about the driving ability of the self-driving car compared to human driving ability. Lower-educated people are less worried about the interaction with conventional cars and with pedestrians or cyclists than higher-educated people.

As for ethical settings, respondents prefer self-driving cars to always choose to do the least amount of damage to the least amount of people and to minimize overall harm, a setting that corresponds to utilitarianism. This corresponds with results from similar surveys on ethical settings in self-driving cars from the perspective of the private end-user, such as Bonnefon et al. (2016). In that survey, 76% of respondents (n=2000) preferred self-driving cars to sacrifice one passenger rather than kill ten bystanders, which shows a clear preference for a utilitarian setting. However, similar literature also concludes that people prefer to buy cars that give preferential treatment to themselves (Bonnefon et al., 2016; Nyholm, 2018).

Respondents ranked the contractualist setting, where the choice the car makes is based on what the majority of road users want, the lowest. There is no specific preference among the other three settings. The deontological setting, in which the car does not make a choice between human lives and the victim is random, is both the second most and the second least preferred setting. Since people do not like non-human entities such as technology making life-or-death decisions, respondents probably valued this setting because randomness has an element of fairness associated with it.

There was no clear preference for the ethical setting where the car would always prioritize the life and health of bystanders over that of the occupants (virtue ethics), nor for always prioritizing the life and health of the occupants over that of bystanders (egoism). Respondents wanted the ethical setting to be set by the manufacturer, but they do find the specific setting important. By giving the responsibility to the manufacturer, all self-driving vehicles would be programmed with the same ethical setting, which is safer on the road. Other research also shows that people do not want an ‘ethical knob’ (Li et al., 2016). Moreover, 78.7% of the respondents agree that self-driving vehicles must not be sold with an ethical setting that the user can adjust, and respondents who drive more often (strongly) disagree with an adjustable ethical setting.

Furthermore, 74.3% of the respondents would use a self-driving car when manufacturers are partially liable, in order to make them produce self-driving cars while encouraging them to correct errors. However, respondents could be biased by the formulation of the answer: this answer includes the word ‘encourage’, which is positively formulated, while the answer with full liability includes the word ‘discourage’, which is negatively formulated. 75.2% of the respondents would use a self-driving car when they are liable while they can intervene, which is the highest response. Again, the implementation of the option to intervene can be advised. Respondents who drive more often are less likely to use autonomous vehicles if they are liable themselves.

The overall discussion about self-driving vehicles, however, should come to an end. The technology of self-driving vehicles is almost mature, and the cars should be implemented as soon as possible for safety on the road. The minimal damage done by self-driving vehicles is calculated based on what the technology can currently do. Developments in artificial intelligence are at the moment not advanced enough to distinguish between an 80-year-old man and a 5-year-old child; that level of specificity cannot be built and programmed into an artificial intelligence system. Helmet or no helmet, jeans or motorcycle clothing: it is not recognizable for AI. And how often does a driver in a conventional car really have to choose between an 80-year-old man and a 5-year-old child? People do not even know what they would decide themselves, so artificial intelligence does not know either. These examples are so hypothetical that the chance they occur is negligible. Human errors are removed by introducing self-driving vehicles, so it is a good innovation. Why, then, are people so focused on these ethical issues that hinder its acceptance? These ethical dilemmas greatly inhibit the innovation of self-driving vehicles; the discussion has been going on for more than thirty years now. The best option is to implement a random choice in accidents, or to save the driver so the car takes as little damage as possible: this is the easiest to introduce, and it is how people drive themselves now. Also, the implementation of self-driving vehicles should happen uniformly, because a mix of self-driving vehicles and conventional cars on the road is problematic.

== Survey limitations ==

The major part of our respondents are highly educated. We have too few lower-educated respondents to compare between levels of education; we can only compare with certainty between age groups, frequency of car use and familiarity with self-driving vehicles. Comparisons between levels of education are made, but these are not very reliable. The absence of respondents between 31 and 40 years old can have a negative effect on the accuracy of our survey, because the opinion of this age group might differ from that of the other two. Smaller and more specific age groups might also have worked better if there had been more respondents; currently, the elderly are in the same category as people in their forties, for example. Also, just 10.5% of the respondents did not have a driver’s license, which is very few. People without a driver’s license may have different opinions, because the use of cars will become more accessible for them; this group is not represented well enough. 22.8% of the respondents indicated they are unfamiliar with self-driving vehicles. If they picked this answer appropriately, it means that they have never heard of self-driving vehicles and are therefore unaware of the possible advantages or disadvantages, or of what a self-driving car is. This is almost a quarter of the respondents, and they might have given somewhat random answers. No one responded that they never use a car, and few use one on a yearly basis, which is why these answers were grouped together with ‘monthly’.

A few issues could have had an impact on the outcome of our survey. A bias in the results could be due to participant bias (the tendency of people to give the answer that is socially desirable, or possibly desirable to the experimenter). Although the survey was anonymous, this could influence the answers; especially for the ethical settings, people could feel obliged to choose the answer that is most accepted and politically correct. Additionally, some questions were not answered by some respondents. These blank answers were treated as if they did not exist. This could have been avoided by making the questions compulsory, so that the form can only be submitted when every question is answered. Percentages were calculated from the number of answers to a specific question, not from the total number of respondents; differences in the total number of answers can negatively affect the accuracy of the research. Also, there are great differences in the completion time of the survey, ranging from 2 minutes and 10 seconds to 14 minutes and 45 seconds. Two minutes is too short to fill in the survey seriously, so some questions may have been answered carelessly. There is also a major problem with survey question 8: when the form was first opened, question 8 included five possible answers: ‘not worried’, ‘somewhat worried’, ‘neutral’, ‘worried’ and ‘very worried’. After 39 respondents had already submitted the form, the ‘neutral’ option was removed from the list of possible answers because it did not fit this scale: ‘neutral’ can be interpreted as ‘not worried’ and does not fit between ‘somewhat worried’ and ‘worried’. The answers of these first 39 respondents were omitted from the results, because this confusing scale may have negatively affected the accuracy of their answers.

Future work

Future research might repeat this same set-up, but with properly sized subgroups: we lacked a substantial number of older adults, lower-educated people, and people without a driver's license. It could also delve further into the opinions of other stakeholders, such as manufacturers or the government. It would also be valuable to conduct this research on a large scale with government support, so that it might inform and speed up the introduction and acceptance of self-driving cars. Moreover, most research on this topic, including our own, is exploratory in nature, mainly because the technology is quite new and not yet readily available; now that self-driving technology is increasingly becoming a reality, it is time for large-scale, non-exploratory research. Almost none of the references listed give concrete recommendations for implementing a specific ethical theory, or for promoting acceptance. If the technology is to be accepted by the broader public, such recommendations, argued from an academic perspective, would be a step, or even a big leap, in the right direction.

References

Abraham, H., Lee, C., Brady, S., Fitzgerald, C., Mehler, B., Reimer, B., & Coughlin, J. F. (2016). Autonomous vehicles, trust, and driving alternatives: A survey of consumer preferences. MIT AgeLab. Retrieved from https://bestride.com/wp-content/uploads/2016/05/MIT-NEMPA-White-Paper-2016-05-30-final.pdf

Adee, S. (2016, September 21). Germany to create world's first highway code for driverless cars. Newscientist. https://www.newscientist.com/article/mg23130923-200-germany-to-create-worlds-first-highway-code-for-driverless-cars/

Amoozadeh, M., Raghuramu, A., Chuah, C. N., Ghosal, D., Zhang, H. M., Rowe, J., & Levitt, K. (2015). Security vulnerabilities of connected vehicle streams and their impact on cooperative driving. IEEE Communications Magazine, 53(6), 126–132. https://doi.org/10.1109/mcom.2015.7120028

Bamonte, T. J. (2013). Autonomous Vehicles - Drivers for Change. Retrieved March 23, 2021, from https://www.roadsbridges.com/sites/rb/files/05_autonomous vehicles.pdf

Boeglin, J. (2015). The costs of self-driving cars: reconciling freedom and privacy with tort liability in autonomous vehicle regulation. Yale JL & Tech., 17, 171.

Bonnefon, J.F., Shariff, A. & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573–1576. https://doi.org/10.1126/science.aaf2654

Bloom, C., Tan, J., Ramjohn, J., & Bauer, L. (2017). Self-driving cars and data collection: Privacy perceptions of networked autonomous vehicles. In Thirteenth Symposium on Usable Privacy and Security ({SOUPS} 2017) (pp. 357-375).

Burgess, S. (2012, June 23). Parking: It’s What Your Car Does 90 Percent of the Time. Autoblog. Retrieved from https://www.autoblog.com/2012/06/23/parking-its-what-your-car-does-90-percent-of-the-time/?guccounter=1

Cox, W. (2016). Driverless Cars and the City: Sharing Cars, Not Rides. Cityscape: A Journal of Policy Development and Research, 18(3). Retrieved from http://www.newgeography.com/content/003899-plan-bay-area-telling-people-what-do

Douma, F., & Palodichuk, S. A. (2012). Criminal Liability Issues Created by Autonomous Vehicles. Santa Clara Law Review, 52(4), 1157–1169. Retrieved from http://digitalcommons.law.scu.edu/lawreview

Driver, J. (2014). The History of Utilitarianism (Stanford Encyclopedia of Philosophy). Retrieved April 7, 2021, from https://plato.stanford.edu/entries/utilitarianism-history/

Duranton, G. (2016). Transitioning to Driverless Cars. Cityscape, 18(3), 193-196. Retrieved February 7, 2021, from http://www.jstor.org/stable/26328282

Elbanhawi, M., Simic, M., & Jazar, R. (2015). In the passenger seat: investigating ride comfort measures in autonomous cars. IEEE Intelligent transportation systems magazine, 7(3), 4-17.

Hartwich, F., Beggiato, M., & Krems, J. F. (2018). Driving comfort, enjoyment and acceptance of automated driving – effects of drivers’ age and driving style familiarity. Ergonomics, 61(8), 1017–1032. https://doi.org/10.1080/00140139.2018.1441448

Hengstler, M., Enkel, E., & Duelli, S. (2016). Applied artificial intelligence and trust-The case of autonomous vehicles and medical assistance devices. Technological Forecasting and Social Change, 105, 105–120. https://doi.org/10.1016/j.techfore.2015.12.014

Hevelke, A., & Nida-Rümelin, J. (2015). Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis. Science and Engineering Ethics, 21(3), 619–630. https://doi.org/10.1007/s11948-014-9565-5

Howard, D. (2013). Robots on the Road: The Moral Imperative of the Driverless Car. Retrieved March 7, 2021, from Science Matters website: http://donhoward-blog.nd.edu/2013/11/07/robots-on-the-road-the-moral-imperative-of-the-driverless-car/#.U1oq-1ffKZ1

Husak, D. (2004). Vehicles and Crashes: Why is this Moral Issue Overlooked? Social Theory and Practice, 30(3), 351–370. Retrieved from https://www.jstor.org/stable/23562447?seq=1

Jiang, J. J., Muhanna, W. A., & Klein, G. (2000). User resistance and strategies for promoting acceptance across system types. Information & Management, 37(1), 25–36.

König, M., & Neumayr, L. (2017). Users’ resistance towards radical innovations: The case of the self-driving car. Transportation Research Part F: Traffic Psychology and Behaviour, 44, 42–52. doi:10.1016/j.trf.2016.10.013

Lee, J. D. & See, K. A. (2004). Trust in Automation: Designing for Appropriate Reliance. Human Factors: The Journal of the Human Factors and Ergonomics Society, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392

Lee, C., Ward, C., Raue, M., D’Ambrosio, L., & Coughlin, J. F. (2017). Age differences in acceptance of self-driving cars: A survey of perceptions and attitudes. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 10297 LNCS, 3–13. https://doi.org/10.1007/978-3-319-58530-7_1

Li, J., Zhao, X., Cho, M., Ju, W., & Malle, B. (2016). From trolley to autonomous vehicle: Perceptions of responsibility and moral norms in traffic incidents with self-driving. Society of Automotive Engineers World Congress.

Lin, P. (2016). Why ethics matters for autonomous cars. In M. Maurer, J. C. Gerdes, B. Lenz, & H. Winner (Eds.), Autonomous Driving: Technical, Legal and Social Aspects. Springer. https://doi.org/10.1007/978-3-662-48847-8_4

Liu, P., Wang, L., & Vincent, C. (2020). Self-driving vehicles against human drivers: Equal safety is far from enough. Journal of Experimental Psychology: Applied, 26(4), 692–704.

Nagel, T. (1982). Moral Luck. Oxford University Press.

Nyholm, S. & Smids, J. (2016). The Ethics of Accident-Algorithms for Self-Driving Cars: an Applied Trolley Problem? Ethical Theory and Moral Practice, 1275–1289.

Nyholm, S. (2018). The ethics of crashes with self‐driving cars: A roadmap, II. Philosophy Compass. 13:e12506. https://doi.org/10.1111/phc3.12506

Marchant, G. E., & Lindor, R. A. (2012). The coming collision between autonomous vehicles and the liability system. Santa Clara Law Review, 52(4). Retrieved from http://digitalcommons.law.scu.edu/lawreview

McManus, R., & Rutchick, A. (2018). Autonomous vehicles and the attribution of moral responsibility. Social Psychological and Personality Science, 1–8.

Millar, J. (2014). Technology as moral proxy: Autonomy and paternalism by design. IEEE Ethics in Engineering, Science and Technology Proceedings, IEEE Explore. Online Resource, Doi: https://doi.org/10.1109/ETHICS.2014.6893388

Millard-Ball, A. (2016). Pedestrians, Autonomous Vehicles, and Cities. Journal of Planning Education and Research, 38(1), 6–12. https://doi.org/10.1177/0739456x16675674

Minch, R. P. (2004). Privacy issues in location-aware mobile devices. Proceedings of the 37th Hawaii International Conference on System Sciences. https://doi.org/10.1109/HICSS.2004.1265320

Mobility, public transport and road safety. (n.d.). Government of the Netherlands. Retrieved from https://www.government.nl/topics/mobility-public-transport-and-road-safety/self-driving-vehicles

Oliveira, L., Proctor, K., Burns, C. G., & Birrell, S. (2019). Driving Style: How Should an Automated Vehicle Behave? Information, 10(6), 219. MDPI AG. Retrieved from http://dx.doi.org/10.3390/info1006021

Owen, A., & Levinson, D. (2014). Access Across America: Transit 2014, Final Report. Minneapolis, MN.

Parida, S., Franz, M., Abanteriba, S., & Mallavarapu, S. (2018). Autonomous Driving Cars: Future Prospects, Obstacles, User Acceptance and Public Opinion. Advances in Intelligent Systems and Computing, 786, 318–328. https://doi.org/10.1007/978-3-319-93885-1_29

Parkinson, S., Ward, P., Wilson, K. & Miller, J. (2017). "Cyber Threats Facing Autonomous and Connected Vehicles: Future Challenges." IEEE Transactions on Intelligent Transportation Systems, 18(11), pp. 2898-2915. doi: 10.1109/TITS.2017.2665968.

Pöllänen, E., Read, G. J. M., Lane, B. R., Thompson, J., & Salmon, P. M. (2020). Who is to blame for crashes involving autonomous vehicles? Exploring blame attribution across the road transport system. Ergonomics, 63(5), 525–537. https://doi.org/10.1080/00140139.2020.1744064

Raue, M., D’Ambrosio, L. A., Ward, C., Lee, C., Jacquillat, C., & Coughlin, J. F. (2019). The Influence of Feelings While Driving Regular Cars on the Perception and Acceptance of Self-Driving Cars. Risk Analysis, 39(2), 358–374. https://doi.org/10.1111/risa.13267

Rogers, E. M. (1995). Diffusion of Innovations (4th ed.). Retrieved from https://books.google.nl/books?hl=nl&lr=&id=v1ii4QsB7jIC&oi=fnd&pg=PR15&dq=Rogers,+E.+M.+(1995).+Diffusion+of+innovations.+New+York.&ots=DMTurPTs7S&sig=gXeTkHXQsnxXXpy5dprofoJMhRQ#v=onepage&q=Rogers%2C

Rupp, J. D., & King, A. G. (2010). Autonomous Driving - A Practical Roadmap.

Sandberg, A., & Bradshaw-Martin, H. (2013). What do cars think of trolley problems: Ethics for autonomous cars? In J. Romportl et al. (Eds.), Beyond AI: Artificial Golem Intelligence, Conference Proceedings. https://www.beyondai.zcu.cz/files/BAI2013_proceedings.pdf

Sarcinelli, R., Guidolini, R., Cardoso, V., Paixão, T., Berriel, R., Azevedo, P., De Souza, A., Badue, C., & Oliveira-Santos, T. (2019). Handling pedestrians in self-driving cars using image tracking and alternative path generation with Frenét frames. Computers & Graphics, 84, 173–184. https://doi.org/10.1016/j.cag.2019.08.004

Schoettle, B. & Sivak, M. (2014). A Survey of Public Opinion about Autonomous and Self-Driving Vehicles in the U.S., the U.K., and Australia. Michigan: The University of Michigan. https://deepblue.lib.umich.edu/bitstream/handle/2027.42/108384/103024.pdf?sequence=1&isAllowed=y

Schoettle, B., & Sivak, M. (2014). Public opinion about self-driving vehicles in China, India, Japan, the U.S. and Australia. Retrieved from http://www.umich.edu/~umtriswt

Shladover, S. (2016). THE TRUTH ABOUT “SELF-DRIVING” CARS. Scientific American, 314(6), 52-57. doi:10.2307/26046990

Silberg, G., Wallace, R. ., Matuszak, G., Plessers, J., Brower, C., & Subramanian, D. (2012). Self-driving cars: The next revolution. KPMG LLP & Center of Automotive Research.

Son, J., Park, M., & Park, B. B. (2015). The effect of age, gender and roadway environment on the acceptance and effectiveness of Advanced Driver Assistance Systems. Transportation Research Part F: Traffic Psychology and Behaviour, 31, 12–24. https://doi.org/10.1016/j.trf.2015.03.009

Steg, L. (2005). Car use: Lust and must. Instrumental, symbolic and affective motives for car use. Transportation Research part A: Policy and Practice, 39(2), 147-162

Straub, J., McMillan, J., Yaniero, B., Schumacher, M., Almosalami, A., Boatey, K., & Hartman, J. (2017). CyberSecurity considerations for an interconnected self-driving car system of systems. 2017 12th System of Systems Engineering Conference, SoSE 2017. https://doi.org/10.1109/SYSOSE.2017.7994973

Teoh, E. R. & Kidd, D. G. (2017). Rage against the machine? Google’s self-driving cars versus human drivers. Journal of Safety Research, 63, 57–60. https://doi.org/10.1016/j.jsr.2017.08.008

Visschers, V. H. M., & Siegrist, M. (2018). Differences in risk perception between hazards and between individuals. In Psychological Perspectives on Risk and Risk Analysis: Theory, Models, and Applications (pp. 63–80). https://doi.org/10.1007/978-3-319-92478-6_3

Wagner M. & Koopman P. (2015) A Philosophy for Developing Trust in Self-driving Cars. In: Meyer G., Beiker S. (eds) Road Vehicle Automation 2. Lecture Notes in Mobility. Springer, Cham. https://doi.org/10.1007/978-3-319-19078-5_14

Wakabayashi, D. (2018). Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam. The New York Times, Technology. https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html

World Health Organization. (2016). Road traffic injuries.

Yannis, G., Antoniou, C., Vardaki, S., & Kanellaidis, G. (2010). Older Drivers’ Perception and Acceptance of In-Vehicle Devices for Traffic Safety and Traffic Efficiency. Journal of Transportation Engineering, 136(5), 472–479. https://doi.org/10.1061/(ASCE)TE.1943-5436.0000063

Appendix

Survey: Acceptance of fully self-driving cars (translated from Dutch)

Consent Form

Informed consent for research participation in the study 'Acceptance of fully self-driving cars'. This document gives you information about the study 'Acceptance of fully self-driving cars'.

Before the experiment begins, it is important that you take note of the procedure followed in this experiment and that you consent to voluntary participation. Please read this document carefully.

Purpose and benefit of the experiment

The purpose of this study is to measure which relevant factors contribute to the acceptance of the fully self-driving car for private use. The study is conducted by the students Laura Smulders, Sam Blauwhof, Joris van Aalst, Roel van Gool and Roxane Wijnen of Eindhoven University of Technology, under the supervision of dr. ir. M.J.G. van de Molengraft.

Procedure

You fill in this survey online via your web browser. In this survey you will be asked a number of questions about the following relevant factors: user perspective, safety, ethical setting and responsibility. Some additional demographic questions will also be asked.

Duration

The survey takes approximately 5-10 minutes.

Voluntary participation

Your participation is entirely voluntary. You may refuse to take part in the study without giving reasons, and you may stop your participation at any moment by closing the browser. You may also refuse afterwards (within 24 hours) to let your data be used for the study. None of this will ever have adverse consequences for you.

Confidentiality

We do not share any personal information about you with people outside the research team. The information we collect with this research project is used for writing scientific publications and is reported at group level only. Everything is done completely anonymously and nothing can be traced back to you. Only the researchers know your identity, and that information is stored carefully under lock and key.

Further information

If you would like more information about this study, or in case of any complaints, you can contact Roel van Gool (roel.vangool@gmail.com).

Consent to research participation

By clicking 'Next' below, you indicate that you have understood this document and the procedure, and that you consent to voluntarily participate in this study by the above-mentioned students of Eindhoven University of Technology.


Demographics

What is your age? (Open question)

What is your gender? (Male, Female, Other)


What is your highest completed level of education?

- No education / incomplete primary education

- Secondary school diploma

- Secondary vocational education (MBO)

- Higher professional or university education without a degree (HBO/WO)

- Bachelor's degree (HBO/WO)

- Master's degree (HBO/WO)

- Doctorate, PhD


Do you have a driver's license for a car? (Yes/No)

How often do you use a car?

- (Almost) every day

- Weekly

- Monthly

- Yearly

- Never


User perspective

How familiar are you with self-driving cars?

Unfamiliar, Somewhat unfamiliar, Somewhat familiar, Familiar


How likely do you think it is that the following benefits will occur with the use of fully self-driving cars?

Very unlikely, Somewhat unlikely, No opinion/neutral, Somewhat likely, Very likely

- Fewer accidents

- Reduced severity of accidents

- Less traffic congestion

- Shorter travel times

- Lower vehicle emissions

- Better fuel economy

- Lower insurance rates


Safety

How concerned are you about the following issues related to fully self-driving vehicles?

(Not concerned, Somewhat concerned, Neutral, Concerned, Very concerned)

- Riding in a vehicle with self-driving technology

- Safety consequences of equipment or system failure

- Legal liability of drivers/owners in case of an accident

- System security (against hackers)

- Data privacy (location and destination)

- Interaction with non-self-driving vehicles

- Interaction with pedestrians and cyclists

- Learning to use self-driving vehicles

- System performance in bad weather

- Self-driving vehicles getting confused by unpredictable situations

- Driving ability of the self-driving vehicle compared to human driving ability

- Riding in a vehicle without the driver being able to intervene


Ethical setting

In some unavoidable accidents, the car will have to choose between different human lives, for example between those of the driver and pedestrians. It would even be possible to add a setting to a self-driving car that determines which choice the car should make in the event of an accident. The questions below concern this ethical setting.

Which ethical setting would you most like to see in self-driving cars on the road? Rank from favourite (1) to least favourite (5):

1. The car should always prioritize the life and health of the occupant(s) over those of bystander(s).

2. The car should always prioritize the life and health of the bystander(s) over those of the occupant(s).

3. The car should always choose to inflict the least amount of harm on the fewest people, whether they are bystanders or occupants.

4. The car should not make an explicit choice between human lives, and should therefore not intervene in an unavoidable accident. As a consequence, the victim is effectively random.

5. The choice the car makes should be based on what the majority of road users want.


Note to ourselves: from top to bottom, these options represent the following ethical theories: egoism, virtue ethics (roughly), utilitarianism, deontology, contractualism.


To what extent do you agree with the following statements:

Strongly disagree, Disagree, No opinion/neutral, Agree, Strongly agree

- Self-driving cars should not be sold with a user-adjustable ethical setting. Instead, this setting should be determined by, for example, the government or the manufacturer.

- I would rather buy a self-driving car with an adjustable ethical setting than a self-driving car with a fixed ethical setting.

- If I were to use a self-driving car, the specific ethical setting would matter to me.


Responsibility

To what extent would you use the fully self-driving vehicle in the following scenarios?

Very unlikely, Somewhat unlikely, No opinion/neutral, Somewhat likely, Very likely

1. Manufacturers are fully liable if a self-driving car causes an accident, even if this discourages them from producing self-driving cars.

2. Manufacturers are partially liable, so that they keep producing but are always encouraged to correct their mistakes.

3. I am liable myself if my self-driving car causes an accident, even though I cannot intervene myself (fully autonomous car).

4. I am liable myself if my self-driving car causes an accident, because I have the possibility to intervene (semi-autonomous car).

5. Everyone who owns a self-driving car is liable if a self-driving car causes an accident, by means of a compulsory insurance or tax.


For the full questionnaire, click the link: https://forms.office.com/Pages/DesignPage.aspx?fragment=FormId%3DR_J9zM5gD0qddXBM9g78ZIQEJ0K6qk1Epl7wQE_GwFJUQzRFUEg3RFVEMFVFVDY4NFVMQVJaRUgxQi4u%26Token%3D7a2197128d054f1d9d81e3056e2eafde

Results

In this appendix only the tables that have the most relevant and obvious results are listed for clarity. For the full list of tables grouped per demographic subcategory, see: https://docs.google.com/document/d/1NU_mgnyudpMwVYj0RVxQWUABamYle2OQiNXvVHWjn-g/edit?usp=sharing


Table 1:

Concern.png


Table 2:

Data.png


Table 3:

Car usage.png


Table 4:

Adjustable setting.png


Table 5:

Familiar.png


Figure 1:

Screenshot 10.png


Figure 2:

Screenshot 6.png


Figure 3:

Screenshot 7.png

Figure 4:

Screenshot 8.png


Figure 5:

Screenshot 9.png

Note that these figures have translated text in them. The original questions and responses are in Dutch and have been translated into English for the purposes of this report.

Planning

{| border=1 style="border-collapse: collapse;"
! Week !! Task 1 !! Task 2 !! Task 3 !! Task 4 !! Objectives (end of the week)
|-
| Week 1 || Choose subject || Make a planning || Collect information || Update the wiki-page || Subject chosen
|-
| Week 2 || Define research question || Literature research || Concrete planning || Update the wiki-page || Research question specified
|-
| Week 3 || Literature review || Define subtopics || Literature study || Update the wiki-page || Subtopics defined
|-
| Week 4 || Make survey || Plan meetings in smaller groups || Write hypothesis || Update the wiki-page || Survey started
|-
| Week 5 || Send out survey || Contact professors || Switch subtopics || Update the wiki-page || Contact made
|-
| Week 6 || Analysing survey || Make final report || Write conclusion/discussion survey || Update the wiki-page || Survey finished
|-
| Week 7 || Finish final report || Start making the presentation/powerpoint || Update the wiki-page ||  || Presentation finished
|-
| Week 8 || Peer review || Last preparations for presentation || Finish final report || Finalize the wiki-page || Presentation and final report finished
|}

Planning per week

Week 1

{| border=1 style="border-collapse: collapse;"
! Name !! Total [h] !! Break-down
|-
| Laura Smulders || 8.5 || Meetings [3h], Starting lecture [1h], Research [1h], 5 relevant references [2h], Start/discuss problem statement & objectives [1.5h]
|-
| Sam Blauwhof || 8.5 || Meetings [3h], Starting lecture [1h], Research [1h], 5 relevant references [2h], Start/discuss Approach, Milestones and deliverables [1.5h]
|-
| Joris van Aalst || 9 || Meetings [3h], Starting lecture [1h], Research [2h], 5 relevant references [2h], Start/discuss User part [1h]
|-
| Roel van Gool || 8 || Meetings [3h], Starting lecture [1h], Research [1.5h], 5 relevant references [2h], Check references [0.5h]
|-
| Roxane Wijnen || 8 || Meetings [3h], Starting lecture [1h], Research [1h], 5 relevant references [2h], Start/discuss user requirements [1h]
|}

Week 2

{| border=1 style="border-collapse: collapse;"
! Name !! Total [h] !! Break-down
|-
| Laura Smulders || 7 || Meetings [3h], Summarize 5 relevant articles [4h]
|-
| Sam Blauwhof || 7.5 || Meetings [3h], Summarize 5 relevant articles [4.5h]
|-
| Joris van Aalst || 8 || Meetings [3h], Summarize 5 relevant articles [5h]
|-
| Roel van Gool || 8 || Meetings [3h], Summarize 5 relevant articles [5h]
|-
| Roxane Wijnen || 7.5 || Meetings [3h], Summarize 5 relevant articles [4.5h]
|}

Week 3

{| border=1 style="border-collapse: collapse;"
! Name !! Total [h] !! Break-down
|-
| Laura Smulders || 7 || Meetings [3h], Problem statement [3h], Update Wiki [1h]
|-
| Sam Blauwhof || 7.5 || Meetings [3h], Safety - traffic behaviour [4.5h]
|-
| Joris van Aalst || 7.5 || Meetings [3h], Perspective of private end-user [4.5h]
|-
| Roel van Gool || 8 || Meetings [3h], Ethical theories [5h]
|-
| Roxane Wijnen || 7 || Meetings [3h], Responsibility [4h]
|}

Week 4

{| border=1 style="border-collapse: collapse;"
! Name !! Total [h] !! Break-down
|-
| Laura Smulders || 12.5 || General meetings [2h], Meeting with Sam & Roel [2.5h], Update Wiki [1h], Hypothesis [2h], Planning [1.5h], Literature study [3.5h]
|-
| Sam Blauwhof || 11 || General meetings [2h], Meeting with Laura & Roel [2.5h], Survey with Joris [2.5h], Literature study [4h]
|-
| Joris van Aalst || 10.5 || General meetings [2h], Meeting with Roxane [2h], Survey with Sam [2.5h], Literature study [4h]
|-
| Roel van Gool || 10.5 || General meetings [2h], Meeting with Laura & Sam [2.5h], Research platforms survey [0.5h], Literature study [5.5h]
|-
| Roxane Wijnen || 8 || General meetings [2h], Meeting with Joris [2h], Literature study [4h]
|}

Week 5

{| border=1 style="border-collapse: collapse;"
! Name !! Total [h] !! Break-down
|-
| Laura Smulders || 11 || General meetings [2h], Meeting with Sam & Roel [2h], Survey with Roxane & Roel [2.5h], Define relevant factors & Literature study [2h], Update Wiki & Planning [0.5h], Finish survey feedback Raymond Cuijpers [2h]
|-
| Sam Blauwhof || 8 || General meetings [2h], Meeting with Laura & Roel [2h], Literature study [4h]
|-
| Joris van Aalst || 7 || General meetings [2h], Meeting with Roxane [1.5h], Literature study [3.5h]
|-
| Roel van Gool || 12 || General meetings [2h], Meeting with Laura & Sam [2h], Survey with Roxane & Laura [2.5h], Contact with Raymond Cuijpers [0.5h], Literature study [3h], Finish survey feedback Raymond Cuijpers [2h]
|-
| Roxane Wijnen || 9 || General meetings [2h], Meeting with Joris [1.5h], Survey with Laura & Roel [2.5h], Literature study [3h]
|}

Week 6

{| border=1 style="border-collapse: collapse;"
! Name !! Total [h] !! Break-down
|-
| Laura Smulders || 9 || General meetings [2h], Review Responsibility [2h], Meeting with Roxane [1h], Methods survey [3h], Update Wiki [0.5h], Update planning [0.5h]
|-
| Sam Blauwhof || 9.5 || General meetings [2h], Review Ethical theories [2.5h], Meeting with Roel [1.5h], Meeting with Joris [1h], Introduction survey [2.5h]
|-
| Joris van Aalst || 11 || General meetings [2h], Review Safety [2h], Meeting with Sam [1h], Research statistics [1.5h], Results survey [4.5h]
|-
| Roel van Gool || 12.5 || General meetings [2h], Privacy [5h], Meeting with Sam [1h], Results survey [4.5h]
|-
| Roxane Wijnen || 11.5 || General meetings [2h], Review Perspective of private end-user [6h], Meeting with Laura [1h], Meeting with Joris [1h], Research statistics [1.5h]
|}

Week 7

{| border=1 style="border-collapse: collapse;"
! Name !! Total [h] !! Break-down
|-
| Laura Smulders || 7.5 || General meetings [3h], Slides presentation [3h], Meeting with Lambèr Royakkers [1h], Update planning [0.5h]
|-
| Sam Blauwhof || 8 || General meetings [3h], Presentation preparation [3h], Meeting with Lambèr Royakkers [1h], Introduction survey [1h]
|-
| Joris van Aalst || 8 || General meetings [3h], Discussion [4h], Meeting with Lambèr Royakkers [1h]
|-
| Roel van Gool || 8 || General meetings [3h], Results [5h]
|-
| Roxane Wijnen || 8 || General meetings [3h], Presentation preparation [3h], Review End-user perspective [2h]
|}

Week 8

{| border=1 style="border-collapse: collapse;"
! Name !! Total [h] !! Break-down
|-
| Laura Smulders || 5.5 || General meetings [2h], Discussion [3h], Updating planning [0.5h]
|-
| Sam Blauwhof || 5 || General meetings [2h], Discussion [3h]
|-
| Joris van Aalst || 5 || General meetings [2h], Discussion [3h]
|-
| Roel van Gool || 6 || General meetings [2h], Discussion [3h], Ethics [1h]
|-
| Roxane Wijnen || 6 || General meetings [2h], Last check of spelling and references [3h], Discussion future work [1h]
|}