For the first time, a fatal accident has involved an autonomous car. These vehicles, considered safer than human-driven ones, still require several adjustments before they take to the roads.
In the United States, a traffic accident has caused the death of a man travelling aboard a Tesla equipped with Autopilot, an automated driving system that allows the vehicle to drive itself. It is the first time a fatal accident has involved an autonomous car.
The American authorities have opened an investigation into the circumstances of the accident. According to information gathered by the National Highway Traffic Safety Administration (NHTSA), the accident occurred when "a truck made a left turn in front of the Tesla at an intersection." Neither the man in the car nor the Autopilot system detected the truck, and the Tesla's brakes were never applied.
The production of self-driving cars is booming. These machines, considered safer than human drivers, are expected to reduce road accidents and fatalities over time. But before they take to the road for good, many questions remain to be answered, including the grimmest ones.
Altruism and survival
For example, when forced to brake suddenly, a driver may choose to run over pedestrians to save his own skin, or, on the contrary, to drive his car into a wall to spare the group, even at the cost of his own life. But what if no one is driving, because the car is autonomous? Which option will the robot controlling your vehicle choose: protect you, or kill you?
A team of researchers from the CNRS, MIT and the University of Oregon studied the psychology of drivers and the morality that should be applied to the machine in the event of a road accident. The work, published in the journal Science, was conducted in the United States on 2,000 participants across six surveys. It reveals that, in principle, citizens fully support the idea of sacrificing one life to save many: more than 75% of respondents believe that a self-driving car should choose to kill its passenger for the survival of the greatest number, even if the driver has his children on board.
In principle, then, the respondents prove altruistic. In practice, it is less clear-cut. Asked "Would you buy a vehicle that would choose to sacrifice you to save a group?", the participants showed a more nuanced humanism. Most of them preferred that other drivers acquire such a car; for themselves, they would much rather have a vehicle that protects them.
“The Moral Machine”
To reconcile these conflicting desires, the researchers considered legislation that would require vehicles to adopt a collective logic: the principle of the greatest number of lives saved. The respondents were firmly opposed to this: under such legislation, few would be inclined to buy an autonomous car.
Such regulations could therefore have the opposite of the intended effect, the researchers warn. They risk "costing more lives, by slowing citizens' adoption of autonomous cars, which are safer than current vehicles," the CNRS summarizes in a press release.
To refine the psychological profile of human and robot drivers, the researchers have set up a website called "Moral Machine". Internet users are invited to examine their own preferences: should the autonomous car brake straight ahead into the pedestrians, or swerve to the right and crush three small dogs crossing the road? Sacrifice a man, or a woman? The elderly, or a family?
"The objective for the scientists is to identify and study the situations in which people have the most difficulty making a decision," says the CNRS press release. And, in the long term, to develop a "moral machine" in the image of man. Good luck.