...

The Moral & Ethical Considerations Of Self-Driving Vehicles

A few days ago, my esteemed colleague Maarten Vinkhuyzen posted an excellent article on self-driving systems that explores the difference between being 10 times safer and 10 times better than a human driver. His conclusion is that 10 times better is the preferred metric, since 10 times safer would still result in about 4,000 road deaths a year in the US. That article, of course, was prompted by Zachary Shahan’s recent conversation with Elon Musk regarding the rapid advances coming soon to a Tesla near you.

The field of autonomous vehicles is moving forward quickly. Dozens of companies are testing self-driving cars in California. Waymo has ordered a fleet of self-driving Chrysler Pacifica plug-in hybrid minivans and Jaguar I-Pace SUVs. Autonomous shuttles tethered to certain city streets are operating in several US cities, with a new 1-mile-long route planned to open soon in Virginia.

No less a personage than Elon Musk has compared self-driving cars to horizontal elevators. He suggests that one day in the not-too-distant future, people will give the idea of stepping into a self-driving car less thought than they give to getting into an elevator. There is, however, a massive difference between the two. An elevator has a rigidly defined path it must follow. It cannot jump over to an adjacent elevator shaft, nor must it decide what to do if a pedestrian, bicyclist, or another elevator suddenly appears in its path.

Maarten suggests that society will tolerate a certain level of deaths and injuries attributable to autonomous cars, but I think he may be a bit optimistic in that regard. I would suggest that people will expect self-driving cars to be damn near perfect. I would also suggest there is a whole coterie of trial lawyers salivating over the prospect of asking a jury of humans to award damages for a death or dismemberment attributed to computer error. I would expect the tolerance for injuries caused by autonomous cars to be about the same as it would be for an elevator that suddenly plunges from the penthouse to the parking garage without warning.

Want proof? The new shuttle in Fairfax County, Virginia, is manufactured by EasyMile, a company based in France that has 16 similar shuttles operating in various US cities. In February, NHTSA ordered all 16 to cease operations after a passenger aboard an EasyMile shuttle in Columbus, Ohio, fell during an emergency stop, according to the Washington Post. Of course, the shuttle is supposed to stop if an obstruction suddenly appears in its path, but the episode suggests the tolerance for any kind of injury while riding in an autonomous vehicle is near zero. The shuttles now carry new signage warning passengers that sudden stops are possible and urging them to buckle their (newly installed) seat belts.

Rachel Flynn is a deputy executive for Fairfax County. She tells the Post the new shuttle is “exciting, innovative, new,” but adds it could take some time for skeptics to adjust to the new technology. “When cars first came out, I’m sure people had fears about riding on a carriage that involved a combustion engine, right? And they got used to it, and they said, ‘Oh, this is really convenient. This works.’ Same thing with an autonomous vehicle. It’ll become the norm, and then they’ll forget that we ever had people operating cars,” Flynn says. That may be true a generation or two from now, but it is a long way from certain that people will have that attitude today or any time in the near future.

Ethical & Moral Considerations

Tesla and others are focused on designing the systems that will guide self-driving cars. But there is a moral and ethical component that must be incorporated into the instructions programmers bake into the software, and those considerations can vary significantly by country and by region. A common hypothetical involves choosing between one action that will kill 5 people and another that will kill only 2. What instructions will the autonomous driving computers have pre-loaded?
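To make that abstract question concrete, here is a deliberately oversimplified sketch of what such a pre-loaded instruction could look like in code. Nothing here reflects how Tesla or any other manufacturer actually implements its driving policy; the data structure and the casualty-minimizing rule are assumptions for illustration only.

```python
# A toy sketch of a "pre-loaded instruction": among the unavoidable
# maneuvers the planner has identified, pick the one whose predicted
# outcome minimizes expected deaths. Purely illustrative -- no real
# autonomous driving stack is known to work this way.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_casualties: float  # planner's predicted deaths for this option

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Return the maneuver with the fewest predicted casualties."""
    return min(options, key=lambda m: m.expected_casualties)

options = [
    Maneuver("stay in lane", expected_casualties=5.0),
    Maneuver("swerve right", expected_casualties=2.0),
]
print(choose_maneuver(options).name)  # prints "swerve right"
```

Even this trivial version forces a value judgment: by ranking options on a single casualty count, it has already decided that five lives outweigh two, which is exactly the kind of choice the Moral Machine was built to probe.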

A few years ago, researchers at MIT’s Media Lab created an online experience they call the Moral Machine that allows people to answer such ethically loaded questions online. In the first two years, two million people participated in the research, recording more than 40 million decisions. In a detailed discussion of the ethical and moral concerns that pertain to autonomous driving systems, The New Yorker reports there were strong differences in people’s responses that correlate with the country they live in and their religious training. Generally speaking, “Most players sacrificed individuals to save larger groups. Most players spared women over men. Dog-lovers will be happy to learn that dogs were more likely to be spared than cats. Human-lovers will be disturbed to learn that dogs were more likely to be spared than criminals,” The New Yorker says.

Germany is the only country that has devised a legal framework for the ethical considerations it thinks should be incorporated into self-driving systems. In 2017, a German government commission headed by Udo Di Fabio, a former judge on the country’s highest constitutional court, released a report that suggested a number of guidelines for driverless vehicles. Among the report’s twenty propositions, one stands out: “In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited.”
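What that proposition implies for a programmer is easiest to see in code. The sketch below is a hypothetical rendering of the German guideline, not anything taken from the commission’s report: the harm estimate may count how many people are at risk, but it must never consult personal attributes such as age or gender.

```python
# Hypothetical rendering of the German commission's rule as a coding
# constraint: personal attributes may exist in the perception data,
# but the harm score must never read them.
from dataclasses import dataclass, field

@dataclass
class Person:
    age: int      # present in the tracking data...
    gender: str   # ...but deliberately never consulted below

@dataclass
class Outcome:
    people_at_risk: list[Person] = field(default_factory=list)

def harm_score(outcome: Outcome) -> int:
    # Only the head count matters. Weighting by person.age or
    # person.gender here would violate the guideline outright.
    return len(outcome.people_at_risk)
```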

Asked to respond to the differences observed in the responses to the Moral Machine, Di Fabio said that philosophers and lawyers often have very different understandings of ethical dilemmas than ordinary people do. This difference may irritate the specialists, he said, but “it should always make them think.” Still, Di Fabio believes that we shouldn’t capitulate to human biases when it comes to life and death decisions. “In Germany, people are very sensitive to such discussions. This has to do with a dark past that has divided people up and sorted them out,” he says.

The New Yorker suggests that decisions made by Germany will reverberate beyond its borders. Volkswagen is one of the largest car companies in the world, but its manufacturing prowess imposes a complicated moral responsibility on it. “What should a company do if another country wants its vehicles to reflect different moral calculations?” the magazine asks. Azim Shariff, one of the designers of the Moral Machine, leans toward adjusting each model for the country where it’s meant to operate. Car manufacturers, he thinks, “should be sensitive to the cultural differences in the places they’re instituting these ethical decisions.” Otherwise, the algorithms they export might start looking like a form of moral colonialism.

But Di Fabio worries about letting autocratic governments tinker with the code. He imagines a future in which China wants the cars to favor people who rank higher in its new social-credit system, which scores citizens based on their civic behavior. Given China’s recent imposition of control over Hong Kong, such suppositions do not seem far-fetched.

Any number of possibilities suggest themselves. Could companies program their cars to favor their customers when faced with perilous choices? If the owner of an autonomous car crosses the border with a neighboring country, should he or she expect the car to make different choices in Albania than it would in Switzerland? It seems there is a lot more to designing autonomous systems than simply telling a self-driving car how to avoid potholes, pedestrians, and bicyclists.
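If manufacturers did localize their ethics, as Shariff suggests, the mechanics might look something like the sketch below: a lookup keyed on the jurisdiction the car’s geolocation reports, re-checked at border crossings. The country codes and policy table are invented for illustration, and since Germany is the only country with published guidelines, every entry here falls back to the same count-only rule.

```python
# Hypothetical "moral localization": select the active ethics policy
# from the jurisdiction the GPS reports, re-checking at each border.
from typing import Callable

# A policy maps per-option casualty estimates to the chosen option index.
Policy = Callable[[list[float]], int]

def count_only_rule(estimates: list[float]) -> int:
    """Pick the option with the fewest predicted casualties."""
    return min(range(len(estimates)), key=lambda i: estimates[i])

# Invented table: in reality only Germany has published guidance,
# so every jurisdiction here shares the same fallback rule.
POLICIES: dict[str, Policy] = {
    "DE": count_only_rule,
    "CH": count_only_rule,
    "AL": count_only_rule,
}

def active_policy(country_code: str) -> Policy:
    return POLICIES.get(country_code, count_only_rule)
```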

Original Publication by Steve Hanley at CleanTechnica.

Want to buy a Tesla Model 3, Model Y, Model S, or Model X? Feel free to use my referral code to get some free Supercharging miles with your purchase: http://ts.la/guanyu3423

You can also get a $100 discount on Tesla Solar with that code. No pressure.
