The Moral Machine walks users through simply depicted autonomous driving scenarios that pose difficult moral dilemmas in which the death of one or more individuals is certain. In each scenario, the visitor chooses whose lives should be prioritized; the options include various types of passengers, pedestrians and animals.
Here’s an example scenario: Two children run onto the street in front of a self-driving car carrying one occupant. The car’s only options are to swerve into a brick wall, killing its occupant, or to continue straight ahead, killing both children.
A series of surveys published in Science revealed that, in general, participants believed a self-driving car’s behavior should prioritize the greater good: For example, if one choice results in a single death and the other results in two, the first choice should be made. However, when the options involved sacrificing themselves or their family members, people typically said vehicles should prioritize the lives of their passengers.
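The tension between these two preferences can be illustrated with a small sketch. The code below is purely illustrative and not part of the Moral Machine itself; the Outcome record and the two decision functions are hypothetical names chosen for this example, assuming each outcome can be summarized by its expected passenger and pedestrian deaths.

```python
from dataclasses import dataclass


# Hypothetical summary of one possible outcome in a dilemma.
@dataclass
class Outcome:
    description: str
    passenger_deaths: int
    pedestrian_deaths: int

    @property
    def total_deaths(self) -> int:
        return self.passenger_deaths + self.pedestrian_deaths


def choose_greater_good(options: list[Outcome]) -> Outcome:
    """Pick the outcome with the fewest total deaths (the 'greater good' rule)."""
    return min(options, key=lambda o: o.total_deaths)


def choose_passenger_first(options: list[Outcome]) -> Outcome:
    """Pick the outcome with the fewest passenger deaths, breaking ties by
    total deaths (the self-preserving preference many respondents expressed)."""
    return min(options, key=lambda o: (o.passenger_deaths, o.total_deaths))


if __name__ == "__main__":
    # The example scenario above: one occupant vs. two children.
    swerve = Outcome("swerve into wall", passenger_deaths=1, pedestrian_deaths=0)
    straight = Outcome("continue straight", passenger_deaths=0, pedestrian_deaths=2)

    print(choose_greater_good([swerve, straight]).description)     # swerve into wall
    print(choose_passenger_first([swerve, straight]).description)  # continue straight
```

Run on the example scenario, the two rules disagree: the greater-good rule sacrifices the occupant, while the passenger-first rule sacrifices the pedestrians, which is exactly the conflict the surveys surfaced.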
The MIT initiative is available on the MIT Media Lab site to crowdsource opinions from as broad a population as possible. Visitors can judge scenarios, compare their decisions with those of others and design new scenarios of their own.
The Moral Machine is an example of current research into roboethics.