Photo by Denys Nevozhai on Unsplash
In 2018, an Uber self-driving car under test in Tempe, Arizona was involved in a crash that, unfortunately, killed a pedestrian. Last week, the National Transportation Safety Board concluded that Uber’s self-driving software was at fault (apart from various valid non-technical issues): the autonomous software was not programmed to react to pedestrians crossing the street outside of designated crosswalks. This flaw (which Uber appears to have fixed since) raises a question about situations in which software, when not programmed correctly, can lead to severe crashes.
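To make the nature of such a flaw concrete, here is a deliberately simplified, hypothetical sketch in Python. The class and function names are my own illustration, not Uber’s actual code (which is vastly more complex); it only shows how a rule that yields exclusively to pedestrians inside crosswalks can miss a jaywalker entirely.

```python
from dataclasses import dataclass


@dataclass
class DetectedObject:
    kind: str           # e.g. "pedestrian", "vehicle", "bicycle"
    in_crosswalk: bool  # detection lies inside a mapped crosswalk
    in_path: bool       # object intersects the car's planned path


def should_brake_flawed(obj: DetectedObject) -> bool:
    # Flawed rule: only yields to pedestrians inside designated crosswalks,
    # so a pedestrian crossing elsewhere never triggers braking.
    return obj.kind == "pedestrian" and obj.in_crosswalk and obj.in_path


def should_brake_fixed(obj: DetectedObject) -> bool:
    # Corrected rule: any pedestrian in the planned path triggers braking,
    # regardless of where they are crossing.
    return obj.kind == "pedestrian" and obj.in_path


jaywalker = DetectedObject(kind="pedestrian", in_crosswalk=False, in_path=True)
print(should_brake_flawed(jaywalker))  # False: the car keeps going
print(should_brake_fixed(jaywalker))   # True: the car brakes
```

The point of the toy example is not the code itself but the design choice it exposes: a single over-specific condition in a hand-written rule can silently exclude an entire class of real-world situations.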
This reminded me of Moral Machine, a project at the Massachusetts Institute of Technology that presents extreme scenarios (similar to the trolley problem) to understand how humans perceive them. The data collected shows that every individual has a different perspective on the same extreme situations.
This is interesting, because self-driving cars are designed and programmed to do what humans have been doing for over a century: drive the car. If people have different perspectives on a hypothetical crash situation, how will an autonomous car react to such situations? How does the software account for this?
The programmers writing code for autonomous cars are surely smart enough to take all of this into account, but Uber’s technical flaw shows that the Moral Machine’s concepts cannot be overlooked. There will be scenarios in which the software simply follows its specific rules, and that alone may not be safe.
The Moral Machine concept is something to think about, as the industry is still far from providing technological solutions that let self-driving car hardware and software do what human brains can.