Over at Vox, Timothy B. Lee makes an argument about self-driving cars:
We don’t have enough data yet to say how today’s self-driving cars compare with today’s human drivers. What does seem likely, however, is that self-driving software will steadily get better over time, while human drivers won’t. And the sooner we reach the point where computers are safer than human drivers, the sooner we can start saving lives.

He talks about this partly in the context of Uber, which has a culture that is very tolerant of risk-taking. However, I think the idealism in this viewpoint may be too strong.
If we were talking about computer-driven trains, where one of the possible outcomes is "stop the train and wait for a human," the argument would be easier to accept. But I think there are several issues that should be considered when arguing that self-driving cars are necessarily safer.
1) I would expect a period of transition during which human-driven and self-driving cars share the road. The self-driving cars need to be able to handle the human drivers, a situation where tricks like inter-vehicle communication are not going to work.
2) Infrastructure is quite varied and tends to break. How do self-driving cars handle a broken streetlight system, roads without markings, or badly parked cars? Working 99% of the time is not enough if occasionally there is a disruption. Right now most adults are very proficient at driving -- what happens if that skill stops being common among passengers?
3) Is Silicon Valley reliable enough? Remember, cars running on software are vulnerable to issues like bad software updates. It seems like special pleading to claim that a bad patch should be treated as an exception. People aren't supposed to drive drunk, but they do, and the resulting mayhem is counted against human driving (and rightly so).
4) How things work at scale is a whole different issue. To make a lot of this stuff work, we need strong regulation. Look at Bitcoin's problems -- fine for a secondary currency, but imagine if the hard-fork issue hit the US dollar. What do you do if there are multiple protocols for self-driving cars that don't necessarily play nice with each other?
5) What happens when the car doesn't work? Your iPhone can become a brick. Do we really want to sit on hold waiting to find out why the car's software isn't working?
None of this is to say that I don't like self-driving cars. Insofar as they can make something like public transit possible for older and sick members of society, they will be a positive good. It's not impossible to imagine ways this technology can really help matters (automating parallel parking plays to all of the computer's strengths, and many drivers dislike it already). But regulation seems to be the way to go, and it really would make me feel better if the people who spent decades making cars safe (traditional car companies) were more involved.
I would be surprised if Mark didn't have thoughts too.