A few thoughts on the recent self-driving car crash.
- To actually do what pundits want (driverless taxis, shared cars), the cars need to be able to operate without safety drivers. If they still require a safety driver, that is a bad sign.
- The failure here sure seems pretty central. Detecting an obstacle in the road ahead should be exactly the case where the car's sensors give it an advantage over a human driver.
- It is a non sequitur to say that the car was following the rules of the road. Complex urban areas are full of behavior that is technically illegal. Ramming rule-breakers at full speed will make traffic much worse and less safe, not better.
- There is a hint of catastrophic failure here and in the Tesla crash. If the severity of incidents is higher, the incident rate needs to be lower than for human-piloted cars, not merely equal to it.
- Automatic software updates are going to be exciting, because a bad patch pushed to a whole fleet at once is not going to be pretty.
Mike the Mad Biologist did an estimate of the accident rate. Using his figures, the fatal crash rate per billion passenger miles (bpm) is:
Cars: 7.28 fatal crashes per bpm
Now it is true that there is only one crash so far. But at that rate, a fatal crash is expected roughly once every 140 million passenger miles. If crashes are roughly uniformly distributed over driving time, it is worrisome to see the fatal crash happen within the first 5% or so of that distance. It surely could have happened here by chance, but it isn't a reassuring piece of data.
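To make that concrete, here is a minimal back-of-the-envelope sketch (my own, not Mike's) that models fatal crashes as a Poisson process at the human rate of 7.28 per bpm. The ~7 million autonomous passenger miles is just my reading of the "first 5%" figure above, not a reported number.

```python
import math

# Human-driven fatal crash rate (from the figures above).
human_rate_per_mile = 7.28 / 1e9  # fatal crashes per passenger mile

# Expected passenger miles between fatal crashes at the human rate.
miles_between_crashes = 1 / human_rate_per_mile  # roughly 137 million

# Assumption: the autonomous fleet has driven about 5% of that distance,
# which is my reading of the "first 5%" figure above (~7 million miles).
autonomous_miles = 0.05 * miles_between_crashes

# Poisson model: probability of at least one fatal crash in that distance
# if self-driving cars were exactly as safe as human-driven ones.
expected_crashes = human_rate_per_mile * autonomous_miles
p_at_least_one = 1 - math.exp(-expected_crashes)

print(f"Expected miles between fatal crashes: {miles_between_crashes / 1e6:.0f} million")
print(f"P(>=1 fatal crash this early, at human safety): {p_at_least_one:.1%}")
```

Under that assumption, a fleet exactly as safe as human drivers would have had only about a 5% chance of producing a fatality this early, which is why a single crash still isn't a reassuring data point.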
This is doubly true because we'd want self-driving cars to be at least as safe as buses if we are going to replace public transit with a network of cars.
None of this is to say that making cars smarter is a bad thing. But it does point out the challenges facing some of the more extreme applications, like self-driving taxis. It isn't clear to me that focusing on improved public transit isn't a viable alternative.