Consider the following from Lee:
It seems inevitable that lax regulation of self-driving cars will lead to some preventable deaths. Still, there’s a good argument that today’s permissive regulatory environment is the best approach.
The reason: While self-driving cars are potentially dangerous, human drivers are definitely dangerous.
“It's so easy to immediately focus on self-driving cars as the new and the scary and forget that every day 100 people die on the road,” Smith said. He says that about 90 percent of those fatalities are caused by human error — errors that self-driving cars could avoid some day.
The trouble with this line of reasoning is that autonomy is not a yes/no proposition; it's a scalar. Here is the standard metric:
A classification system based on six different levels (ranging from none to fully automated systems) was published in 2014 by SAE International, an automotive standardization body, as J3016, Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems.[22][23] This classification system is based on the amount of driver intervention and attentiveness required, rather than the vehicle capabilities, although these are very closely related. In the United States in 2013, the National Highway Traffic Safety Administration (NHTSA) released a formal classification system,[24] but abandoned this system when it adopted the SAE standard in September 2016.
SAE automated vehicle classifications:
Level 0: Automated system has no vehicle control, but may issue warnings.
Level 1: Driver must be ready to take control at any time. Automated system may include features such as Adaptive Cruise Control (ACC), Parking Assistance with automated steering, and Lane Keeping Assistance (LKA) Type II in any combination.
Level 2: The driver is obliged to detect objects and events and respond if the automated system fails to respond properly. The automated system executes accelerating, braking, and steering. The automated system can deactivate immediately upon takeover by the driver.
Level 3: Within known, limited environments (such as freeways), the driver can safely turn their attention away from driving tasks, but must still be prepared to take control when needed.
Level 4: The automated system can control the vehicle in all but a few environments such as severe weather. The driver must enable the automated system only when it is safe to do so. When enabled, driver attention is not required.
Level 5: Other than setting the destination and starting the system, no human intervention is required. The automatic system can drive to any location where it is legal to drive and make its own decisions.
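The distinction the taxonomy draws — driver attention required versus not — can be made concrete with a small sketch. This is not any official SAE artifact, just a hypothetical Python model of the levels above, splitting the driver's obligations into two questions: must the driver continuously monitor (Levels 0–2), and must the driver be ready to take over when asked (Levels 0–3)?

```python
from enum import IntEnum


class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels (0 = none, 5 = full).

    Names here are informal labels for illustration, not the
    standard's official terminology.
    """
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5


def must_monitor(level: SAELevel) -> bool:
    """Through Level 2, the driver must continuously supervise the system."""
    return level <= SAELevel.PARTIAL_AUTOMATION


def must_be_fallback_ready(level: SAELevel) -> bool:
    """Through Level 3, the driver must be prepared to take control
    when the system requests it, even if continuous attention lapses."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION
```

The point of the split is exactly the one in the list above: Level 3 is the odd case where attention is optional but availability is not, which is why the safety and regulatory questions diverge as you climb the scale.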
Lee is jumping from one end of the scale to the other mid-argument. The improvements in safety start at level 1 and, if anything, tend to flatten out as you approach level 4. If your car takes control of the wheel when you start to drift out of a lane and applies the brakes when you are about to hit something or someone, then you have already achieved most of your gains in this area.
With regulation, the situation is just the opposite. It isn't until around Level 4 that the serious legal concerns kick in, and not until readily available driverless (as opposed to merely self-driving) vehicles arrive that the issues become truly daunting. Strictly from a technological standpoint, we still have quite a ways to go.
For the record, there are still plenty of compelling reasons to fully develop this functionality (for example, Uber's widely hyped plan to use autonomous vehicles to cut labor costs makes no sense if the cars still need licensed drivers behind the wheel), but Lee's safety argument simply doesn't hold water.