Tuesday, October 27, 2020

The good news about Tesla's Full Self-Driving technology is that it's not advanced enough to lull you into a false sense of security

Interesting companion piece to yesterday's post on Tesla's disturbing beta rollout of FSD. Tesla promises that the driver has to do almost nothing. The claim turns out to be false, but perhaps that's a feature, not a bug: it turns out that doing almost nothing is really hard.

John Pavlus, writing for Scientific American:

People often use the phrase “in the loop” to describe how connected someone is (or is not) to a decision-making process. Fewer people know that this “control loop” has a specific name: Observe, Orient, Decide, Act (OODA). The framework was originally devised by a U.S. Air Force colonel, and being “in” and “out” of the OODA loop have straightforward meanings. But as automation becomes more prevalent in everyday life, an understanding of how humans behave in an in-between state—known as “on the loop”—will become more important.
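To make the idea concrete, here is the OODA cycle written out as a plain control loop, a minimal sketch in Python. The four step names come from the framework itself; the sensor, context, and actuator objects are hypothetical stand-ins for illustration.

    # A minimal sketch of the OODA cycle as a control loop. The step names
    # (observe, orient, decide, act) come from the framework; the objects
    # they operate on are hypothetical scaffolding for illustration.

    def observe(sensors):
        """Gather raw data about the environment."""
        return sensors.read()

    def orient(observation, context):
        """Interpret the observation against prior knowledge."""
        return context.interpret(observation)

    def decide(situation):
        """Choose an action given the current interpretation."""
        return situation.best_action()

    def act(action, actuators):
        """Execute the chosen action, changing the environment."""
        actuators.apply(action)

    def ooda_loop(sensors, context, actuators):
        # "In the loop" means a human performs these steps directly;
        # "out of the loop" means the automation cycles on its own.
        while True:
            observation = observe(sensors)
            situation = orient(observation, context)
            action = decide(situation)
            act(action, actuators)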

Missy Cummings, a former Navy fighter pilot and director of Duke University’s Humans and Autonomy Laboratory, defines “on the loop” as human supervisory control: “intermittent human operator interaction with a remote, automated system in order to manage a controlled process or task environment.” Air traffic controllers, for example, are on the loop of the commercial planes flying in their airspace. And thanks to increasingly sophisticated cockpit automation, most of the pilots are, too.
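Being on the loop changes the shape of that cycle: the automation runs it continuously, and the human only samples the system now and then, stepping in when something looks wrong. A rough sketch of the arrangement; every name below is invented for illustration.

    # "On the loop" supervisory control: the automation cycles continuously
    # while the human checks in only intermittently. All names here are
    # hypothetical, for illustration only.

    import time

    CHECK_INTERVAL_S = 10.0  # how often the supervisor samples the system

    def supervise(automated_system, human):
        last_check = time.monotonic()
        while automated_system.running:
            automated_system.step()  # the automation stays in the loop
            now = time.monotonic()
            if now - last_check >= CHECK_INTERVAL_S:
                last_check = now
                status = automated_system.status()
                if human.judges_unsafe(status):  # intermittent interaction
                    human.take_over(automated_system)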

Tesla compares Autopilot with this kind of on-the-loop aviation, saying it “functions like the systems that airplane pilots use when conditions are clear.” But there’s a problem with that comparison, [NASA research psychologist Stephen] Casner says: “An airplane is eight miles high in the sky.” If anything goes wrong, a pilot usually has multiple minutes—not to mention emergency checklists, precharted hazards and the help of the crew—in which to transition back in the loop of control...

Automobile drivers, for obvious reasons, often have much less time to react. “When something pops up in front of your car, you have one second,” Casner says. “You think of a Top Gun pilot needing to have lightning-fast reflexes? Well, an ordinary driver needs to be even faster.”
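It's worth running the numbers on that one second. A quick back-of-the-envelope calculation; the speeds and the 1.0-second reaction time are my assumptions, not figures from the article:

    # How far a car travels during a fixed reaction time at common speeds.
    # The speeds and the 1.0 s reaction time are illustrative assumptions.

    MPH_TO_MPS = 0.44704   # miles per hour -> meters per second
    REACTION_TIME_S = 1.0  # the "one second" from the quote

    for mph in (30, 55, 70):
        meters = mph * MPH_TO_MPS * REACTION_TIME_S
        print(f"At {mph} mph: {meters:.0f} m ({meters * 3.28084:.0f} ft) "
              f"covered before the driver even reacts.")

At 70 mph, that's roughly 31 meters, several car lengths, gone before the driver's foot ever moves.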

 ...

But NASA has been down this road before, too. In studies of highly automated cockpits, NASA researchers documented a peculiar psychological pattern: The more foolproof the automation’s performance becomes, the harder it is for an on-the-loop supervisor to monitor it. “What we heard from pilots is that they had trouble following along [with the automation],” Casner says. “If you’re sitting there watching the system and it’s doing great, it’s very tiring.” In fact, it’s extremely difficult for humans to accurately monitor a repetitive process for long periods of time. This so-called “vigilance decrement” was first identified and measured in 1948 by psychologist Norman Mackworth, who asked British radar operators to spend two hours watching for errors in the sweep of a rigged analog clock. Mackworth found that the radar operators’ accuracy plummeted after 30 minutes; more recent versions of the experiment have documented similar vigilance decrements after just 15 minutes.
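To get a feel for what a vigilance decrement looks like, here's a toy simulation loosely in the spirit of Mackworth's clock task. The decay curve and every parameter below are invented for illustration; nothing is fitted to Mackworth's actual data.

    # Toy model of a vigilance decrement: detection probability starts high
    # and decays toward a floor over the watch. All parameters are assumed.

    import math
    import random

    P_INITIAL = 0.85      # hit rate at the start of the watch (assumed)
    P_FLOOR = 0.55        # hit rate the observer decays toward (assumed)
    DECAY_PER_MIN = 0.06  # exponential decay rate (assumed)

    def hit_probability(minutes_on_watch):
        """Detection probability after a given time on task."""
        decay = math.exp(-DECAY_PER_MIN * minutes_on_watch)
        return P_FLOOR + (P_INITIAL - P_FLOOR) * decay

    random.seed(0)
    signals = sorted(random.uniform(0, 120) for _ in range(200))  # minutes
    blocks = {b: [0, 0] for b in (0, 30, 60, 90)}  # block -> [hits, total]

    for t in signals:
        block = 30 * int(t // 30)
        blocks[block][1] += 1
        if random.random() < hit_probability(t):
            blocks[block][0] += 1

    for block, (hits, total) in sorted(blocks.items()):
        print(f"Minutes {block:3d}-{block + 30:3d}: {hits}/{total} detected "
              f"({100 * hits / total:.0f}%)")

Even this crude model reproduces the shape of the finding: the first half hour looks fine, and then the hit rate sags toward its floor for the rest of the watch.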

