Thursday, August 26, 2021

Self-driving cars and reverse centaurs -- are we approaching autonomy backwards?

Jalopnik's Jason Torchinsky uses the embarrassing Tesla Bot debut as a stepping-off point for a clever discussion of technology and autonomy.

Tesla’s big AI Day event just happened, and I’ve already told you about the humanoid robot Elon Musk says Tesla will be developing. You’d think that would have been the most eye-roll-inducing thing to come out of the event, but, surprisingly, that’s not the case. The part of the presentation that actually baffled me the most was near the beginning, a straightforward demonstration of Tesla “Full Self-Driving.” I’ll explain.

...

What’s being solved, here? The demonstration of FSD shown in the video is doing absolutely nothing the human driver couldn’t do, and doesn’t free the human to do anything else. Nothing’s being gained!

It would be like if Tesla designed a humanoid dishwashing robot that worked fundamentally differently than the dishwashing robots many of us have tucked under our kitchen counters.

The Tesla Dishwasher would stand over the sink, like a human, washing dishes with human-like hands, but for safety reasons you would have to stand behind it, your hands lightly holding the robot’s hands, like a pair of young lovers in their first apartment.

Normally, the robot does the job just fine, but there’s a chance it could get confused and fling a dish at a wall or person, so for safety you need to be watching it, and have your hands on the robot’s at all times.

If you don’t, it beeps a warning, and then stops, mid-wash.

Would you want a dishwasher like that? You’re not really washing the dishes yourself, sure, but you’re also not not washing them, either. That’s what FSD is.
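
To make the shape of that arrangement concrete, here's a minimal sketch in Python of the supervision loop the dishwasher analogy (and FSD) describes. Every name in it (Car, Driver, compute_controls and so on) is a made-up stand-in, not Tesla's code or any real driver-assistance API: the automation does the work, and the human's only job is to keep proving they're supervising it.

    import time

    # Entirely hypothetical interfaces -- nothing here is anyone's shipping code.
    class Car:
        def is_moving(self): ...
        def compute_controls(self): ...       # the impressive R&D lives here
        def apply(self, controls): ...
        def warn(self, message): ...
        def disengage(self): ...

    class Driver:
        def hands_on_wheel(self): ...
        def eyes_on_road(self): ...

    def l2_loop(car: Car, driver: Driver, grace_seconds: float = 10.0):
        """Today's model: the machine washes the dishes, the human hovers behind it."""
        warned_at = None
        while car.is_moving():
            car.apply(car.compute_controls())     # the automation does the driving
            if driver.hands_on_wheel() and driver.eyes_on_road():
                warned_at = None                  # the human is supervising; carry on
            elif warned_at is None:
                car.warn("Keep your hands on the wheel")
                warned_at = time.monotonic()
            elif time.monotonic() - warned_at > grace_seconds:
                car.disengage()                   # give up, mid-wash
                break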

...

Now, if you want to argue that Tesla and other L2 systems offer a safety advantage (I’m not convinced they necessarily do, but whatever), then I think there’s a way to leverage all of this impressive R&D and keep the safety benefits of these L2 systems. How? By doing it the opposite way we do it now.

What I mean is that there should be a role-reversal: if safety is the goal, then the human should be the one driving, with the AI watching, always alert, and ready to take over in an emergency.

In this inverse-L2 model, the car is still doing all the complex AI things it would be doing in a system like FSD, but it will only take over in situations where it sees that the human driver is not responding to a potential problem.

This guardian angel-type approach provides all of the safety advantages a good L2 system could offer, and, because it’s a computer, it will always be attentive and ready to take over if needed.

Driver monitoring systems won’t be necessary, because the car won’t drive unless the human is actually driving. And if the driver gets distracted or doesn’t see a person or car, the AI steps in to help.
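
Sketched the same way (again with made-up interfaces, not any real driver-assistance API), the role reversal amounts to a small change in the loop: the human's inputs go to the wheels by default, and the automation only acts when it predicts a hazard the human isn't reacting to.

    # The same kind of hypothetical interfaces, with the supervision direction reversed.
    class Car:
        def is_moving(self): ...
        def predict_hazard(self): ...                  # the same perception stack FSD already has
        def compute_evasive_controls(self, hazard): ...
        def apply(self, controls): ...
        def warn(self, message): ...

    class Driver:
        def current_inputs(self): ...                  # steering, brake, throttle from the human
        def is_reacting_to(self, hazard): ...

    def guardian_angel_loop(car: Car, driver: Driver):
        """Inverse-L2: the human drives, the machine watches and only intervenes."""
        while car.is_moving():
            hazard = car.predict_hazard()
            if hazard and not driver.is_reacting_to(hazard):
                car.warn("Taking over: hazard ahead")
                car.apply(car.compute_evasive_controls(hazard))   # the AI steps in
            else:
                car.apply(driver.current_inputs())                # the human stays in charge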

1 comment:

  1. Jason Torchinsky! He did this, for which I will always be grateful.
