- Former Uber self-driving boss crashes his Tesla while using FSD, then raises questions about automation.
- He argues semi-autonomous tech trains drivers to trust it, then blames them when it fails.
- The real danger may be systems that work almost perfectly until the moment they don’t.
There’s irony, and then there’s crashing a Tesla on Full Self-Driving after spending years running Uber’s self-driving division. That’s exactly what happened to Raffi Krikorian in his Model X, but his story isn’t really about one crash.
Instead, it highlights the awkward phase we’re in with modern automation, where the technology can handle most of the job yet still expects the human to step in instantly when something goes wrong. That may be the reality of today’s driver-assist systems, but cases like this show how uneasy that balance can be.
What Exactly Happened?
According to his essay in The Atlantic, Krikorian was driving his kids through residential streets with Tesla’s Full Self-Driving (Supervised) active, something he’d gradually grown comfortable using outside highways after months of uneventful trips. Then the car made a turn, the steering wheel jerked unexpectedly, and within seconds the Model X slammed into a concrete wall.
More: Tesla On FSD Appears To Smash Through Railroad Gates As Train Approaches
Everyone walked away, but the experience left him shaken, and not only because of the crash itself. What unsettled him was how familiar the pattern felt. He describes what researcher Madeleine Clare Elish calls the “moral crumple zone,” the idea that when complex automated systems fail, the human operator absorbs the blame the same way a crumple zone absorbs crash energy. Even though the system is doing most of the work, the driver remains legally responsible.
My Tesla tried to drive me into a lake today! FSD version 14.2.2.4 (2025.45.9.1) @Tesla @aelluswamy pic.twitter.com/ykWZFjUm8k
— Daniel Milligan (@lilmill2000) February 16, 2026
Driver Accountability Rules
Tesla has won plenty of court cases on this principle, and it’s easy to see why. Tesla, like other automakers, warns drivers time and again that these driver-assist features aren’t perfect and that they must be ready to take over at a moment’s notice. But the essay’s most interesting point isn’t legal. It’s psychological and physiological.
On the psychological front, Krikorian argues, semi-autonomous systems create a dangerous middle ground: they drive well enough that people stop actively driving, but not well enough to eliminate the need for human intervention.
Researchers call this the vigilance decrement. When people monitor a system that almost never fails, their attention drifts. It’s a well-documented problem that tends to get lost amid flashy headlines about any crash involving automation.
Video: Reddit
When attention does drift, the second part of the problem kicks in: physiology. Even an alert, capable driver typically needs several seconds to regain situational awareness, decide what to do, and physically act. The same pattern shows up wherever humans supervise automation, from airline cockpits to AI chatbots.
The technology builds trust by working most of the time, then relies on a human to save the situation when something unexpected happens. And when that rescue fails, the human is usually the one held responsible.
Here’s the roughest part of this whole situation: this middle stage may be unavoidable. Technology has to be used in the real world to improve, and that means living with systems that can do most of the job but still need a human ready to take over instantly.
The problem is that the better these systems get, the easier it is to forget you’re still the one in charge. That is, right up until the moment the crash report reminds you.

