The development of autonomous vehicles has taken much longer than many in the industry predicted. One of the major factors slowing progress is that the artificial intelligence currently used in vehicles can’t connect a cause with its effect.

Although AI is impressive, it is not intelligent in the way a human is. The technology is not currently allowed to reason or infer, at least not in most vehicles. That means robotaxis are incapable of problem-solving when confronted with a new situation.

As a result, the chaos of the real world can be a big problem for AVs, reports Autonews. The industry calls weird, new scenarios “edge cases,” and that’s the term former Cruise CEO Kyle Vogt used to describe the incident that took the company’s robotaxis off the road last year.

In that incident, a woman was hit by another vehicle and thrown into the path of the Cruise autonomous vehicle. I would hazard a guess that the vast majority of us have never experienced that, but we would all know to stop rather than try to pull over, as the robotaxi did, dragging the injured woman several feet down the road with it. It’s not that the car was malicious; it simply couldn’t tell what was going on and had no ability to predict what impact its actions might have.

Read: Cruise Recalls Robotaxis For Software Fix To Prevent Them From Dragging Pedestrians Along The Road

However, autonomous vehicle researchers do have some strategies to prevent bad decisions. For example, human operators can take over in situations the software is unprepared for. Data from those incidents is collected, fed back into simulators, and the human’s response is used to try to train the vehicle for the future, as sketched below.
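
To make that feedback loop concrete, here is a minimal Python sketch of how such fleet data might be banked and replayed against an updated driving policy. Every class, field, and action name here is hypothetical, since none of the companies involved have published their internal tooling:

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Disengagement:
        """A logged event where a human safety operator took over."""
        sensor_snapshot: dict   # observations at the moment of takeover
        av_action: str          # what the planner was about to do
        human_action: str       # what the operator actually did

    @dataclass
    class ScenarioBank:
        """Replay buffer of edge cases harvested from the fleet."""
        events: list = field(default_factory=list)

        def log(self, event: Disengagement) -> None:
            self.events.append(event)

        def replay(self, policy: Callable[[dict], str]) -> list:
            """Run a candidate policy against every logged edge case,
            returning the events where it still disagrees with the human."""
            return [e for e in self.events
                    if policy(e.sensor_snapshot) != e.human_action]

    # Usage: log the takeover, then check whether retraining helped.
    bank = ScenarioBank()
    bank.log(Disengagement(
        sensor_snapshot={"obstacle": "pedestrian_under_vehicle"},
        av_action="pull_over",
        human_action="stop_immediately",
    ))
    naive_policy = lambda obs: "pull_over"  # stand-in that always pulls over
    print(len(bank.replay(naive_policy)))   # -> 1 unresolved edge case

In practice the policy would be a retrained neural planner rather than a stand-in function, but the shape of the loop is the same: log the disagreement, replay it in simulation, and keep retraining until the model matches the human.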

Training Challenges and Limitations

In addition, some companies simply try to think up as many scenarios as possible before the vehicle even hits the road. That can involve manually coding responses or staging scenarios to train autonomous vehicles in a safe place. While researchers admit they can never think up every edge case, this approach does at least take care of some of them.
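
As a rough illustration of what thinking up scenarios ahead of time amounts to, here is a deliberately simplified Python sketch; the scenario names and the fallback action are invented for the example:

    # Hypothetical, hand-enumerated scenario table: one entry per
    # situation the engineers thought of before deployment.
    HANDLED_SCENARIOS: dict[str, str] = {
        "stalled_vehicle_ahead": "change_lane",
        "emergency_vehicle_behind": "pull_over",
        "pedestrian_in_crosswalk": "yield",
    }

    def respond(scenario: str) -> str:
        """Anything not in the table is, by definition, an edge case
        and falls through to a conservative default."""
        return HANDLED_SCENARIOS.get(scenario, "stop_and_request_help")

    print(respond("pedestrian_in_crosswalk"))       # -> yield
    print(respond("pedestrian_dragged_under_car"))  # -> stop_and_request_help

The table can grow forever and still never be complete; the last line is exactly the kind of input nobody enumerated in advance.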

However, some experts now believe that these methods of training may never be able to effectively prepare a robotaxi for the chaos of the real world. Without causal reasoning, AI may never be able to navigate every edge case that’s out there.

“The first time it sees something that is different than what it’s trained on, does somebody die?” Phil Koopman, an IEEE senior member and a professor at Carnegie Mellon University, told Autonews. “The entire machine learning approach is reactive to things that went wrong.”

Solving the problem means more than just giving AI the ability to reason causally. In fact, the technology has so far been restricted from making too many causal judgments, precisely to avoid unpredictable false positives. That was a decision made for safety’s sake, but it leaves researchers with a paradox to solve if they want robotaxis that can truly navigate the road autonomously: should AVs make mistakes because they’re dumb, or because they’ve reached the wrong conclusion?
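
A toy sketch, under the assumption that causal inferences come with a confidence score, shows why that paradox is a tuning knob rather than a bug. The threshold value and action names are invented for illustration:

    CONFIDENCE_THRESHOLD = 0.95  # hypothetical, safety-driven setting

    def act_on_inference(inferred_cause: str, confidence: float) -> str:
        # High threshold: few false positives, but the car stays "dumb"
        # in novel situations. Low threshold: more reasoning, and more
        # wrong conclusions acted upon.
        if confidence >= CONFIDENCE_THRESHOLD:
            return f"respond_to:{inferred_cause}"
        return "fallback:minimal_risk_maneuver"

    print(act_on_inference("pedestrian_under_vehicle", 0.80))
    # -> fallback:minimal_risk_maneuver (safe, but blind to the real cause)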