A new study found that Tesla's "Full Self-Driving" still requires frequent human intervention to avoid potentially dangerous outcomes.
Over the course of more than 1,000 miles of driving in Southern California, drivers had to step in more than 75 times to stop dangerous behavior, according to tests conducted by AMCI Testing.
That is about one intervention every 13 miles.
The study tested Tesla models running Full Self-Driving builds 12.5.1 and 12.5.3 on city streets, two-lane rural highways, interstate highways, and mountain roads.
Some of the dangerous behaviors exhibited by Full Self-Driving included driving through a red light and veering into the oncoming lane on a curvy road while another car was approaching.
"Whether it's a lack of computing power, an issue with buffering as the car gets 'behind' on calculations, or some small detail of surrounding assessment, it's impossible to know. These failures are the most insidious," Guy Mangiamele, director of AMCI Testing, told Ars Technica.
He continued, "But there are also continuous failures of simple programming inadequacy, such as only starting lane changes toward a freeway exit a scant tenth of a mile before the exit itself, that handicaps the system and casts doubt on the overall quality of its base programming."
The study did acknowledge impressive feats as well, such as the system pulling into a gap between two parked cars to let another vehicle pass, and it praised Full Self-Driving's handling of blind curves.
The Latin Times reached out to Tesla for comment on the study's findings but had not received a response at the time of publication.