That's all good theory, but in trials driverless cars seem to be involved in significantly fewer accidents per mile than human drivers.
Driverless Car Miles™ stand at about (IIRC) 130 million, with one death. That could be an outlier either way: the 'True Rate' might be a billion miles per death (this one just happened unusually early in the long future-history of the statistic), or we really should have had half a dozen deaths by now for this many autonomous miles with systems of the current capability, and it's just that in the other cases people were paying closer attention and seized control back in good time.
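To put numbers on how wide 'either way' is, here's a quick Python sketch using the standard exact Poisson confidence interval. The 130-million figure is my half-remembered one above, not an official count, so treat the output as illustrative:

```python
# Rough sketch, not gospel: exact Poisson 95% interval for the true
# fatality rate given 1 observed death in ~130 million autonomous miles
# (the half-remembered figure above, not an official count).
from scipy.stats import chi2

deaths = 1
miles = 130e6
alpha = 0.05

# Standard exact (Garwood) interval for a Poisson count k:
#   lower = chi2.ppf(alpha/2, 2k) / 2
#   upper = chi2.ppf(1 - alpha/2, 2(k + 1)) / 2
lo = chi2.ppf(alpha / 2, 2 * deaths) / 2
hi = chi2.ppf(1 - alpha / 2, 2 * (deaths + 1)) / 2

print(f"expected deaths in {miles:,.0f} miles: {lo:.3f} to {hi:.2f}")
print(f"one death per {miles / hi:,.0f} to {miles / lo:,.0f} miles")
# -> roughly 0.03 to 5.6 expected deaths: one death per ~23 million miles
#    up to one per ~5 billion miles. Both the 'billion miles per death'
#    and the 'half a dozen deaths by now' readings sit inside that range.
```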
(Non-autonomous deaths run at about 1 per 0.1 billion* miles, it seems, which means that a presumed 1 per 0.13 billion isn't far off human-level ability at all. And that human figure includes significant drunk-driving, so Tesla is merely 'not quite so drunk', if we take the minimal data at strict first sight; quick sketch below...)
* (short-scale billion, i.e. 10^9, given we're talking about the land of the colonials)
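And to make the 'not quite so drunk' quip concrete, a back-of-envelope comparison. The 30% alcohol share is my own round-number assumption for the exercise (US figures hover around a third); everything else is just the rates above:

```python
# Back-of-envelope comparison of the point estimates quoted above.
# The 30% drunk-driving share is an assumed round number, not a hard
# statistic from the parent post.
human_rate = 1 / 0.1e9    # deaths per mile, all human driving
auto_rate = 1 / 0.13e9    # deaths per mile, from the one-death sample

print(f"autonomous vs human: {auto_rate / human_rate:.2f}x")        # ~0.77x

drunk_share = 0.30        # assumed fraction of human deaths involving alcohol
sober_rate = human_rate * (1 - drunk_share)
print(f"autonomous vs sober human: {auto_rate / sober_rate:.2f}x")  # ~1.10x
# i.e. better than the average (drunks included) human, but roughly on par
# with, or slightly worse than, the sober ones -- 'not quite so drunk'.
```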
So for all that a computer could be fooled by ambiguous sensory data (a "swirl of autumn leaves", say), it seems that, on balance, humans are distracted by more things than the robots are.
But not by all that much, as far as we can tell.
Additionally, the events that did cause an AI driver to fuck up have proven perfectly capable of causing a human driver to fuck up too.
A sixteen-wheeler pulling across the path of an (originally) distant car that is driving beyond the apparent limits of its ability to see is not unknown in a fully-human scenario, but humans who do this tend to know (or ought to know) that they're driving beyond their abilities. The Tesla, it appears, was oblivious to its impairment, and was not trained well enough in those other circumstances to convert confusion into a safer failsafe reaction.