You note that I said "in an ideal world". In the real world, an automated system drives into the back of a truck and it makes world news. I personally know two human drivers who have done the same thing, and neither even made local news (both survived, but with long hospital stays and months to a not-quite-complete recovery). The total number of human drivers who have performed this feat must be in the thousands.
I actually believe it is the lawyers who will end up deciding this, but I can't predict which group of lawyers will win: the ones suing car companies for fielding a less-than-perfect autonomous system, or the ones suing car companies for not fielding an autonomous system that would have prevented a stupid driver's death.
I think demanding that autonomous systems' performance greatly exceed humans' statistically is not the wrong decision, even in an ideal world. Sure, some of that is driven by humans thinking that even if some other idiot drives into the back of a truck, they won't. But autonomous systems should work with humans to achieve greater safety, not replace them as a publicity stunt or gimmick. There are driver-assistance systems that do this quite effectively, though not yet perfectly. Volvo's Automatic Emergency Braking is one example: it can't guarantee you'll never drive into the back of a truck, but it will prevent you from doing so 99% of the time.
Expectations of automated systems are not based on human fallibility at the same task. If I forgetfully burn my dinner on the stove and start a fire, that's human error. If an electric teakettle fails to shut off, its thermal cutout fails, and a fire ensues, the manufacturer will be sued and will likely lose, and unless the failure is shown to be a one-off, the product will be recalled. That failure mode is not acceptable even in small numbers.