Author Topic: San Francisco group placing traffic cones on self-driving cars to disable them.  (Read 3996 times)

Offline tom66

  • Super Contributor
  • ***
  • Posts: 7014
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
And what happened to my flying car? They promised me one last century.   :-DD

 
The following users thanked this post: BrianHG

Offline mendip_discovery

  • Frequent Contributor
  • **
  • Posts: 985
  • Country: gb
In the UK the government has started work on determining who would be blamed in the case of a self-driving car. I think they have settled on the manufacturer being at fault, but we will see how much money the manufacturers have for lobbying against this. It's not as if we made illegal scooters legal when they're run via a gig-economy business.

On the point about a human being in the test car: I know people who work with the development cars for Ford, and they put lots of miles on the cars, taking them on random journeys just to get more data. So there is no real excuse for not having a person in the car, other than saving money.

There are tales of Teslas not liking some motorcycles, and there have been at least two cases where the car crashed into the rear of a bike. Citing the 30k annual road deaths is one of those straw-man arguments: the number of deaths on the roads has been dropping over the years even though there are more cars around than ever before.

I am interested to know if there is a spike in pedestrians being run over by electric cars, as they can be rather sneaky (you can't hear them coming). If the new self-driving cars are electric then I can see an increase in accidents per car.
Motorcyclist, Nerd, and I work in a Calibration Lab :-)
--
So everyone is clear, Calibration = Taking Measurement against a known source, Verification = Checking Calibration against Specification, Adjustment = Adjusting the unit to be within specifications.
 

Offline MTTopic starter

  • Super Contributor
  • ***
  • Posts: 1675
  • Country: aq
And what happened to my flying car? They promised me one last century.   :-DD

Flying submarines that climb trees have been around for millions of years... basically.

 
The following users thanked this post: BrianHG

Offline KE5FX

  • Super Contributor
  • ***
  • Posts: 2004
  • Country: us
    • KE5FX.COM
Simple: Put a human observer/emergency driver into each test car. I am stunned that they got permission to road-test with unmanned cars at this early development stage.

Apparently it's not that "simple" after all, because at least some of the fatal accidents involving experimental self-driving cars have occurred with a human observer in the car.  It's a "simple" human factors problem: people who aren't paying close enough attention to the car and its surroundings cannot be expected to take over at a moment's notice, and there is no way people are going to pay enough attention when the car is performing flawlessly 99%+ of the time. 

This is an incredibly difficult engineering problem.  How do we get from three or four nines to five or six?  Telling ourselves that it will be OK as long as a (presumably infallible) human is available to take over in an emergency is not the way forward. 
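
To put rough numbers on that gap (a back-of-envelope sketch only; I'm reading the "nines" as per-mile reliability, and the fleet size and mileage are hypothetical values chosen purely for illustration):

Code:
# Expected incident counts at various "nines" of per-mile reliability.
# All inputs are hypothetical illustration values, not real fleet data.
FLEET_SIZE = 500              # cars
MILES_PER_CAR_PER_DAY = 150

for nines in (3, 4, 5, 6):
    p_fail_per_mile = 10 ** -nines   # e.g. 3 nines = 1 incident per 1,000 miles
    per_day = FLEET_SIZE * MILES_PER_CAR_PER_DAY * p_fail_per_mile
    print(f"{nines} nines: ~{per_day:,.1f} incidents/day, ~{per_day * 365:,.0f}/year")

Even at five nines, a modest fleet would still log an incident most days; each extra nine has to come from the engineering, not from a bored human backstop.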

Putting traffic cones on prototype cars in hopes that companies will just give up is going to get more people killed in the long run.  At the moment, solid info regarding safety is hard to come by, but we are clearly not seeing the epidemic of carnage that detractors have been whining about.
 
The following users thanked this post: Someone

Offline bdunham7

  • Super Contributor
  • ***
  • Posts: 8007
  • Country: us
Apparently it's not that "simple" after all, because at least some of the fatal accidents involving experimental self-driving cars have occurred with a human observer in the car.

I didn't read the link.  Are they including Teslas?

Quote
This is an incredibly difficult engineering problem.  Telling ourselves that it will be OK as long as a (presumably infallible) human is available to take over in an emergency is not the way forward.  It's one of those cases where breaking a few eggs now absolutely will save countless lives later.

I agree that it is difficult.  I agree that the backup human is not a complete solution because of disengagement; improved safety is best had by having the human drive and the AI monitor, but that doesn't sell cars, make news, or attract investors.  I disagree that countless lives will be saved later, because I don't think that AI driving can ever be fully realized with our current roadways and traffic system, especially if the fake-it-till-you-make-it types insist on using only visible-spectrum cameras.  We'll see.  In any case, I'm not volunteering to be one of the eggs broken in these experiments.

« Last Edit: July 17, 2023, 06:01:40 pm by bdunham7 »
A 3.5 digit 4.5 digit 5 digit 5.5 digit 6.5 digit 7.5 digit DMM is good enough for most people.
 

Offline ebastler

  • Super Contributor
  • ***
  • Posts: 6973
  • Country: de
Apparently it's not that "simple" after all, because at least some of the fatal accidents involving experimental self-driving cars have occurred with a human observer in the car.  It's a "simple" human factors problem: people who aren't paying close enough attention to the car and its surroundings cannot be expected to take over at a moment's notice, and there is no way people are going to pay enough attention when the car is performing flawlessly 99%+ of the time. 

This is an incredibly difficult engineering problem.  How do we get from three or four nines to five or six?  Telling ourselves that it will be OK as long as a (presumably infallible) human is available to take over in an emergency is not the way forward. 

According to the reports posted early in this thread, most of the Cruise and Waymo incidents do not involve split-second decisions, but cars getting stuck in the middle of the road, creating major traffic jams and blocking emergency vehicles. A human copilot on standby should easily be able to resolve these.

Yes, it won't be perfect, and I agree with your point that staying alert at all times becomes increasingly difficult when there is nothing to do most of the time. But if this could reduce the number of annoying and potentially dangerous incidents by a factor of ten, it would go a long way toward increasing acceptance of the testing among citizens. As would the fact that the companies are willing to spend money on extra staff to limit the annoyances.
 

Offline KE5FX

  • Super Contributor
  • ***
  • Posts: 2004
  • Country: us
    • KE5FX.COM
Quote
  In any case, I'm not volunteering to be one of the eggs broken in these experiments.

Yeah, I edited that part because it came off as a bit on the Stalinist side.  >:D  Nobody wants to be one of the broken eggs, myself included.  But it appears that there are just not that many eggs being broken, and presumably the egg-breakage rate will only get better over time.

And yes, it does seem nuts for Tesla to limit themselves to visual-spectrum cameras.  I don't believe that's the case for the cars being sabotaged, though.
 
The following users thanked this post: tom66

Offline tom66

  • Super Contributor
  • ***
  • Posts: 7014
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
The big problem with Tesla is that they're testing with general members of the public as the monitor-driver.   These people aren't trained in how SDCs go wrong, and they fail in some of the worst and most unpredictable ways.   I don't think using visual cameras alone is a bad idea; it certainly complicates the solution, because you need to interpret the video data in 3D, but LIDAR sensors aren't exactly cheap, so if Tesla want to make $30k taxis then it's going to be difficult to justify LIDAR. The problem with the Tesla platform isn't perception but rather the interpretation of that 3D world created from the neural networks (and doing that interpretation on the current HW platform required a loss of redundancy; the car now only has 'one' self-driving AP processor).
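
To illustrate why losing the second processor matters: with two independent channels a fault in one can at least be detected by cross-checking the outputs, and the car can fall back to a minimal-risk manoeuvre; with a single channel, a silent fault flows straight through to the actuators. A minimal sketch (Python, with hypothetical steering commands standing in for the real planner interfaces):

Code:
def cross_check(steer_a, steer_b, tol=0.05):
    # Two-channel comparison: steer_a and steer_b are steering commands
    # (radians) from two hypothetical redundant planners.
    if abs(steer_a - steer_b) <= tol:
        return (steer_a + steer_b) / 2   # channels agree: pass the command on
    return None  # disagreement: escalate to fallback / minimal-risk manoeuvre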

Having a look at some of the videos on YouTube (AIDRIVR has some good examples), Tesla seem quite a bit further along than many in the SDC community want to admit, given that almost everyone has bet on LIDAR being necessary.  However, it's always those annoying edge cases that separate "interesting on YouTube" from "adequate for self-driving without a human".  It's worth noting that LIDAR struggles in heavy rain, fog and snow, and an SDC is going to need to work in those.  Vision might struggle in fog too, but it can operate in rain and snow, and LIDAR seems to degrade more significantly in fog.

Waymo seem to be winning the battle and Cruise are desperately chasing.  Waymo's interpretation of how to fit into a world with human drivers is near-superb in the sample of videos I've seen, and it can handle the painful situations like humans directing traffic away from accidents, emergency vehicles trying to pass, etc.   That said, it's fun to watch Waymo videos when the car sees rain (it really doesn't like it), though they seem to have improved over time.  Perhaps there's a reason Waymo started their testing in LA and Phoenix, AZ: two of the driest areas in the continental US?
« Last Edit: July 17, 2023, 09:45:02 pm by tom66 »
 

Offline redkitedesign

  • Regular Contributor
  • *
  • Posts: 111
  • Country: nl
    • Red Kite Design
When self-driving cars get into accidents that were caused by their actions, I wonder if anyone will be charged or held accountable, and if so, who?

When will we hold human drivers accountable for their actions? Unless there is proof that a human driver caused an accident on purpose, it will always be attributed to an "unfortunate mistake". Even things like speeding, running red lights and drunk driving aren't treated like first-degree murder.
And AI drivers don't tend to speed, ignore (or just miss) traffic lights, or be hampered by fatigue or the binary equivalent of alcohol.

So let the owner of the self-driving car pay for decent liability insurance (obligatory in most of the civilized world anyhow) and be done with it.
 
The following users thanked this post: tom66, Someone

Offline tom66

  • Super Contributor
  • ***
  • Posts: 7014
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
When self-driving cars get into accidents that were caused by their actions, I wonder if anyone will be charged or held accountable, and if so, who?

When will we hold human drivers accountable for their actions? Unless there is proof that a human driver caused an accident on purpose, it will always be attributed to an "unfortunate mistake". Even things like speeding, running red lights and drunk driving aren't treated like first-degree murder.
And AI drivers don't tend to speed, ignore (or just miss) traffic lights, or be hampered by fatigue or the binary equivalent of alcohol.

So let the owner of the self-driving car pay for decent liability insurance (obligatory in most of the civilized world anyhow) and be done with it.

^This.  And it seems a lot of bandwidth is wasted on people talking about things like the trolley problem... Oh no, what if a self-driving car faces hitting three grannies or one baby, which will it choose to murder? 

Well, it'll probably brake and steer towards the best free space, so if no individual can be avoided it may well hit both groups rather than "choosing" one.  It's not going to be programmed with morals or value judgements.  Good road design with improved visibility, plus SDCs that don't speed, should significantly reduce the likelihood of a pedestrian collision.
 

Offline TimFox

  • Super Contributor
  • ***
  • Posts: 8416
  • Country: us
  • Retired, now restoring antique test equipment
Interesting legal detail:  this year, the State of Illinois passed a law replacing the word "accident" with "crash" in all relevant statutes, to emphasize that not all motor-vehicle "surprises" are accidental.
Like most States, Illinois requires liability insurance for motor vehicles  https://www.ilsos.gov/departments/vehicles/mandatory_insurance.html  and enforces that law electronically, including suspending licenses when proof of insurance is not displayed.
 
The following users thanked this post: I wanted a rude username

Offline bdunham7

  • Super Contributor
  • ***
  • Posts: 8007
  • Country: us
When will we hold human drivers accountable for their actions? Unless there is proof that a human driver caused an accident on purpose, it will always be attributed to an "unfortunate mistake". Even things like speeding, running red lights and drunk driving aren't treated like first-degree murder.

It's not treated like first-degree murder unless all the elements of that crime are present, which they aren't in a typical crash.  But drivers can be, and routinely are, prosecuted and imprisoned for crashes where they have sufficient culpability.  Civil liability can be expensive as well: there was a recent case here in California where an injured (not killed) motorcyclist obtained a $20 million judgment for what was essentially an illegal lane change.
A 3.5 digit 4.5 digit 5 digit 5.5 digit 6.5 digit 7.5 digit DMM is good enough for most people.
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 7014
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
It's not treated like first-degree murder unless all the elements of that crime are present, which they aren't in a typical crash.  But drivers can be, and routinely are, prosecuted and imprisoned for crashes where they have sufficient culpability.  Civil liability can be expensive as well: there was a recent case here in California where an injured (not killed) motorcyclist obtained a $20 million judgment for what was essentially an illegal lane change.

That does sound pretty significant, but not knowing the motorcyclist's injuries it's hard to be certain.  I wouldn't take $20 million over being paralysed and in a wheelchair for the rest of my life, for instance.

The biggest one I know of in the UK is the Selby rail disaster: a 4x4 towing a trailer came off a motorway because its driver had not slept sufficiently, and ended up on the railway tracks.  A train hit it and derailed, and the derailed train was then struck by another train.   Ten people were killed (including both train drivers) and 82 were seriously injured.

By 2003, the driver's insurers had paid out over £31 million and had budgeted for up to £50 million in costs.  (In real terms, that's probably closer to $70 million USD now.)  The driver was criminally prosecuted for causing death by dangerous driving and received a five-year sentence.
 

Offline EPAIII

  • Super Contributor
  • ***
  • Posts: 1154
  • Country: us
« Last Edit: July 19, 2023, 10:01:28 am by EPAIII »
Paul A.  -   SE Texas
And if you look REAL close at an analog signal,
You will find that it has discrete steps.
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6947
  • Country: nl
Apparently it's not that "simple" after all, because at least some of the fatal accidents involving experimental self-driving cars have occurred with a human observer in the car.
The level of annoyance necessary to make a driver keep his eyes on the road with Level 2+ is something you would have to pay people quite a lot to put up with.

Gaze detection, and a pretty much immediate loud warning on eyes-off-road events.
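
A minimal sketch of that escalation loop (Python; gaze_on_road() and sound_alarm() are hypothetical stand-ins for a real driver-monitoring camera pipeline and the car's audio system):

Code:
import time

EYES_OFF_LIMIT_S = 1.0   # tolerated glance-away duration (hypothetical value)

def monitor(gaze_on_road, sound_alarm, poll_s=0.05):
    eyes_off_since = None
    while True:
        if gaze_on_road():
            eyes_off_since = None              # reset on every on-road glance
        elif eyes_off_since is None:
            eyes_off_since = time.monotonic()  # start timing the glance away
        elif time.monotonic() - eyes_off_since > EYES_OFF_LIMIT_S:
            sound_alarm()                      # the "pretty much immediate" warning
        time.sleep(poll_s)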
 

Offline redkitedesign

  • Regular Contributor
  • *
  • Posts: 111
  • Country: nl
    • Red Kite Design
^This.  And it seems a lot of bandwidth is wasted on people talking about things like the trolley problem... Oh no, what if a self-driving car faces hitting three grannies or one baby, which will it choose to murder? 

Well, faced with a granny in one lane and a baby in the other (on a 2-lane road with cars parked on both sides, 50 km/h speed limit), the AI driver would have noticed long ago that it couldn't confirm the road ahead was clear for enough distance, and it would have lowered its speed accordingly. Because AI drivers never get tired of expecting toddlers to appear between parked cars, nor would they be bothered by driving a safer 30 km/h where 50 is legally allowed.

The human driver, on the other hand, would panic, brake abruptly and jerk the steering wheel, causing the car to overturn and sweep sideways over both the granny and the toddler, while damaging at least four of the parked cars.

Thankfully, the driver would be unharmed thanks to the airbags. And since the driver didn't expect it, and according to the car's computer was nicely below the speed limit at 49 km/h, they won't be criminally prosecuted.

Sure, AI drivers aren't perfect. But I'm pretty sure far more emergency vehicles are blocked by asshole humans who "just need to grab this really important package" (i.e. a box of cigarettes) than by confused AIs.
 

Offline AVGresponding

  • Super Contributor
  • ***
  • Posts: 4829
  • Country: england
  • Exploring Rabbit Holes Since The 1970s
^This.  And it seems a lot of bandwidth is wasted on people talking about things like the trolley problem... Oh no, what if a self-driving car faces hitting three grannies or one baby, which will it choose to murder? 

Well, faced with a granny in one lane and a baby in the other (on a 2-lane road with cars parked on both sides, 50 km/h speed limit), the AI driver would have noticed long ago that it couldn't confirm the road ahead was clear for enough distance, and it would have lowered its speed accordingly. Because AI drivers never get tired of expecting toddlers to appear between parked cars, nor would they be bothered by driving a safer 30 km/h where 50 is legally allowed.

The human driver, on the other hand, would panic, brake abruptly and jerk the steering wheel, causing the car to overturn and sweep sideways over both the granny and the toddler, while damaging at least four of the parked cars.

Thankfully, the driver would be unharmed thanks to the airbags. And since the driver didn't expect it, and according to the car's computer was nicely below the speed limit at 49 km/h, they won't be criminally prosecuted.


Someone driving like that might well be prosecuted, at least in the UK: regardless of the speed limit, you are REQUIRED to "drive according to the road conditions", and that includes lowering your speed if your vision is impaired by parked cars.

Sure, AI drivers aren't perfect. But I'm pretty sure far more emergency vehicles are blocked by asshole humans who "just need to grab this really important package" (i.e. a box of cigarettes) than by confused AIs.

This is highly improbable when broken down per vehicle-mile.
nuqDaq yuch Dapol?
Addiction count: Agilent-AVO-BlackStar-Brymen-Chauvin Arnoux-Fluke-GenRad-Hameg-HP-Keithley-IsoTech-Mastech-Megger-Metrix-Micronta-Racal-RFL-Siglent-Solartron-Tektronix-Thurlby-Time Electronics-TTi-UniT
 
