Slowing Down Driverless Cars Would Be A Fatal Mistake
from the people-are-already-dying dept
Unsubstantiated driverless car hype may be annoying, but that shouldn't blind us to the real cost of unnecessarily delaying autonomous vehicle (AV) deployment.
Last week, after exploring new data from the California AV disengagement reports, Ross Marchand of the Taxpayers Protection Alliance argued that we should "put driverless cars back in the slow lane." California requires AV companies testing in the state to report each time a human operator takes over for a driverless car — an event otherwise known as a "disengagement." Marchand offers some interesting analysis, but ultimately reads far too much into a limited dataset and pushes for a restrictive policy prescription that would undermine public safety. The discussion is worth fleshing out because it reveals important limits to the "precautionary principle" mindset that is so common in AV discussions.
In 2017, Waymo — the self-driving car project formerly belonging to Google — reported driving over 350,000 miles on California roads with 63 total disengagements. Marchand claimed that, based on these data, Waymo's test vehicles are still not as safe as human drivers and that they are improving at a slower rate than those hyping AVs would have us believe. Further, he argued that until driverless cars can prove they are safer than human operators, we should keep them off public roads — and instead test them on expensive private tracks.
There are a few glaring issues with this argument. First, it overestimates how applicable and reliable the California disengagement data really are. As many commentators have pointed out, disengagement data are a poor measure of AV progress. Not only are disengagement reports an apples-to-oranges comparison across vehicle manufacturers who use different definitions, strategies and road conditions for testing, but Marchand drills down by comparing particular disengagement subcategories, leaving him with sample sizes of fewer than 20, several orders of magnitude too small to make meaningful comparisons. Furthermore, comparing disengagements to would-be fatalities is problematic given that a safety driver's presence enables testing in conditions that the vehicle is still learning to handle.
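To see why such small subsamples tell us so little, consider a rough statistical sketch (illustrative only, not part of Marchand's analysis or the California filings): treat disengagements as Poisson-distributed events and compute exact confidence intervals for the underlying rate. The 63-disengagements-in-roughly-350,000-miles figure comes from the reports cited above; the 15-event subcategory below is purely hypothetical.

```python
# Rough sketch of how uncertain small disengagement counts are.
# Assumes disengagements follow a Poisson process; the 15-event
# subcategory is a hypothetical example, not a reported figure.
from scipy.stats import chi2

def poisson_rate_ci(events, miles, conf=0.95):
    """Exact (Garwood) two-sided confidence interval for the rate, per 1,000 miles."""
    alpha = 1 - conf
    lower = chi2.ppf(alpha / 2, 2 * events) / 2 if events > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    return 1000 * lower / miles, 1000 * upper / miles

# Waymo's 2017 California totals cited above: 63 disengagements in ~350,000 miles.
print(poisson_rate_ci(63, 350_000))   # roughly (0.14, 0.23) per 1,000 miles

# A hypothetical subcategory with only 15 events over the same mileage:
print(poisson_rate_ci(15, 350_000))   # roughly (0.02, 0.07) per 1,000 miles
```

With only a dozen or so events in a category, the plausible range for the true rate spans roughly a factor of three, which makes comparisons between subcategories, companies or reporting periods mostly noise.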
Marchand also left aside the successful testing and deployment of Waymo's fully driverless cars in Arizona. Since November 2017, hundreds of AVs have been providing free taxi services in the suburbs of Phoenix without any safety driver in the front seat. To date, there have been no reported accidents or fatalities. This suggests what we've known all along: these companies already face a host of legal, political, economic, regulatory and publicity pressures that incentivize them to prioritize safety in AV deployment. They know that every bump, scrape and crash will make headlines (regardless of who is at fault) and will slow or — if it's serious enough — completely derail their path to market. Waymo obviously feels confident enough to take its hands off the wheel, and so far has been right. Why rip AVs off the roads when no one has been harmed?
Marchand's larger argument against AV testing on public roads provided a textbook example of the precautionary principle in practice. Simply put, the precautionary principle requires innovators to prove that their new technology will not harm society, rather than placing the onus on regulators and litigators to demonstrate that an innovation actually causes harm.
And to that point, Marchand fails to specify what exactly the harm of public testing has been. Public testing has not unleashed mass fatalities on society, or even mass fender-benders. Rather, it appears to be speeding up the feedback loop of better data and more-rigorous test environments, leading to faster improvements in autonomous technology.
As a society, we can't afford to wait until we are 100 percent certain that driverless cars are statistically safer than humans before letting them on the roads. As a report from RAND highlighted, it could take several decades to accumulate enough miles on private test courses to know beyond a shadow of a doubt that AVs are safer than their human counterparts. Relying on Marchand's precautionary principle approach would mean waiting decades while nearly 40,000 people die on our roads every year. Regulatory delay of this magnitude could, conservatively speaking, cost tens of thousands of lives.
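The arithmetic behind that RAND-style finding is easy to sketch. A minimal back-of-the-envelope version (our own simplified assumptions, not RAND's exact model) uses the statistical "rule of three": if a fleet drives N miles with zero fatalities, the 95-percent upper confidence bound on its fatality rate is roughly 3/N. The human benchmark of about 1.1 fatalities per 100 million miles is the approximate U.S. rate; the fleet size and annual mileage below are hypothetical.

```python
# Back-of-the-envelope sketch of how many fatality-free test miles are needed
# before the 95% upper confidence bound on an AV fleet's fatality rate drops
# below the human benchmark. Fleet figures are hypothetical.
HUMAN_FATALITY_RATE = 1.1e-8  # fatalities per vehicle mile (~1.1 per 100 million miles)

# Rule of three: zero fatalities in N miles -> 95% upper bound on the rate ~ 3 / N.
miles_needed = 3 / HUMAN_FATALITY_RATE
print(f"{miles_needed / 1e6:,.0f} million fatality-free miles")        # ~273 million

# Hypothetical private test fleet: 100 vehicles driving 25,000 miles per year each.
fleet_miles_per_year = 100 * 25_000
print(f"{miles_needed / fleet_miles_per_year:,.0f} years of testing")   # ~109 years
```

Merely showing that AVs are no worse than human drivers would already take hundreds of millions of fatality-free miles; demonstrating a meaningful improvement with statistical confidence takes far more, which is why confining testing to private tracks stretches the timeline into decades.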
That's not to say private test courses don't have a role to play in AV development. Indeed, Waymo already operates an extensive test track in Arizona where operators take real-world scenarios and experiment with hundreds of possible variations. This hybrid approach combines the advantages of real-world testing and private test courses. But forcing all AV testing onto private test tracks cuts off the real-world data necessary for this complementary approach and substantially raises the barrier to entry for new competitors.
To be clear, we should avoid over-hyping the progress made in AV development. Carefully taking into account the safety data will be a key part of this effort. But halting all real-world AV deployment is a heavy-handed "solution" desperately in search of a problem.
Caleb Watney (@calebwatney) is a technology policy associate at the R Street Institute. Marc Scribner (@marcscribner) is a senior fellow at the Competitive Enterprise Institute.
Filed Under: autonomous vehicles, california, driverless cars, human drivers, ross marchand
Reader Comments
63 disengagements (which could be literally anything) by automated vehicles versus 3,680 vehicle-related fatalities by human drivers
And he's claiming that automated vehicles aren't as safe as human drivers? Yeah, they seem safer.
Re:
And he's claiming that automated vehicles aren't as safe as human drivers? Yeah, they seem safer.
Er no - that's not what the statistics show at all.
350,000 miles represents the average mileage of about 50 cars with human drivers.
I'm guessing that the 3,680 fatalities represent rather more than 50 cars.
Also, "California roads" are fairly atypical of the rest of the world: they are in a first-world region whose infrastructure has mostly been built for the car.
These companies need to test their vehicles in places where much of the road layout predates the motor car (e.g. Europe) and in places that don't have much road infrastructure at all (e.g. Africa, Asia, Latin America).
Define Your Terms
Nobody here defines that -- so everybody is talking about different levels of vehicle automation & spouting statistics that are irrelevant to the primary issue of "Full Automation" vehicles (vehicles that can go Anywhere/Anytime on normal roads with zero human intervention).
There are clear and useful definitions of the degrees of automation in vehicles, which greatly ease the confusion about this technology and dampen the nonsense discussions (like we see in this TD post/thread).
NHTSA and SAE specify official categories of automated vehicles, numbered 0 through 5. Category 5 is Full Automation -- what most people envision when hearing the term "Driverless Vehicle". However, Category 5 vehicles do not exist anywhere and are not being tested anywhere. Category 2 vehicles are the only ones available on the market.
nhtsa.gov/technology-innovation/automated-vehicles-safety#issue-road-self-driving
Re: Define Your Terms
'Marchand also left aside the successful testing and deployment of Waymo's fully driverless cars in Arizona. Since November 2017, hundreds of AVs have been providing free taxi services in the suburbs of Phoenix without any safety driver in the front seat.'
They are not, as you say, available for purchase by the general public at this point.
Re: Define Your Terms
Waymo's Phoenix taxi vehicles are a weak Category 4, at best -- nowhere close to Category 5.
Waymo's taxis had a human driver in the driver's seat for almost all of the testing. Waymo now uses a safety observer in another seat for its intended "driverless" taxi service. If something goes wrong, the company safety observer can push the "pull over" button to stop the car.
Waymo's alleged "fully driverless" cars will only navigate in a small portion of the Phoenix metropolitan area around the southeastern suburb of Chandler... a very carefully chosen area that has been thoroughly pre-mapped in 3D for the vehicle's computer. That computer software will refuse any taxi destination outside the safe area.
Arizona was cherry-picked for its excellent weather and relatively uncomplicated road system. Try using this "fully autonomous" vehicle in a Chicago winter. None of these "driverless cars" can handle normal rain/snow/fog conditions.
Anytime you see media hype about "driverless vehicles" for taxi/bus service, you know it's phony. Current driverless technology cannot handle normal driving tasks -- so it's only being used for very restricted purposes and driving conditions.
Re: Re:
No, they don't. It is not necessary for the cars to handle these conditions in order to be used in places where those conditions do not exist. If driverless cars could handle just specific cities in the US, there would be a great market for commuter travel, and they would not even need to be able to handle driving on a dirt road.
Re: Re: Re:
If the cars are going to be used mainly in places like California to begin with, then they need to test cars there to begin with. Lessons will be learned, and those applied to more complicated rollouts. Demanding that a car be able to perfectly navigate a labyrinthine European city before being used in a grid-based American city is silly.
human statistics?
Re: human statistics?
https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in_U.S._by_year
Re: Re: Re: human statistics?
https://en.wikipedia.org/wiki/Transportation_safety_in_the_United_States
Re: Re: Re: Re: Re: human statistics?
https://duckduckgo.com/?q=traffic+safety+non-fatal+accidents&t=canonical&ia=web
Re: human statistics?
If disengagement equaled accident they'd be right to be more cautious. Fortunately, it doesn't.
Re: Re: human statistics?
These incidents represent, at minimum, a situation where the car would be stuck if there was no driver in the car - effectively equivalent to a breakdown.
One breakdown per year's driving doesn't look like an acceptable reliability record to me.
Re: Re: Re: human statistics?
One flat tire, one empty fuel tank, one getting lost with no map, one traffic stop for speeding... all of those are potential stops that a driverless car either prevents or makes extremely difficult.
And as mentioned in the article, a number of these disengagements occurred in areas where the car was being intentionally pushed to its limits to see what it would do, so that the developers could account for these situations in the next stage of development. That's what beta testing is for.
rather than placing the onus on regulators and litigators to demonstrate that an innovation actually causes harm
excellent point. i bet the food and drug people would like to hear more about this innovative approach. we could cure overcrowding in a decade or less.
Re:
If you want to compare it to food and drugs, it would be more comparable to banning pizzerias from selling pizza because more people get burnt eating pizzerias' pizzas than making pizzas themselves.
Re: Re: Re: Re:
I am not aware of driverless cars passing rigorous testing. If you're talking about the limited manufacturer testing that has been done, then I'm reminded of the testing done by cigarette manufacturers that purported to show how safe cigarettes were.
Re: Re: Re: Re: Re:
how safe cigarettes were
bullshit. they discovered that cigarettes are good for your health and advertised them as such.
yes, that's pretty similar to the whitewashing we're getting from the car companies. to me it is impressive that they do as well as they do, but there's a whole world out there that they don't encounter.
they are insisting that they use every one of us and our loved ones and friends as their lab rats.
Re: Re: Re: Re:
..... by the manufacturer of chemical X ?
Why should I believe them?
Re: Re: Re: Re:
Based on that set of results, I'm not sure "dangerous" can be entirely ruled out.
“until driverless cars can prove they are safer than human operators, we should keep them off public roads”
This would be an interesting requirement to also apply when giving out driving licences to humans.
There is a corollary...
What this argument is saying is,
After all, who wants to die / wants others to die?
But you know what else would prevent people dying on roads? Passengerless cars.
Re: There is a corollary...
But nearly all accidents occur for one of three reasons. One: a driver was being reckless (such as speeding). Two: a driver was not fit to drive (such as being drunk) but determined it was the only way they could get from point A to point B (such as driving home from a bar). Three: a lapse in concentration (such as using a cellphone, not checking mirrors, ignoring a stop sign).
A driverless car would not be reckless, nor would it reasonably be distracted. ("Distraction" caused by hacking a vehicle to make it unsafe is a completely different concern -- like hijacking an airplane vs. safety of autopilot. It absolutely needs to be addressed, but is a different problem than the actual driving algorithms.)
It would also provide an easy option for those who are not fit to drive. And for those who are too stubborn, it could even be enforced the way it is for DUI convictions. Want to drive yourself instead of using autopilot? Blow into the attached breathalyzer to prove you're legally sober.
And for crashes caused by distraction, there are a number of fender-bender situations that can be avoided by having some level of autonomy in either vehicle. Even most non-autonomous vehicles these days are produced with these override features installed -- those automatic brakes and lane-drift alerts that the commercials are always so proud to tout. So even if your car is not fully autonomous, you have some level of protection which can respond instantly to avoid a potential crash caused by a human driver, which another human driver may not have been aware of or had time to react to.
We certainly wouldn't see all 40,000 deaths disappear. But we'd see a drastic reduction in them.
Re: Re: There is a corollary...
idk ... from my experience the number one cause of automobile accidents is tailgating coupled with lack of attention. I see it in the commute every day. Irresponsible drivers are causing everyone's insurance rates to increase. Why do we not hear about how this is socialism ... bad bad socialism.
Re: There is a corollary...
It's really not. It's saying that 1) it's expected that driverless cars will cause fewer fatalities than human drivers. And number 2, as far as I can tell, you just straight made up.
"I propose we call this the Ford/GM didn't just put $345,212 into my personal offshore bank account law".
GM, huh?
https://www.detroitnews.com/story/business/autos/general-motors/2018/01/12/gm-driverless-car-fleet-cruise-av/109381232/
Prototypes vs Production
A logical flaw is the assumption that every disengagement automatically means an accident. Without knowing the details of each event, it is hard to say whether an accident would actually have occurred or whether the fault lay with the AV.
In which case, surely, they'd never be safe to drive on public roads, as they would not be updated to deal with all the potential scenarios that can occur on such roads, scenarios they would likely encounter, and could be adapted for, if tested there?
I say put them on the roads first.
It may just be about taxes
Then there are the unions for those professions, which contribute to political campaigns and would no longer be doing so, as their raison d'être would be gone.
Aerospace Engineer-Member SAE AV Testing Task Force
I received the IEEE Barus Ethics Award for whistleblowing regarding the DHS Deepwater program post 9/11 - (Google me for much more) - http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=4468728
I am also a member of the SAE On-Road Autonomous Driving Validation & Verification Task Force
While your desire to avoid slowing down the creation of AVs is laudable, you are actually causing the problem you are trying to avoid. As Waymo's recent paradigm shift after 8 years of getting it wrong clearly shows, most AV makers are not only using approaches that will never yield a fully autonomous vehicle, and will not save lives doing so, they will soon take lives needlessly as the hyped and benign scenarios you and everyone else find so impressive evolve into complex, dangerous and accident scenarios. When they run thousands of accident scenarios, thousands of times each, and cause thousands of accidents, injuries and casualties, you will do more than slow down.
Please find detailed information below on the issues and how to resolve them.
Impediments to Creating an Autonomous Vehicle
https://www.linkedin.com/pulse/impediments-creating-autonomous-vehicle-michael-dekort/
Autonomous Levels 4 and 5 will never be reached without Simulation vs Public Shadow Driving for AI
https://www.linkedin.com/pulse/autonomous-levels-4-5-never-reached-without-michael-dekort
Disengagements and Miles Driven Mean Almost Nothing
https://www.linkedin.com/pulse/disengagements-miles-driven-mean-almost-nothing-michael-dekort/
DoT, NHTSA and NTSB are Enabling Autonomous Vehicle Tragedies
https://www.linkedin.com/pulse/dot-nhtsa-ntsb-enabling-autonomous-vehicle-tragedies-michael-dekort/
Please let me know if you would like to discuss this further.
Re: futility
the public is heavily influenced by the mainstream media and massive internet social chatter... which are all extremely biased toward the cool/glamorous high-tech aura of "Driverless Cars." Facts are unimportant. Your resume is just noise to them.
It's not what you say that counts -- it's what they hear !
Re: Re: futility
With the way he's putting across his point, absolutely he is. He gives no reason to believe him, while he's handily knocking down strawmen and hyperbolic versions of what people might actually be saying.
Maybe people should try not simply attacking those with opposing opinions, attacking positions most of them have not stated, and arguing logical fallacies like appeal to authority, and instead refer to trustworthy primary sources rather than random opinion posts on social media.
It's correct to be cautious, but posts like the above really aren't going to change anyone's mind.
Re: Re: futility
Would you rather have blind obedience? I'm sure some would say yes.
Being able to own an automobile and being able to take it wherever you want, when you want are significant freedoms. The "People would be safer without humans behind the wheel" crowd need to understand that a future with fully autonomous vehicles being enforced as the norm means less personal autonomy for your average person. There's no getting around the fact that it would be a huge transfer of power from individuals to corporations and governments, and that's something that I feel would be worth an article or two on Techdirt (if there aren't any already).
Re: locus of agency
And it becomes more acute when we talk cars... who is deciding to what degree it will obey its occupants' commands?
This is the conversation being avoided...the contract left implicit that causes all the trouble.
Not that these things can’t save lives....and let me do stuff on my commute
Re: Re: Re: locus of agency
Standard Car: Drive off that bridge. Car: going forward, steering towards the barrier...oops, we are in the drink!
Autonomous Car: Drive off that bridge... do we hit the barrier or not, and how hard? Why? Who made the decision?
Re: Re: Re: Re: locus of agency
That would be an incredibly specific question that would have to be dealt with on a case by case basis. It's not a real argument for deferring this tech. Hell, half the time we don't really know why the human drivers made certain decisions, and I dare say that the automated system would have a lot more "training" in how to deal with split-second extreme decisions than the average human driver. Plus, it would hopefully provide detailed logs that make the sequence of events clearer than it can be with current post-crash analysis, at least in terms of how the decisions were made if not why, and the lessons learned used to improve software decisions.
Anyway, the security of these devices is a valid concern, and frankly a government agency *will* be the one setting the standards as to whose decisions take the highest priority. Whether or not that is the sense of "agency" you meant, a government agency will be heavily involved in these processes.
Re:
Depends what your argument is. Most of the arguments do indeed seem to be "I don't want to give up the old ways" and "new tech bad old tech good". Other arguments that are most intelligently arrived at are welcome.
Put it this way - there are typically two types of drivers: those who really just need a way to get from A to B, normally on a regular commute, and have no access to decent public transport, and those who genuinely love owning their own vehicle and everything it entails. If the first group outnumbers you and your only argument is that you're in the second group, you're going to lose the argument.
"The "People would be safer without humans behind the wheel" crowd need to understand that a future with fully autonomous vehicles being enforced as the norm means less personal autonomy for your average person."
Or... we understand that trade-off and think that the lower autonomy is worth the reduction in death & injury through stupid human decisions that will be prevented.
It also depends on what you class as "freedom". I'd sure love the freedom of being able to have a couple of extra beers at a party without trying to get someone else to drive me home after, or to work on a few things during my commute while the car drives me, over and above the freedom to do whatever it is that I don't do with my car at the moment. Your tastes may vary, but that doesn't mean my preferences are wrong.
"There's no getting around the fact that it would be a huge transfer of power from individuals to corporations and governments"
Not really as much as you think. Those already control everything from the vehicles you drive to the roads you drive on. Unless you think that they're going to tell you where and when you can drive (which will be the one thing that turns people against this tech, for sure), they're not getting much more power over you.
Otherwise, again, what is worse in the minds of most people - the government being able to actually force cars to stick to that 70mph speed limit, or the asshole who just caused a 20 car pileup because he used his freedom to drive at double that limit before he lost control? You might argue freedom, but the people stuck in the 10 mile tailback trying to get home would sure love some of that automated control at that point in time.
Re: Re: tradeoffs with autonomous
Start with my point-wise estimate from personal experience: one acquaintance killed in a car was drunk or high and joyriding, distinctly stupid; another hit ice and smashed into a pole. Do we have a good model for the actual causes (besides the common observation that half of fatalities involve alcohol) of both injuries and fatalities involving cars?
As for manual control: I've been to plenty of events where parking is "on the grass", and until autonomous transport becomes common enough near my rural home that I'm comfortable not owning a car, I'll need my own vehicle. I've also borrowed parking spots on the side of the road, and used an overgrown gravel drive. I have overridden "lane-keeping assist" many times to go around trees and things that have fallen or almost fallen into my roads from the latest windstorm, not to mention runners and cyclists. How hard can I ask my vehicle to push the speed limit? 10 mph under? (it's a good day, I want to finish something before I hit work) 10 mph over? (it's a bad day, the boss is mad I'm not there yet) 20 mph over? (I'm having trouble breathing, asthma, we are on the way to the hospital) Oh, and someone tries to wave me down on the road... now what? (one time that was a domestic violence victim, but what if I'm afraid?)
Now, we get to a law-enforcement interaction with my vehicle and me....what if I'm a suspect and they want my vehicle to stop? What if I want to ensure that happens in a relatively safe place, like in front of a crowd, so there are lots of eyes reporting the abuse law enforcement is all-too-ready to heap on suspects?
I think I have just shown that proper autonomous driving has multiple value- and context-based decision problems on the same level of complexity as moderating online forums.
Now, for an interesting idea:
Hey, can I leave my personal autonomous vehicle double parked, NYC style, or send it to a parking lot a mile away from my dropoff? And I'm sure happy to have it go visit the mechanic while I'm at work....
That may possibly be true. However, did anyone actually claim they would be during the prototyping stage or insist that they will never improve with further testing before they come to the mainstream market?