Arizona Bans Uber's Self-Driving Car Tests; Still Ignores How Many Pedestrians Get Killed
from the plenty-of-blame-to-go-around dept
By now, most folks have read that Uber (surprise) was responsible for the first pedestrian fatality ever caused by a self-driving car in the United States. Investigators have found plenty of blame to go around, including a pedestrian who didn't cross at a crosswalk, a safety driver who wasn't paying attention to the road (and therefore didn't take control in time), and Uber self-driving tech that pretty clearly wasn't ready for prime time compared to its competitors:
"Uber’s robotic vehicle project was not living up to expectations months before a self-driving car operated by the company struck and killed a woman in Tempe, Ariz.
The cars were having trouble driving through construction zones and next to tall vehicles, like big rigs. And Uber’s human drivers had to intervene far more frequently than the drivers of competing autonomous car projects."
All of the companies that contributed tech to Uber's test vehicle have been rushing to distance themselves from Uber's failures here, laying the blame squarely at Uber's feet. One made it clear that Uber had disabled some standard safety features on the Volvo XC90 test car in question:
"Uber Technologies Inc. disabled the standard collision-avoidance technology in the Volvo SUV that struck and killed a woman in Arizona last week, according to the auto-parts maker that supplied the vehicle’s radar and camera.
“We don’t want people to be confused or think it was a failure of the technology that we supply for Volvo, because that’s not the case,” Zach Peterson, a spokesman for Aptiv Plc, said by phone. The Volvo XC90’s standard advanced driver-assistance system “has nothing to do” with the Uber test vehicle’s autonomous driving system, he said."
Mobileye, the company whose collision-avoidance technology sits behind Aptiv's system, was also quick to pile on, noting that, if implemented correctly, its technology should have been able to detect the pedestrian in time:
"Intel Corp.’s Mobileye, which makes chips and sensors used in collision-avoidance systems and is a supplier to Aptiv, said Monday that it tested its own software after the crash by playing a video of the Uber incident on a television monitor. Mobileye said it was able to detect Herzberg one second before impact in its internal tests, despite the poor second-hand quality of the video relative to a direct connection to cameras equipped to the car."
In response to Uber's tragic self-driving face plant, Arizona announced this week that it is suspending Uber's self-driving vehicle testing in the state indefinitely:
NEW: In light of the fatal Uber crash in Tempe, Governor Ducey sends this letter to Uber ordering the company to suspend its testing of autonomous vehicles in Arizona indefinitely #12News pic.twitter.com/gO5BZB9P2e
— Bianca Buono (@BiancaBuono) March 27, 2018
Plenty have justly pointed out that Arizona shares the culpability here, given that its regulatory oversight of Uber's testing was arguably nonexistent. That said, Waymo (considered by most to be well ahead of the curve on self-driving tech) hasn't had similar problems, and there's every indication that a higher-quality implementation of self-driving technology (as the various vendors above attest) might have avoided this unnecessary tragedy.
Still somehow lost in the finger pointing (including Governor Doug Ducey's "unequivocal commitment to public safety") is the fact that Arizona already had some of the highest pedestrian fatality rates in the nation (of the human-caused variety). There were ten other pedestrian fatalities in the Phoenix area alone during the same week as the Uber accident, and Arizona had the highest rate of pedestrian fatalities in the nation last year, clearly illustrating that Arizona has some major civil design and engineering questions of its own to answer as the investigation continues.
Again, there's plenty of blame to go around here, and hopefully everybody in the chain of dysfunction learns some hard lessons from the experience. But it's still important to remember that human-piloted vehicles cause 33,000 fatalities annually in the US alone, a number that should be dramatically lower once self-driving car technology is inevitably (and correctly) implemented.
Filed Under: arizona, autonomous vehicles, pedestrians, safety, self-driving cars
Companies: uber
Reader Comments
So how many humans are you willing to kill?
And as I've noted, this minion may be only a re-write bot. That would also explain the at least seven "accounts" having six-year gaps yet returning without showing the least awareness of that LONG time, nor how ODD it is that they suddenly recall Techdirt and their passwords...
Re: So how many humans are you willing to kill?
If anything, you are the one willing to keep the status quo of thousands dying. Who is killing people: the ones striving for safer cars and roads, or the ones trying to block such developments? Did vaccines come to fruition without any deaths? Do you really think the people developing medicine want to see anyone die during the testing phase? Are you that stupid?
Re: Re: So how many humans are you willing to kill?
It doesn't. A single datapoint is not statistically significant. You can't draw a trend line if you only have one point.
Re: Re: So how many humans are you willing to kill?
Uber killed a person with bad driving. Now its license has been suspended (temporarily) and it'll need to reapply and convince the DMV it can drive safely. Other self-driving companies can continue operating.
Is that unreasonable? I'd expect the same if a human killed someone with their car.
Also see the Ars stories saying that Uber's cars are 25 times as deadly as human drivers, and far less safe than other self-driving cars. OK, the sample size is small, but a temporary pause isn't crazy.
Re: Re: Re: So how many humans are you willing to kill?
No, it's not unreasonable. The people calling for the entire sector of tech development to be discontinued due to a single incident are the unreasonable ones.
"Also see the Ars stories saying that Uber cars are 25 times as deadly as humans"
The ones that lie with statistical manipulation to advance a certain narrative? No matter how you spin it, that claim is ridiculous spin aimed at scaring the ignorant.
Re: tl;dr
Would that be the article where the author clarified that claim after about a dozen people in the comments pointed out that the data -- or rather, datum -- we have is not significant enough to support it?
Re: tl;dr
Except it's not as simple as that, because the limited data we have are so limited that they don't support the comparison you're making.
https://en.wikipedia.org/wiki/Significance_arithmetic
https://en.wikipedia.org/wiki/Propagation_of_uncertainty
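To make that concrete, here's a minimal Python sketch of the exact (Garwood) Poisson confidence interval you get from a single observed event. The three million test miles is purely an illustrative assumption, not a reported figure:

    # Exact 95% confidence interval for a Poisson rate estimated from
    # a single observed event, via the chi-squared relation.
    from scipy.stats import chi2

    events = 1      # one pedestrian fatality
    miles = 3e6     # assumed autonomous test miles (illustrative only)
    alpha = 0.05

    lower = chi2.ppf(alpha / 2, 2 * events) / 2            # ~0.025 events
    upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2  # ~5.57 events

    print(f"point estimate: {events / miles:.2e} fatalities/mile")
    print(f"95% CI: {lower / miles:.2e} to {upper / miles:.2e}")

The interval spans more than two orders of magnitude, which is why a single datapoint can't support a "25 times as deadly" comparison in either direction.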
Re: Re: So how many humans are you willing to kill?
It takes one billion miles of motoring for 660 accidents with human drivers, so even Google has not done enough miles to be able to say that it is better than a human. I suggest that a benchmark should be 10 million miles, over which humans would have had a few accidents.
I think self-driving cars should be completely banned from the road until they have done 10 million miles of simulated driving. By that I mean that the software should be QAed for 10 million miles before being allowed on the road, and if you change the code you need to do it again, sort of like validating drugs. This is an infant industry and it needs to prove that it is better than humans, and we can't risk it failing because some company pushes out an OTA release because of time pressures.
The new Nvidia technology makes this completely possible and car companies need to buy the tech and do the job properly.
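For what it's worth, the commenter's own figures make that benchmark easy to sanity-check. A minimal sketch, using the 660-accidents-per-billion-miles number quoted above:

    # Expected human-driver accidents over the proposed benchmark
    # distance, at the accident rate quoted in this comment.
    human_accident_rate = 660 / 1e9   # accidents per mile
    benchmark_miles = 10e6            # proposed 10-million-mile benchmark

    expected = human_accident_rate * benchmark_miles
    print(f"expected accidents at human rates: {expected:.1f}")  # ~6.6

So over a 10-million-mile benchmark, a human-equivalent fleet would be expected to have roughly half a dozen accidents, which is enough events for a meaningfully better (or worse) system to start showing up in the numbers.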
Recommended Reading
The Ones Who Walk Away From Omelas by Ursula K. Le Guin. In Omelas, Le Guin posits a joyful, healthy, utopian community that is powered by a single forsaken child, a literal kid who is shown no compassion or mercy and seldom lives to her teens. Upon the child's demise, she is replaced with another child given the same treatment, and by this inflicted misery, Omelas' luster, prosperity and tranquility are preserved.
The story is a classic, commonly inflicted on 7th-to-8th grade English students in the US to make them cry and fiercely discuss morality.
The notion of such a place is horrific until we realize that we fuel the well-being of our own society with countless miserable lives that suffer and perish with little meaning, whether we work them to death, pack our prisons with them or throw them into meaningless wars. And the society propped up by all these squandered lives is a far, far reach from paradise or utopia. It's certainly not Omelas.
So we are evidently willing to kill thousands. Millions.
And any effort to advance technology to reduce that number is a good investment out of desperation that some day things might be better.
PS: note also the plug for Waymo, GOOGLE subsidiary.
But it will. Inevitable. We have ONLY Google's word for how often "intervention" is needed, with no independent audit. Google, surely also a spook front, has its own "Men In Black" squad that covers up the incidents.
And as minion hints, the Techno-Uber-Alles types are resolved to get this no matter what consequences to humans.
Re: PS: note also the plug for Waymo, GOOGLE subsidiary.
The tech will get better over time, but I find it odd that we're doing live testing in crowded urban settings already. We're just not there yet; it's still experimental (to wit: the useless human "driver" just in case).
Re: Re: PS: note also the plug for Waymo, GOOGLE subsidiary.
"AI will never be able to predict individual human behavior."
Humans will never be able to predict individual human behavior, so we can never expect AI to do so either.
"The tech will get better over time but I find it odd that we're doing live testing in crowded urban settings already."
Because you have to test in real-world situations to know whether your isolated testing has actually simulated the real world effectively. And given how complex the real world is, it's never going to be practical to perfect these systems without leaving the testing grounds. This case seems to be more about Uber's poor implementation and Arizona's overly permissive regulations than proof that we're doing this too soon.
Nothing to see here
NONE of the autonomous car companies testing in Arizona are required to do a basic test to put their cars on the road.
Arizona requires no testing of the vehicles to determine how safe they are.
Arizona recently updated its rules and did not include any such provisions.
There is nothing preventing this from happening again with another car company, and the governor has done NOTHING to even appear to try to prevent it.
Re: Re: Nothing to see here
I'll guess there's some hyperbole in there too, but I don't see the point of doing the research to answer another tiring post by someone desperate to pass blame on to Uber rather than examining what really happened.
Re: Re: Re: Nothing to see here
Blame isn't a zero-sum game; it's possible to assign blame to multiple parties. The victim shouldn't have crossed the street where she did; the safety driver should have had her eyes on the road; the car should have recognized an obstacle and swerved or braked. And that's just for starters. As the article notes, the Phoenix area's high number of pedestrian fatalities indicates a failure of city planning. It's possible that the speed limit should be lower on that stretch of road, or that other mitigating factors could be implemented. (That section of Mill also curves, which isn't really a city planning problem, it's a geography problem; there's a mountain that it has to curve around.)
It's also possible that Arizona's lack of regulations for AVs may be partially to blame, though I'm not prepared to say that for certain before the investigation concludes. This may be an issue where additional regulation could have prevented whatever failure resulted in the car not braking or swerving, or it may not.
Regardless, there were multiple points of failure here. There usually are.
If a human kills another human there is a process to deal with it. When a dev writes code for a machine that kills someone, why should they have zero culpability?
Re: Re:
Let me know when it's Developed.
Re: Re: Re:
There's no such thing as perfect technology, so that can't be the standard. It's already better than humans in normal driving conditions, so what exactly are you looking for?
Technology is always developing. The cars you drive every day on your public streets are developing technology. You're imagining that there's some finish line that simply doesn't exist.
Re: Re: Re: Re:
There may be no "finish line," but there is a point when the general public learns to trust automated vehicles more than human-operated ones. Until we get there, the corporations ought to be forced to bond their testing so there is a readily accessible fund when they inevitably screw up, something easier to access than having to sue some entity with massively deep pockets.
Re: Re: Re: Re: Re: SKYNET
I'm sure it wouldn't be biased towards its own kind, or decide to take over the entire world either, but if it did... we can just patch it later, right?
Re: Re: Re: Re: Re:
"The difference here is that it's not a single individual driver you can hold accountable when they screw up. In this case you have to try to hold the corporation behind the tech accountable..."
But what's the alternative? No individual is ever going to develop a self-driving car, so there will always be a corporation responsible. It's not that different to a conventional car with a design flaw that proves fatal. It's not an individual engineer that (maybe) gets held to account, it's the company with their name on the back.
How do we deal with a machine that kills a human?
The same way we've always dealt with machines killing humans: regard it as an accident. Trace the situation with diagnostics.
Ultimately we like having someone to blame. If a driver fails to manage a situation and accidentally kills a pedestrian, we use that as justification to make his life Hell, even if there was no way his limited reflexes and attention span could navigate the situation.
The failure here is not the robot driver; it's circumstances that challenge the presumption that behind every death there has to be malice or stupidity, or at least fault.
Just as there's (typically) no malice behind industrial accidents involving large machines, there's also no malice behind a vehicle that hits a pedestrian. The failure may end up being a software glitch, a sensor failure or (probably) a combination of several factors, including a bad situation.
So rather than assigning blame, we should seek to fix it. Because I betcha our driving software is already better than most human drivers. We just like having a natural person to blame and take reprisals against.
Re: How do we deal with a machine that kills a human?
Dream on. I betcha it isn't. Not even close.
Re: Re: Re: How do we deal with a machine that kills a human?
You first.
Re: Re: Re: Re: How do we deal with a machine that kills a human?
But, thanks for demonstrating how weak the grasp of logic is among the people arguing against this tech so far.
Re: Re: Re: Re: Re: How do we deal with a machine that kills a human?
Oh, Pauly, Pauly, Pauly. You know your previous statements are still visible, right?
Re: Re: Re: Re: Re: Re: How do we deal with a machine that kills a human?
You guys aren't really too stupid to understand how a conversation works, are you? You're just pretending, surely?
Re: Re: Re: Re: Re: Re: Re: Re: How do we deal with a machine that kills a human?
Are you hallucinating or are you just getting desperate for attention again?
Re: Re: Re: Re: Re: Re: Re: Re: Re: How do we deal with a machine that kills a human?
In that case, he was just asking for stats to back up your guesswork.
I gave you the benefit of the doubt that maybe you were just demonstrating ignorance rather than duplicity. I can see now that I was wrong. Go ahead now, have your final say, as I really have nothing more to say to you.
Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: How do we deal with a machine that kills a human?
Why? As you stated, it was a guess; I never stated it as fact. Why are you pretending it wasn't a guess, which by definition doesn't need proof as long as it's not being stated otherwise? Are you really that desperate to say things?
"I can see now that I was wrong."
No, I can still see that you're lying about what I said, though.
Re: Re: Re: Re: Re: Re: Re: Re: Re: How do we deal with a machine that kills a human?
To quote: "Because I'm sure the single death related to an AI driver is far outweighed by the human caused deaths, even after adjusting for driving time"
The fact that you also did not offer up any data to back your opinion still leaves it just as much a "claim" as his was.
Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: How do we deal with a machine that kills a human?
"The fact that you also did not offer up any data to back your opinion still leaves it just as much a "claim" as his was."
Yes, so why is some tosser then demanding proof for a claim that was never made? Opinions don't need proof so long as they're stated as opinions. Which is what the words I typed were. Jesus.
Re: Re: Re: Re: Re: How do we deal with a machine that kills a human?
As we've entered the world of "deploy it quick, we can patch it later," software quality has gone down the crapper. Most embedded systems are riddled with issues, and they're exactly the type of software that should be least prone to such problems (to wit: IoT, including vehicles). There is no reason in the world anyone should trust any software-driven system anymore. Given that vehicles are basically guided cannonballs, we should be especially careful with how they're deployed.
No, I'm not a fan of this tech but I do see it is inevitable. Some day we'll get there but for right now we should not be testing this experimental technology in crowded urban areas.
Re: Re: Re: Re: Re: Re: How do we deal with a machine that kills a human?
"we should not be testing this experimental technology in crowded urban areas"
The single death being discussed happened near a public park while there were few people around. The problem is, the thing that caused the crash was erratic human behaviour, and you're not going to account for that with lab testing. You have to expose your systems to the real world.
Re: How do we deal with a machine that kills a human?
Autonomous vehicles were created by humans, they are not a force of nature.
Bad faith vs. Perfect Storms
When a bridge has a known fatal flaw, when it fails due to shoddy materials, then yes, that can be traced back to a designer or an engineer or a contractor. But if the bridge was designed to withstand a 7.5 earthquake but falls apart when a 9.2 earthquake hits, then there's no human at fault. That was the risk taken when building a bridge that can withstand only so much. (And really, a bridge that could withstand a 9.2 would probably be too expensive to build.)
When the Three Mile Island meltdown occurred, it was determined to have been caused by a perfect storm of component statuses that resulted in system collapse. Maybe a nuclear supergenius might have been able to predict it, but no one had. Newer power reactors have safeguards that will help reduce future complex failures and will definitely prevent a failure like the one at Three Mile Island, but there are an unlimited number of ways a system can fail, and we can only ultimately reduce their probability.
Now, I'm not entirely sure what happened with the autonomous vehicle. Did the car detect Herzberg at all? If not, was it a sensor failure or a processing failure? So far, we don't know. The AI may have been released for field testing prematurely. The driving system may have insufficient redundancy in its sensors. The vehicle's drive train or braking system might have failed. We don't yet know.
But the question is, could someone have prevented it, and failed to do so, either due to neglect, shoddy work or malice? If the answer is yes, then yeah, we have someone we can blame. If not, then the only thing to be done is to learn from this incident, and add safety features that would prevent it from happening again.
Now yes, the last line of defense, the safety driver, wasn't paying attention. But his responsibility is redundant. He is to blame for not catching a failure. He's not to blame for why it failed.
The thing is we build systems bigger than ourselves all the time. A train going at full speed is a force of nature, as is a main electric power plant (nuclear or otherwise). We depend on countless systems that are beyond our control, and it's never clearer when those systems fail.
Re: Bad faith vs. Perfect Storms
Waymo's cars averaged almost 5,600 miles between driver interventions last year. Uber's cars couldn't even meet a target of 13 miles per intervention.
So the safety driver wasn't the redundant "last line of defense." The goal was that some day he might be, but for now he was the non-redundant FIRST line of defense.
The problem of keeping safety drivers attentive in driverless or Tesla Autopilot cars isn't new, either. Nor is the underlying tech: driver attention monitoring systems, using eye tracking and more, have been in production cars for over a decade.
This wasn't a 9.2 earthquake hitting. It wasn't a perfect storm. It was a shoddily designed system, not ready for using an unsuspecting public as test subjects.
Sometimes there is fault. (Just not always)
If the Uber cars required intervention every 13 miles or less, then yeah, it sounds like the safety driver might not have been doing his job. You had more information than I did.
I think the point I want to emphasize is that autonomous vehicle programs or the notion of self-driving cars should not be generally condemned even if the failure in the Herzberg incident turns out to be a bad actor, say a negligent operator or a poorly tested system introduced to the field sooner than due caution should have allowed.
Anonymous Coward was suggesting that whenever a bridge collapses, whenever we have an industrial accident, whenever a train derails, we can always attribute it to wrongdoing by a human being (or a committee acting as a single entity). And I was trying to say it's not always so simple: sometimes systems fail because systems are complex. Heck, the sinking of the Titanic came down to a general weakness in rivets as they were made circa 1912, which no one predicted.
In the specific case of Herzberg's death, then yes, it sounds like there might have been bad actors after all.
Re: Re: Bad faith vs. Perfect Storms
I'm all for punishment if this is shown in court. Until then, what we have is luddites attacking a new technology out of fear, like they have attacked every new technology for centuries.
Re:
Sell more of the device used to kill humans for fear that we won't be able to arbitrarily kill humans with said device if the government outlaws said device?
Re:
Only if you also call normal cars Human Operated Killer Robots. Otherwise, you're being really, really stupid.
"If a human kills another human there is a process to deal with it."
Yes there is, it's called due process, in fact. Something that's been lacking in the attacks on this issue in the public sphere.
"When a dev writes code for a machine that kills someone, why should they have zero culpability?"
Is anyone saying they should have none? But nobody's shown that their code caused the accident. In fact, the video appears to show that a human would not have fared any better. Baying for blood doesn't change the fact that genuine accidents happen.
Re:
She wasn't jaywalking.
I know the police initially used the word, but a lot of what they said turned out to be, to put it politely, inaccurate.
In most places it's only jaywalking to cross mid-block when the intersections at both ends of the block have lights. That wasn't the case here.
Well. Perhaps it can be labelled jaywalking. But not to imply that it was illegal or even wrong.
Re: Re:
I have no idea why the Uber camera showed the area to be as dark as it did.
Re: Re: Re:
You mean by the time it was finally released? Gee, I wonder what could account for that...
Re: Re: Re: Re:
Of course, if the street lights had failed, Uber would be telling us incessantly. That didn't happen. Their video was a lie.
Re: Re: Re: Re: Re: Re: Re: Re:
This makes Lidar useful for detecting, ranging and tracking targets, but requires a camera to identify them, and maybe makes the camera the prime detector of long range targets.
Re: Re:
You're not suggesting that they might have been trying to protect corporate interests, are you? How shocking!
Re: Re:
Arizona Revised Statutes §28-793 says she was within her rights to cross, but it was her duty to make sure it was safe, and not to step in front of a car.
It is, of course, also the duty of the driver to "Exercise due care to avoid colliding with any pedestrian on any roadway."
Basically
You program the AI to respond to "obstacles" like pedestrians the way a human driver would, that is, driving with the assumption that pedestrians can and will get out of the way, probably resulting in the "occasional" fatality.
Or you program the AI to slow or stop for every pedestrian, resulting in a vehicle that takes forever to get anywhere.
Option three is to create autonomous-vehicle-only roads, banning pedestrians from them, basically turning autonomous vehicles into some analogue of a light rail system.
Personally, I'm not sure I would want anything to do with any of those options (there may be others I can't think of). But if autonomous vehicles are to respond like human drivers, we might just as well have human drivers... at least it's easier to find someone to blame in the event of an accident.
I don't see the point of option 2.
And option 3 seems like a step backwards to me; plus, it would be rather expensive to build, and there's no guarantee pedestrians still won't get killed.
I guess I'm just a reactionary who doesn't like ceding control to computers.
Re: Basically
You don't live in Arizona, by any chance?
Re: tl;dr
They do, but we don't actually have many roundabouts here, so it's mostly not an issue.
You're going to have to explain this one to me.
In California, pedestrians have the right of way. They are still required to cross at crosswalks, and being out in the street where you're not supposed to be is considered jaywalking. But people get around without too many delays from obnoxious pedestrians.
In New York, I understand that cars have the right of way on streets and pedestrians are expected to get out of the way. But at the same time I've also heard that drivers eagerly take their vehicles on sidewalks. Maybe pedestrians are saved by the daily traffic congestion reducing city traffic to five miles an hour.
But I'm not sure why pedestrians in California would be less inclined to obey traffic laws when the cars are robot-driven than when they're human-driven. Maybe you have a logic I haven't worked out.
Here in California, pedestrians are not allowed on freeways, which makes sure that long trips are not hindered by pedestrian traffic. Again, maybe it's different on the East Coast.
Some Choices are No Fun For Uber
https://www.techdirt.com/articles/20180322/09215539477/ubers-video-shows-arizona-crash-victim-probably-didnt-cause-crash-human-behind-wheel-not-paying-attention.shtml#c1918
Based on the number of neurons in a human brain, the number of connections per neuron, and the rate at which neurons fire, a defensible performance estimate for the brain is something like 150 petaFLOPS. There are a very few giant supercomputers in the world that fast. They fill up whole warehouses, cost hundreds of millions of dollars, and have power requirements in the tens of thousands of horsepower. None of them are old enough for kindergarten. That is, they simply have not been running long enough to achieve internal organization. No economically foreseeable computer is likely to match a human driver's ability to distinguish pedestrians from scattered garbage, etc.
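For the curious, the back-of-the-envelope arithmetic behind an estimate like that looks something like the following sketch. Every input here is a rough assumption (published neuron counts and firing rates vary by an order of magnitude or more), not a measured constant:

    # Back-of-the-envelope brain throughput estimate.
    # All three inputs are illustrative assumptions.
    neurons = 1e11              # ~100 billion neurons
    synapses_per_neuron = 1e4   # ~10,000 connections each
    firing_rate_hz = 150        # assumed effective update rate

    ops_per_second = neurons * synapses_per_neuron * firing_rate_hz
    print(f"{ops_per_second / 1e15:.0f} petaFLOPS-equivalent")  # ~150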
Some years ago, I was walking through a parking lot, past the entrance of a bowling alley. In the doorway, next to her father, was a little Italian-American girl, perhaps four or five years old, with auburn hair, peach skin, and black doe-eyes. Quite a little darling. Her father was teaching her how to cross the street. He told her to look left and right, and to see if anything was coming. She looked rather doubtfully at me, walking towards them. Her father followed her gaze, and laughed, with a rather apologetic gesture to me: "Oh, not him, honey. He's not an automobile!" Listening to small children, one realizes their sense of the unreality of the world.
That said, certain claims for Artificial Intelligence are, ipso facto, fraudulent. If someone says his system does it the way a human does it, he's lying. If you want to put it that way, Elaine Herzberg was killed so that Uber could perpetrate a fraud on prospective investors. You cannot get in on the ground floor of a government project. Government projects just don't work that way. Uber has an immense stake in convincing the public that self-driving cars will not require special roads, because special roads are the province of the government.
Artificial Intelligence only works in certain extremely reductionist subjects, such as chess. A chess queen is defined in such a way that actual queens such as Queen Semiramis, Queen Cleopatra, Queen Zenobia, Queen Bodicea, Empress Livia, Empress Theodora, Elizabeth of Hungary, Eleanor of Aquitaine, Queen Phillipa (of Hainault), Anne Boleyn (Anne of the Thousand Days), Queen Elizabeth I (Tudor), Mary Queen of Scots, Catherine de Medici (France), Queen Isabella of Castille, Anne of Austria (queen of France), the Russian empresses Elizabeth and Catherine the Great, Marie Antoinette, Queen Victoria, and Tsu Hsi (the Chinese Dowager Empress) are all irrelevant and prejudicial. A chess master is, first and foremost, a master of deliberate forgetting. He can create walls in his mind to exclude irrelevant knowledge. At that level, the sheer mental load of deliberately forgetting is so great that it is much easier to have never known the irrelevant facts in the first place. A chess program is ultimately a triumph of ignorance.
Re: Re: Basically
The way I see it, people are just plain stupid. If you program the AI in such a way that it avoids people being stupid, people will take advantage of that, and self-driving cars will get nowhere fast.
Program the AI to be less accommodating, and I suspect the accident rate will go up to the point that it makes not much difference whether there's an AI or a human driving.
Create AI-only roads, and we're back to railways.
Re: Re: Re: Basically
All the more reason to get them out of the way as much as possible. Not being able to predict the stupid things people will still do despite the tech is not a reason to avoid implementing the tech.
"create AI only roads, and we're back to railways."
Which, with the added bonuses of not having to fit around a predetermined schedule and being able to choose the start/end points of the journey, would be absolutely fine for the needs of most people.
Rail systems work pretty well as mass transit systems through most of the world. In the US, the problems with them are often due to lower population density and entire cities being designed with the assumption everyone will have a car. One of the main problems getting people to switch to public transport is the travel between them and the station/hub they need to use to get to it, the other is trying to fit around a predetermined timetable. If you can use special roads and get the travel done door-to-door and at the time the traveller themselves wishes, that's more than sufficient for most commuter traffic.
There's still use cases where the use of a standard vehicle might be preferable, but if all you're trying to do is get from A to B and the traffic flow can be reliably optimised, then a "railway" might be all people actually need.
Re: Re: Re: Re: Basically
But yeah, it would probably be a better way to utilise AI vehicles.
Re: Re: Re: Re: Re: Basically
Not completely, but a lot of deaths will still be prevented. One of the silliest arguments you can make on this issue is that it shouldn't be attempted because the results won't be perfect. No technology is perfect, but the results will still be there to see even when mistakes happen.
Re: Re: Re: Re: Re: Re: Basically
1. An automobile drives onto a transporter vehicle, parks, and thereby becomes baggage. Any old automobile will do; you don't need a Tesla.
2. The transporter vehicle carries the automobile to a location close to the final destination.
3. The automobile drives off the transporter vehicle and proceeds to its final destination, subject to a speed limit low enough that it can be safe.
The transporter vehicles are specific to one improved road system. They do not present compatibility problems. New York and Los Angeles do not have to use the same kinds of vehicle transporters. The transporter vehicle systems can grow organically, adding one route at a time, and taking over one additional lane at a time.
An automobile can drive off one transporter vehicle and onto another, just as a railroad passenger changes trains at a station.
For a long time, there have been "auto trains" of two basic kinds. One is for carrying automobiles through railroad tunnels, e.g. the English Channel tunnel and the various Alpine tunnels. The passengers drive their cars onto the train, and stay in the cars. Road tunnels are much harder to build than railroad tunnels, and the road tunnel under a given Alpine pass was typically built fifty or a hundred years after the rail tunnel.
The other kind of auto-train is for going long distances, typically from the winter to the sun. The American Auto-Train runs from Northern Virginia to Orlando, Florida, a matter of eight hundred and fifty miles, and I understand there are similar services from Germany to Italy. Passengers have their cars loaded onto "auto-racks," and then move into sleeper cars for an overnight journey. The French system is different: passengers check their automobiles as baggage, but then take a high-speed train, at anything up to two hundred miles an hour, and stay in hotels until their automobiles, hauled on ordinary trains, catch up with them.
BUT, this was a self-driving UBER car. A car with LIDAR!!! Do you know what this means? It's like radar, and it paints a 360-degree picture of everything around it, and it works just as well in the DARK as it does during the day!!!
This person was crossing the street left to right. This person was also pushing a BIKE. It's a BIG target. There was no jumping out in front of the car. The person wasn't hidden where the LIDAR couldn't see.
The person was dumb enough to walk right in front of a car in the dark, expecting it to stop. That's natural selection at work. On the other hand, that UBER car should have seen that person, further back in the dark, and stopped with more than enough time to not hit the idiot jaywalker!!!
If anything, this should have been a perfect example of how self-driving cars are better than a human, and yet it completely FAILED. The car didn't stop. It didn't swerve. It didn't slow down. It was like the person just wasn't there.
Remember, a pedestrian ALWAYS has the right of way over a car, even IF they are jaywalking. You take the risks that go along with that also. Just because someone is jaywalking doesn't mean you can run them down, trying to score points.
The UBER car completely FAILED. It clearly wasn't ready for primetime.
Re:
Thanks, I'll keep that tip in mind for the next time I'm forced to drive through inner city ghetto streets (which I tend to avoid like the plague especially at night) where people habitually walk out in front of moving cars for reasons that I've yet to understand.
Re: Re:
Anybody stupid enough to get in my way deserves what they get.
Re:
"The person was dumb enough to walk right in front of a car in the dark, expecting it to stop. That's natural selection at work."
That might sound pretty harsh, but in all the talk about this incident there has been very little said about the fact that the car would've been just as visible to the pedestrian as the pedestrian supposedly should've been to the car. And on the road, the car wins. She deserves just as much blame as Uber does.
Re: tl;dr
I...can't speak for where you live, but I can state with some confidence that in Tempe, most pedestrians are not equipped with LIDAR.
Pedestrian in the headlights.
I'm still wondering about that part. Herzberg is unfortunately not alive to ask if she saw the approaching Uber car.
BART trains in the Bay Area have LIDAR to monitor the tracks and brake if an obstacle is detected, and this system is (by magnitudes) faster than the reflexes of train operators. Still, occasionally we'll get a suicide who leaps in front of a BART train.
Re: Re: Pedestrian in the headlights.
BART is light rail, and has its own proprietary standards. Still, they're trains, and they can't stop on a dime. But BART is pretty proud of how difficult it is to get run over by a train when one isn't willfully trying. I think it's less about braking power than about early and thorough detection.
Correctly
But it's still important to remember that human-piloted counterparts cause 33,000 fatalities annually, a number that should be dramatically lower when self-driving car technology is inevitably implemented (correctly).
The keyword is correctly.
If the collision-avoidance system that Uber disabled were present and working on all those human-driven cars, that fatality figure would already drop dramatically.
The point is this: a completely autonomous car on the public roads, given current technology levels, is nothing other than a publicity stunt.
To run such a publicity stunt at present is stupid and selfish, and can only delay the technology (which will of course cost lives).
Don't get me wrong here. I fully support the use of technology to fix our road death problems - I just think that the way that Uber, Google etc are going about it is wrong.
At present we should concentrate on systems that monitor the human driver and intervene to prevent accidents. (As others have pointed out, this IS happening anyway.) Once these technologies are fully developed and universal, we can move on to fully autonomous vehicles.
It appears this ban affects only Uber. Waymo's cars averaged almost 5,600 miles between driver interventions last year; Uber's cars couldn't meet a target of 13 miles per intervention.
Uber is losing billions of dollars a year, but got $3.5 billion from Saudi Arabia's Public Investment Fund.
Arizona didn't ban self-driving cars; they banned a method of extracting money from Saudis.
Re: Citation needed. -- Not from The GOOGLE, either.
So you're told! But your only "evidence" is from the entity in question.
Re: Re: Citation needed. -- Not from The GOOGLE, either.
Waymo via NYT.
Beyond that, I won't accept that you're genuinely skeptical merely because the only "evidence" comes from the entity in question.
Ducey didn't do anything except election-year posturing. He banned Uber's AVs after Uber pulled them off the roads.
There's an ongoing investigation. One of two things is going to happen: the investigation will conclude that Uber was at fault, or it won't.
In the latter case, Ducey will rescind the ban. In the former, he could have waited to institute the ban until after the results were released.
Announcing a ban before the end of the investigation, while the cars are already off the roads, accomplishes nothing except to make Ducey look like he's doing something about this.
Re:
It accomplishes keeping them off the roads. Sorry for your investment loss.
Re: Re:
If they weren't going to go back on the roads until after the conclusion of the investigation anyway, then announcing a ban before the end of the investigation accomplishes no such thing, as compared with waiting to announce one (or not) after the investigation ends.
Unless they could interface it to their own system, that makes perfect sense, as two computers fighting for control of the vehicle is a recipe for disaster. Does Volvo publish an API, or is their technology a trade secret?
Software to challenge and monitor human back-ups?
Similar software could be designed for human train operators in rural areas. Recent gruesome train accidents mostly seem to be due to operator inattention, which is not surprising when one considers how completely the details of operating a train are now automated.
Re: Software to challenge and monitor human back-ups?
It seems like a second safety driver would help significantly. That would, of course, double the cost of drivers and reduce the maximum number of paying passengers by one per car; it would be a significant expense. And it's not clear that it would have helped in this case. But if I were making recommendations, that one would be up there.
How about just ban cars?
(non) driver alertness
This is purely anecdotal, but it certainly seems that a person riding as a front-seat passenger is much, MUCH more likely to doze off during a long trip than when that person is driving. As a self-driving car tends to make every human "driver" into basically a passenger, a big question is how much this might quantifiably affect that person's alertness, especially when lulled over time into trusting the computer to drive safely, and thus relaxing.
Until the self-driving car is perfected and has no need for a human backup driver, one solution might be to see what can be done both to monitor the "driver" more effectively and to provide some sort of suitable mental stimulation to make up for the loss of driving sensation that would otherwise keep a hands-on driver alert.
Many more manual cars
Like 4 or 5 orders of magnitude more. Saying "more regular cars kill people than self-driving cars" means absolutely NOTHING.
If anything, I've even seen someone "do the math" and argue that if everyone had one of those Uber cars, there would be something like 130x more accidents.
I don't know how good that math was, but the point is that these self-driving cars, and especially Uber's self-driving cars, may still be WAY too unreliable, and yes, even more unreliable than the "average human driver."
But of course self-driving cars need to be way better than average human drivers. If you think people are just going to accept deaths from self-driving cars at a rate ANYWHERE CLOSE to the rate of regular car deaths, then you're nuts. The self-driving car killings need to be WAY WAY WAY smaller. No question about it.
Re: Many more manual cars
That's meaningless unless you compare those figures to the number of accidents that actually happen with humans. Did you do that? Every comparison I've ever seen suggests the figure will still be much less than we have now.
"The self-driving car killings needs to be WAY WAY WAY smaller."
There's still just the one. Run a comparison of that against the number of deaths on the road every single day, and see which figure is larger. Many, many more people have been killed and injured during the freakout about this single death than would have been if such vehicles were commonplace. Feel free to prove me wrong if you like, but I've not seen any convincing evidence as yet.
3 things that go together
2. The Internet Of Things
3. Self driving cars
Trusting our lives to algorithms
The number of algorithms to which we trust lives is astounding. Some of the big ones are in oil tankers, train systems and power reactors.
And generally they're way better than humans at preventing system failures.
Re: Re: Trusting our lives to algorithms
The flight certified software used on commercial airliners receives much more scrutiny during code review, integration, test and subsequent field trials.
The uncertified driverless vehicle software receives ???
Modern airliners are capable of landing autonomously, but airports and regulatory agencies are not ready to let them do so, I guess. Also, the shuttle is/was capable, and did.
Certified vs. uncertified software
There was a point when flight-certified software was not yet so reviewed, integrated, scrutinized, etc. There was a point when it was installed on a plane for the first time.
Driverless vehicle software is in various stages. Some of it, meant to assist drivers rather than replace them, is installed in production vehicles, such as the Mercedes autonomous cruise control system and Tesla Autopilot.
The same goes for the various algorithmic systems that keep trains, power plants and industrial parks from exploding.
(Fun trivia: engineers originally ran steam engines, which were complicated contraptions with a propensity for exploding spectacularly and catastrophically, often killing the engineer and anyone in proximity. Engines got better as we added mechanical, and later electronic, regulators to prevent some kinds of catastrophic system failure, to the point that these days we only need a train operator rather than someone with sophisticated engineering training.)
Re: Re:
Bad self-driving algorithms can be changed with a software update and can be applied to every car of its kind on the road after just one of them gets in an accident.
Bad human driving behaviours are much, much harder to fix. There are people who actively ignore good advice and resist training, plus you have to reeducate each individual driver one at a time. Also, blowjobs and cash bribes can be used in exchange for passing your test with merit.
Your car will not accept a blowjob or an envelope full of money to ignore a software update. Machines have no morals to corrupt and learn immediately after new data has been uploaded to them. They are much more trustworthy than human beings when it comes to keeping our roads safe.
Face it, meatbags, you suck at driving. Hand over your keys -- the machines are more sober, smarter and safer than you.
Re: Re: Re:
One security snafu, and a malicious update can be applied to every car of its kind on the road simultaneously. And we all know how good car makers are at security.
Car security
I suspect that, between failed proprietary device security and state-mandated backdoors, that's how we'll get right-to-jailbreak (a la right-to-repair), and we'll see an uptick in open-source offerings.
I suspect that once someone willfully murders someone else using an IoT exploit (say, to make a robotic insulin dispenser overdose a diabetic victim), our fear of cyber-assassins will become greater than our fear of rebellious or buggy robots.
That was the point of a recent XKCD post.
overstating your case
- 4 of the fatalities the same week were due to drivers jumping the curb and hitting someone: an impaired driver killed 1 pedestrian, and a distracted driver in an SUV hopped a curb and hit 4 pedestrians, killing 3.
That's not an engineering question, as those curbs have been there quite some time without being "hopped." It's plain, simple human stupidity and violation of existing laws.
- 2 more were pedestrians jaywalking in the middle of the respective blocks where they were struck. One of those drivers was impaired.
- 1 woman was hit by a guy that lost control of his car in a parking lot, straying into the street at the bus stop where she was walking.
- 1 man was killed crossing in the crosswalk, but the driver tested negative for impairment.
- And another 1 was a woman walking in the middle of the road at midnight.
- 1 was a woman in the crosswalk, and the driver fled the scene but was caught later.
Every one of these accidents was in a city: Phoenix, Scottsdale, Tempe. They were all downtown, and at first glance, either the driver or the pedestrian in each incident was distracted or DUI. There are plenty of crosswalks in each area.
Re: overstating your case
"4 of the fatalities the same week were due to driver error"
OK.
"2 more were pedestrians jaywalking in the middle of the respective blocks where they were struck. One of those drivers was impaired. "
So, at least one was partially due to driver error. What were the circumstances of the other one? You have a fixation on impairment, but what were the other circumstances? Speeding, not driving correctly for the weather conditions, any other issues? There's possible fault there too, even if they weren't a DUI.
"1 woman was hit by a guy that lost control of his car in a parking lot"
So... driver error. Why did you not count that with the 4 above?
"1 man was killed crossing in the walk, but the driver tested negative for impairment."
OK, I'll take that one.
"And another 1 was a woman walking in the middle of the road at midnight."
I'd need more info on that one: was it dark? Were there no lights? If not, then how was that not driver error? So, 6 or 7 rather than the 4 you initially claimed.
"1 was a woman in the crosswalk, and the driver fled the scene but was caught later."
OK, so quite likely driver error as the natural reaction of someone not at fault, impaired or otherwise driving illegally is not to flee the scene.
So, the case really wasn't overstated, unless you left out some vital details in your rebuttal. Even within your own counterargument, 3/4 of the drivers were clearly at fault.
Re: Re: overstating your case
I think that the initial group of four was not supposed to be the exhaustive set of the cases that were due to "driver error", but rather the cases that were due to "the form of driver error which involves the driver jumping the curb and hitting someone". It could certainly have been phrased more clearly, but I think the intent is visible.
Re: Re: Re: overstating your case
[ link to this | view in chronology ]
Re: Re: Re: Re: overstating your case
But if the initial group is "fatalities from jumping the curb due to driver error", then the others don't belong in the same group.
I'm not disputing that the group definition wasn't all that clearly conveyed, nor that, in order to avoid being misleading, it probably should have been expressed more clearly, but I still see it as being present.
While I can see where you arrive at that assessment of intent, I do not see that intent as being apparent in the way that you apparently do.
[ link to this | view in chronology ]
This is an apples-to-oranges comparison, and it's wrong
These raw numbers mean nothing. If you want to compare fatality rates, then use either "pedestrian fatalities per operator-hour" or "pedestrian fatalities per vehicle-mile".
Using operator-hours as a metric normalizes over the accumulated time an ensemble of vehicles was in use. Using vehicle-miles normalizes over the accumulated distance that an ensemble of vehicles travels. Of course, the more hours that vehicles are in use and the more miles that they travel, the more opportunity they have to be involved in accidents, including but not limited to pedestrian fatalities. [1]
To provide a hypothetical example: if 5,000 vehicles were operated for exactly 1 hour each, that's 5,000 operator-hours, and if there were 10 pedestrian fatalities associated with those 5,000 vehicles, then that's a rate of 0.002 fatalities/operator-hour. (Similarly for vehicle-miles.) And if -- during the same time period -- 1 self-driving vehicle was operated for 10 hours with 1 associated pedestrian fatality, then that's a rate of 0.1, which is 50 times higher.
The real numbers are far more skewed than this example, of course; Phoenix has a population of roughly 1.5M. If only 10% of those people drive for only 20 minutes (during the time period in question), that's 50,000 operator-hours. And as large as that is, it's still too low to be realistic: consider all the vehicles that are operated all day long (cabs, buses, trucks, delivery cars/vans/trucks, police cars, etc.) and consider the impact of twice-a-day commuting on the aggregate total. I wouldn't be surprised at all if the normalized pedestrian fatality rate per operator-hour for human-driven vehicles were a ten-thousandth of that for driverless vehicles, or much less. (And the same goes for vehicle-miles, although obviously the numbers would be calculated differently.)
Feel free to use your own back-of-the-envelope estimates for these. AAA has published the figure of 17,600 minutes/year as an estimate for all drivers; that's 338 minutes/week, or 5.6 hours/week -- a lot higher than the 20 minutes I used above. The Car Insurance Institute estimates about 13,500 miles/year per driver, or about 260 miles/week. Obviously these vary by state and city, but I'm sure the actuaries who do this for a living have solid estimates for Phoenix. However you do the calculation, you'll find that in the Phoenix area, the pedestrian fatality rate for driverless vehicles is several to many orders of magnitude higher than that for human-driven ones.
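To make the arithmetic concrete, here is a minimal sketch in Python, using only the hypothetical figures from this comment (illustrative numbers, not real data):

# Normalizing pedestrian fatalities by exposure (operator-hours).
# The same function works for vehicle-miles if you pass miles instead.

def fatality_rate(fatalities, exposure):
    return fatalities / exposure

# Hypothetical human-driven fleet: 5,000 vehicles x 1 hour each, 10 deaths.
human_rate = fatality_rate(10, 5000 * 1)      # 0.002 per operator-hour

# Hypothetical self-driving fleet: 1 vehicle x 10 hours, 1 death.
driverless_rate = fatality_rate(1, 1 * 10)    # 0.1 per operator-hour

print(driverless_rate / human_rate)           # 50.0 -- 50 times higher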
[1] Obviously the kind of accidents they're likely to be involved in varies with where the vehicles are. Pedestrian fatalities are more likely to happen on streets and less likely to happen on highways. On the other hand, high-speed collisions are more likely to happen on highways and less likely to happen in urban centers. However, calculating this based on an ensemble of vehicles which encompasses the entire area (downtown, city, suburbs, exurbs, etc.) and over a sufficient period of time (much more than a single day, in order to account for commuting/non-commuting days) smooths out the variations enough to yield useful metrics that are applicable to the entire region.
[ link to this | view in chronology ]
Re: This is an apples-to-oranges comparison, and it's wrong
That's a reasonable metric, but the problem is that the miles travelled by automated vehicles are still so low that a single accident can heavily skew things, while the miles travelled by traditional vehicles are so high that dozens of fatalities barely register a blip. Extrapolating from such a small dataset is going to give you bad results.
https://xkcd.com/605/
"Feel free to use your own back-of-the-envelope estimates for these."
I'd rather see some real figures, but even with the caveat above those seem to be lacking. I'd rather people with any power be basing things on what's actually happening, not random guesswork.
"However you do the calculation, you'll find that in the Phoenix area, the pedestrian fatality rate for driverless vehicles is several to many orders of magnitude higher than that for human-driven ones."
Yep, and there's still only one of those. I'm sure the families of the other dead will be pleased that their deaths have been reduced to an even more meaningless statistic by those afraid of automation than they would have been had they happened at any other time. I also somehow doubt you'd have been so concerned about the death rate before this one happened, since that would have argued the opposite point for you.
[ link to this | view in chronology ]
Re: Re: This is an apples-to-oranges comparison, and it's wrong
Agreed. It would probably be better to use statistics at the national level in order to better represent all driverless vehicles, but that still leaves the problem of the massive difference in scale between the two sets of statistics.
"I'd rather see some real figures [...]"
I'm working on getting those. I'm curious to see what they are as well. Of course, for a fair comparison, we'd also need figures on operator-hours and vehicle-miles for the driverless vehicles too. However, because there aren't many, we could deliberately overestimate those (e.g. 168 hours/week/vehicle, which is the theoretical maximum) and then see what those calculations tell us.
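As a sketch of that deliberate overestimate (168 hours/week is just 24 x 7; the fleet size and service period below are placeholders I've assumed, not figures from this thread):

# Deliberately overestimate driverless exposure by assuming every
# vehicle runs 24/7. Overestimating exposure pushes the computed rate
# DOWN, i.e. it stacks the deck against my own argument.
MAX_HOURS_PER_WEEK = 24 * 7    # 168, the theoretical maximum

fleet_size = 600               # placeholder (assumed)
weeks_in_service = 52          # placeholder (assumed)
fatalities = 1

max_operator_hours = fleet_size * MAX_HOURS_PER_WEEK * weeks_in_service
print(fatalities / max_operator_hours)   # a lower bound on the true rate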
"I also somehow doubt you'd have been so concerned about the death rate before this one happened, since that would have argued the opposite point for you."
I've been arguing against driverless vehicles for a long time. I commented here on this specific point because the citation of the death rate is being used to suggest that driverless vehicles are safer. Personally, I think it would be better to compare all accidents (that is, fatal and non-fatal, pedestrian and non-pedestrian) in order to use larger data sets and perhaps gain better insight. But it should be clear to everyone that using raw numbers without normalization is just wrong.
[ link to this | view in chronology ]
Re: Re: Re: This is an apples-to-oranges comparison, and it's wrong
Yes, which makes it useless for a direct comparison at the moment, unless you want to push the more scary-sounding ratio that this single death provides.
"However, because there aren't many, we could deliberately overestimate those (e.g. 168 hours/week/vehicle, which is the theoretical maximum) and then see what those calculations tell us."
Again, I'd rather get some valid data rather than try to randomly generate figures that will by nature be both fictional and skewed toward whatever the person guessing wants to prove.
"I've been arguing against driverless vehicles for a long time."
I've yet to hear a valid reason, apart from "I don't trust them". Which is fine, but I trust human drivers less. It's highly subjective without any figures for hard proof, which means we're both just stating an opinion. My opinion is that I'd rather have these out on the roads than the type of people I have to deal with every day on my commute.
"I commented here on this specific point because the citation of the death rate is being used to suggest that driverless vehicles are safer"
That's because until overall figures are provided that reliably show otherwise, the data proves that they are. We are literally talking so much about this accident because it's the only one that's ever happened to this point. A few weeks ago, nobody had ever died in such an accident, and the tally for most manufacturers is still zero. Everybody's scrambling to try and prevent the next one, at Uber, at their competitors and in the public sector. The other people who died that weekend will barely make a blip on traffic statistics and will largely be counted as simply the cost of people having private vehicles.
Again, I agree that the figures are skewed both ways, but there is nothing to show that automated vehicles are either more likely to crash or more likely to cause harm when they do. In fact, what we know so far indicates they are less likely, and we're still at the prototype stage (meaning you'd expect more accidents at this stage). By nature, the technology and its safety will improve before these vehicles go into mass production. Until something shows the above assumption wrong, I'm going to go with what we know, and that is that they have a decent safety record thus far and nothing indicates that it will worsen.
[ link to this | view in chronology ]
Re: Re: Re: Re: This is an apples-to-oranges comparison, and it's wrong
Do note that using the theoretical maximum I suggested stacks the deck *against* my point. I did so deliberately, to avoid skewing the numbers in favor of my argument.
"I'm yet to hear a valid reason, apart from "I don't trust them""
I've provided some in previous commentary here, and I've referenced others. I'm overdue to write a long-form piece laying out some of them -- and there are plenty. One of my principal concerns is that driverless vehicles aren't special -- they're just another thing in the IoT, and the entire IoT is an enormous, raging dumpster fire of security and privacy failures. There are ZERO reasons to think that cars will be any better than toasters, and a lot of reasons to think that they'll be worse.
I'll publish it when I have the time so that the arguments are laid out more clearly for analysis/critique. If you want to see a draft version, drop me an email and I'll send you what I have so far.
[ link to this | view in chronology ]
Re: Re: Re: Re: Re: This is an apples-to-oranges comparison, and it's wrong
Yes, but most would not be so generous.
"There are ZERO reasons to think that cars will be any better than toasters, and a lot of reasons to think that they'll be worse."
But will they still be better than drunk, distracted, recklessly driving human beings? I believe they will. If so, this is a reason to exercise caution, not to remove the tech.
I look forward to a detailed overview, but my general thinking is very simple - while the tech has many issues that need to be addressed, it will be an improvement. I hope an analysis will cover these and explain why you think it won't be a net benefit, but all I'm seeing so far is a lack of trust in the tech.
You're right to be cautious. I just think we need something more as a reason not to explore this kind of tech, which I do believe will be a net benefit once it matures.
[ link to this | view in chronology ]
Re: This is an apples-to-oranges comparison, and it's wrong
Last I checked in statistics class, a sample of one is useless. And every time we have a Hurricane Sandy (or even the fucked-up season that was 2017), our climatologists, in their intellectual honesty, have to admit that it's a single data point drawn from a probability distribution.
We have to face the truth of a really huge error margin: autonomous cars may have been super lucky so far, or Herzberg got super unlucky and driverless cars are safe as houses. The actual probability is somewhere in there, not necessarily near the midpoint.
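One way to put a number on that error margin (my sketch, not something from the thread; it assumes scipy is available): the exact Poisson confidence interval around a single observed fatality.

# 95% exact (Garwood) confidence interval for a Poisson count of k = 1,
# i.e. the single observed driverless-car fatality.
from scipy.stats import chi2

k = 1                                              # observed fatalities
alpha = 0.05
lower = chi2.ppf(alpha / 2, 2 * k) / 2             # ~0.025
upper = chi2.ppf(1 - alpha / 2, 2 * (k + 1)) / 2   # ~5.57
print(lower, upper)

With one observed death, the expected count over the same exposure could plausibly be anywhere from about 0.03 to about 5.6 -- a spread of more than 200x, which is exactly the "super lucky" to "safe as houses" range described above.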
[ link to this | view in chronology ]
Re: Re: This is an apples-to-oranges comparison, and it's wrong
Two responses to that.
First, if we accept that statement, then it is useless in support of the claim "driverless cars are more safe" and equally useless in support of the claim "driverless cars are less safe".
Second, that's why I suggested approaches that (a) normalize and (b) use many more data points. If -- and I'm fabricating these numbers to illustrate -- driverless cars have been on the road for 4000 hours in Phoenix, then we have substantially more than one data point about them. Of course while that was happening, human-driven cars might have been on the road for 315,000 hours, so we still have the problem posed by the enormous disparity in the raw numbers. But at least we're past the problem of a singular data point.
What we need to better understand this are the real numbers for both human-driven and driverless cars. I'm working on the former at the moment.
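To see why the disparity still matters after normalizing, here is a small sketch (only the hour figures come from the fabricated example above; the fatality counts are my own placeholders): a Poisson rate's relative uncertainty shrinks like 1/sqrt(events), so the fleet with many events gets a tight estimate while the fleet with one event does not.

# Rate estimate and standard error for a Poisson count over a fixed
# exposure; relative uncertainty is 1/sqrt(fatalities).
import math

def rate_and_se(fatalities, hours):
    rate = fatalities / hours
    se = math.sqrt(fatalities) / hours
    return rate, se

# Human-driven: placeholder 10 fatalities over the fabricated 315,000 hours.
print(rate_and_se(10, 315_000))   # ~3.2e-05 +/- ~1.0e-05 (about 32% error)

# Driverless: the 1 fatality over the fabricated 4,000 hours.
print(rate_and_se(1, 4_000))      # 2.5e-04 +/- 2.5e-04 (100% error)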
[ link to this | view in chronology ]
Supporting the claim "driverless cars are more / less safe..."
The problem is that you're still working with an anomaly. Even if you normalize deaths to miles (miles in which someone died in contrast to miles in which someone did not die), you're dealing with a data set of one incident.
All that tells us is that deaths by autonomous cars are not impossible.
[ link to this | view in chronology ]
Re: Re: Re: This is an apples-to-oranges comparison, and it's wrong
Yes. So, we ignore that single data point. What do the rest of the statistics show once that outlier has been removed? One claim is supported more than the other, I'm sure you'll find, although we do still need more data to be confident. The only way to get that data is to continue public testing.
Besides, data IS available on hours travelled, accidents, incidents requiring human intervention, etc. I'm not sure how complete it is in the public record, but there's certainly more than one data point available surrounding other activities.
The only thing that is a single data point, and the one people are freaking out about, is the single death that's ever happened in a collision involving one of these vehicles. So, we shouldn't be treating that as the all-important issue.
[ link to this | view in chronology ]
Re: Re: This is an apples-to-oranges comparison, and it's wrong
Is it really "one" here? What about all the times a self-driving car did stop? Or would stop—we don't need to do human testing here, we can feed sensor data to algorithms to see what they'd do.
The companies should have logs of these things, from real-world tests, closed-track tests, and simulations. Too bad we don't have anything to compare against; no human driver reports that they almost hit someone.
[ link to this | view in chronology ]
Re: tl;dr
You've got the right idea, but this isn't even a sampling of one, it's an entire dataset of one.
[ link to this | view in chronology ]
Tim Lee calculated "that Uber's cars have killed people at roughly 25 times the rate of a typical human-driven car in the United States".
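For what it's worth, a figure in that range falls out of a simple per-mile comparison. Both inputs below are my own rough assumptions for illustration (not Tim Lee's actual numbers): a few million cumulative autonomous test miles for Uber, and the approximate overall US rate of about 1.2 traffic deaths per 100 million vehicle-miles.

# Rough reconstruction of a "roughly 25x" per-mile comparison.
uber_autonomous_miles = 3_000_000      # assumed cumulative test mileage
uber_fatalities = 1

us_rate = 1.2 / 100_000_000            # approx. US deaths per vehicle-mile

uber_rate = uber_fatalities / uber_autonomous_miles
print(uber_rate / us_rate)             # ~28 -- the same ballpark as "25x"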
[ link to this | view in chronology ]
Techdirt Big Tech bias
- ~2,400,000 cars (statista.com)
- 962 automotive deaths (azdot.gov)
- 600 autonomous cars (azcentral.com)
- 1 death in an autonomous vehicle accident.
So obviously autonomous vehicles are much more dangerous, with 1 death per 600 vehicles compared to 1 death per ~2,500 normal vehicles.
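For anyone checking, the arithmetic behind those per-vehicle figures (with the caveat, raised in the replies below, that one fatality in a 600-vehicle fleet is a single data point):

# Per-vehicle death rates from the figures quoted above.
conventional_cars = 2_400_000
conventional_deaths = 962
autonomous_cars = 600
autonomous_deaths = 1    # a single data point -- see the replies below

print(conventional_cars / conventional_deaths)   # ~2,495 cars per death
print(autonomous_cars / autonomous_deaths)       # 600 cars per death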
[ link to this | view in chronology ]
Re: Techdirt Big Tech bias
No, he isn't. He's just not wilfully misrepresenting them like you assholes.
Your desperation is clear, but your conscience must sting a little? Lying to try and suppress a technology that stands to save many lives in favour of a technology that kills thousands each month? Just a little bit, surely?
[ link to this | view in chronology ]
Re: Re: Techdirt Big Tech bias
Granted, we don't have nearly enough data on self-driven cars to draw any solid conclusions, but the above isn't misrepresentation, just incomplete data.
[ link to this | view in chronology ]
Re: Re: Re: Techdirt Big Tech bias
...but there's no evidence that it would hold. You are misrepresenting the data if you don't admit that you're extrapolating a single data point, which is a dishonest thing to do and will not give you anything approaching a valid prediction.
"the above isn't misrepresentation, just incomplete data."
If you present it without some caveat about your guesswork, then you're just making shit up. It's complete misrepresentation without you providing the context.
[ link to this | view in chronology ]
Re: Re: Techdirt Big Tech bias
And I think anybody who's trying to extrapolate a ratio of deaths from autonomous cars versus manually-driven ones at this stage, with a single data point, is either being intentionally disingenuous or does not understand how statistics works. Whichever argument they're making -- "autonomous cars are safer" or "human drivers are safer" -- they simply don't have the data to back that claim up.
I've had my disagreements with Rich Kulawiec on the subject of AVs, but he's absolutely right that if we want to draw any conclusions about the relative safety of AVs and traditional drivers, it would make a lot more sense to compare all the accidents we have data for, as "all accidents" make for a more reliable data set than "fatalities" (which is not a data set at all, it's a datum, singular).
I do think Karl's right that the number of pedestrian deaths in the Phoenix area suggests some serious city design problems as well. (I'd also be interested in seeing a breakdown by month. I suspect that people drive more aggressively when it's hot.)
[ link to this | view in chronology ]
Re:
We could all be riding small pods on tracks, or subway-like hyperloops. Maybe horses and bicycles come back in a major way.
[ link to this | view in chronology ]
Bicycles
Madeleine L'Engle predicted the power-assisted bicycle in A Wrinkle in Time: not a moped or motorized bicycle, but one whose propulsion engine was small enough that it could be lugged around as needed.
They've appeared in the last decade and are still early-adopter expensive (and short range).
Thanks to XKCD, I'm fantasizing about computer-assisted autogyros. (Airplanes got started, like cars, from bicycles, which in turn got started with horses.)
[ link to this | view in chronology ]
I would personally love to see a series of articles on Techdirt of the ways in which self-driving cars will change the way we interact (or don't interact) with the world, and the technology's implications for our personal freedoms.
For instance, the privacy implications of self-driving cars, and how a future in which humans are straight-up banned from driving in most countries could be detrimental to civil liberties. The tech is basically a repressive government's wet dream.
There are also questions to be asked about whether we'll be able to own our own AVs, or whether all vehicles will be owned and operated by tech companies and car-companies-turned-tech-companies. If we do own our own AVs, how much would we be allowed to tinker with them or fix them ourselves? Or will they be locked down with restrictive DRM that mandates a visit to an authorized dealer?
These are issues where the "The sooner humans are out from behind the wheel, the better" crowd and folks like the EFF who support tech-based freedoms (and more freedom in general) would butt heads. Articles about said issues would make for interesting reads and comment threads.
[ link to this | view in chronology ]