Pedestrian Deaths By Car In Phoenix Area Last Week: 11. But One Was By A Self-Driving Uber
from the I-can't-do-that,-Dave dept
Despite worries about the reliability and safety of self-driving vehicles, the millions of test miles driven so far have repeatedly shown self-driving cars to be significantly safer than their human-piloted counterparts. Yet whenever accidents (or near misses) occur, they tend to be blown completely out of proportion by those terrified of (or financially disrupted by) an automated future.
So it will be interesting to watch the reaction to news that a self-driving Uber vehicle was, unfortunately, the first to be involved in a fatality over the weekend in Tempe, Arizona:
A self-driving Uber SUV struck and killed a pedestrian in Tempe, Arizona, Sunday night, according to the Tempe police. The department is investigating the crash. A driver was behind the wheel at the time, the police said.
"The vehicle involved is one of Uber's self-driving vehicles," the Tempe police said in a statement. "It was in autonomous mode at the time of the collision, with a vehicle operator behind the wheel."
Uber, for its part, says it's working with Tempe law enforcement to understand what went wrong in this instance:
Our hearts go out to the victim’s family. We’re fully cooperating with @TempePolice and local authorities as they investigate this incident.
— Uber Comms (@Uber_Comms) March 19, 2018
Bloomberg also notes that Uber has suspended its self-driving car program nationwide until it can identify what exactly went wrong. The National Transportation Safety Board is also opening an investigation into the death and is sending a small team of investigators to Tempe.
We've noted for years now how, despite a lot of breathless hand-wringing, self-driving car technology (even in its beta form) has proven to be remarkably safe. Millions of AI driver miles have been logged already by Google, Volvo, Uber and others, with only a few major accidents. When accidents do occur, they most frequently involve human beings getting confused when a robot-driven vehicle actually follows the law. Google has noted repeatedly that the most common accidents it sees are drivers rear-ending its AI vehicles because they actually stopped before turning right on red.
And while there are some caveats to this data (such as the fact that many of these miles are logged with drivers grabbing the wheel when needed), self-driving cars have so far proven to be far safer than even many advocates projected. We've not even gotten close to the well-hyped "trolley problem," and engineers have argued that if we do, somebody has already screwed up in the design and development process.
It's also worth reiterating that early data continues to strongly indicate that self-driving cars will be notably safer than their human-piloted counterparts, who cause 33,000 fatalities annually (usually because they were drunk or distracted by their phones). It's also worth noting that 10 pedestrians have been killed by human drivers in the Phoenix area (including Tempe) in the last week alone, and Arizona had the highest rate of pedestrian fatalities in the country last year. And it's getting worse: Arizona pedestrian deaths rose from 197 in 2016 to 224 in 2017.
We'll have to see what the investigation reveals, but hopefully the tech press will view Arizona's problem in context before writing up their inevitable hyperventilating hot takes. Ditto for lawmakers eager to justify over-regulating the emerging self-driving car industry at the behest of taxi unions or other disrupted legacy sectors. If we are going to worry about something, those calories might be better spent on shoring up the abysmal security and privacy standards in the auto industry before automating everything under the sun.
Thank you for reading this Techdirt post. With so many things competing for everyone’s attention these days, we really appreciate you giving us your time. We work hard every day to put quality content out there for our community.
Techdirt is one of the few remaining truly independent media outlets. We do not have a giant corporation behind us, and we rely heavily on our community to support us, in an age when advertisers are increasingly uninterested in sponsoring small, independent sites — especially a site like ours that is unwilling to pull punches in its reporting and analysis.
While other websites have resorted to paywalls, registration requirements, and increasingly annoying/intrusive advertising, we have always kept Techdirt open and available to anyone. But in order to continue doing so, we need your support. We offer a variety of ways for our readers to support us, from direct donations to special subscriptions and cool merchandise — and every little bit helps. Thank you.
–The Techdirt Team
Filed Under: accidents, autonomous vehicles, pedestrian fatalities, self-driving cars
Companies: uber
Reader Comments
Human drivers outnumber autonomous by, say, 10,000 to 1...
Just a matter of time until unfixable flaws show up. Just like running Windows, a # of crashes per time are guaranteed.
Answer this: would you trust a car-control system if designed by Microsoft?
Re: Human drivers outnumber autonomous by, say, 10,000 to 1...
And you ignore that almost all of those crashes, if not all, were not caused by the autonomous vehicles.
"Just a matter of time until unfixable flaws show up."
No such thing. There will be flaws; they are always part of the path of development. I do hope most of the people developing them and making laws aren't like you.
I'd rather have autonomous cars all around. Their failure rates will be much lower than humans, that's guaranteed.
Very Limited Testing so far
Those "test miles" have all been under benign, restricted conditions ... and are not comparable to the normal driving conditions faced by American drivers every day.
That Uber automated car in Arizona was also operating under very restricted conditions, with a safety driver in the driver's seat.
Gazillions of development testing miles do not tell you the vehicles will operate in full, real-world conditions.
"Development" testing & "Operational" testing have different purposes and designs. Apples & Oranges
Re: Very Limited Testing so far
That's the first step, but of course automated cars are also being used in real life. Now, certain misguided lawmakers think they need to limit automated cars to benign, restricted conditions, so that people like you can crawl up and say that they've only been tested under benign, restricted conditions. It's a self-serving, useless process. Perish, thou!
Re: Re: Human drivers outnumber autonomous by, say, 10,000 to 1...
I'd rather have autonomous cars all around. Their failure rates will be much lower than humans, that's guaranteed.
In the end game, maybe, but we are not there yet. In the meantime the hubris of Google, Uber etc. is driving the technology in exactly the wrong direction.
At present the idea is that the car drives autonomously and the human supervises it in case anything goes wrong. This gives the human an absolutely terrible job: zero interest, huge responsibility, and total attention required. It isn't surprising that in the latest incident the human driver didn't effectively intervene.
If we want to go to self driving cars then the correct route (for now, whilst a human is still involved) is for the computer to monitor the human driver, not the other way around.
It's much less sexy for the computer but much better for the human driver. In fact the current situation is just repeating the history of autopilot systems - which have now been revised to be much more "computer monitoring human" than the other way around.
I'm afraid that Google, Uber etc are basically doing a publicity stunt with an immature technology at present - whereas with proper application of the technology we could save many lives every year.
If the technology for "monitoring the human" was pushed ahead, it would provide hard evidence of the safety value of the computer. At that point we could move to a "computer monitoring computer" system, with the "safety" computer being a proven system.
Re: Re: Re: Human drivers outnumber autonomous by, say, 10,000 to 1...
Essentially all high-end cars have sophisticated driver assist systems. They provide lane-keeping assist, distance-keeping cruise control, automatic braking, automatic parking, and lane-change warnings.
These systems are modestly expensive options on mid-range cars, and occasionally available on the low-end models.
I've been using a system for about two years. I quickly changed my opinion of it from being a luxury, to being a safety system more valuable than ABS.
Several freeways around here have the typical California 70MPH-to-stopped for no apparent reason. That's reinforced because people learn to panic stop when they see the first brake light flash. The radar is much better than I am at tracking multiple cars and deciding if this will be a cascading panic stop, or just drivers cautiously having their foot ready on the brake.
I expect in other regions that functionality will just silently exist, protecting the driver without them ever realizing it.
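The judgment the radar system is making here, whether a closing gap is the start of a cascading panic stop or just cautious drivers, boils down to a time-to-collision estimate. A minimal sketch of that idea, with a made-up threshold and numbers rather than any vendor's actual logic:

```python
def time_to_collision(gap_m: float, closing_mps: float) -> float:
    """Seconds until contact if the closing speed stays constant; inf if the gap is opening."""
    if closing_mps <= 0:
        return float("inf")
    return gap_m / closing_mps


def should_brake(gap_m: float, closing_mps: float, threshold_s: float = 2.5) -> bool:
    """Brake when the projected time-to-collision drops below the threshold."""
    return time_to_collision(gap_m, closing_mps) < threshold_s


# A car 40 m ahead, closing at 25 m/s (their panic stop, our 70 MPH):
print(should_brake(40, 25))  # 1.6 s to contact: brake now
# Same gap, closing at only 5 m/s: 8 s of margin, just cover the brake pedal
print(should_brake(40, 5))
```

The radar's advantage over a human is simply that it computes this for every tracked car at once, instead of reacting to the first brake light.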
Re: Re: Re: Re: Human drivers outnumber autonomous by, say, 10,000 to 1...
You might have missed it, but that is already happening.
No - I knew that lots of vehicle manufacturers were doing this. My point was that really - that type of approach is the way forward. What Google, Uber etc are doing is probably a dead end.
Several freeways around here have the typical California 70MPH-to-stopped for no apparent reason.
Yes - I calculated once that the effect travels backwards up the carriageway at about 1500 mph!
Re: Human drivers outnumber autonomous by, say, 10,000 to 1...
https://www.quora.com/How-many-drivers-are-on-the-road-at-any-given-time-in-the-US
That's around 5 million cars on the road in just the LA area per day. I'm seeing other numbers of over 250 million cars on the road in the U.S.
So these self-driving cars are a fraction of a fraction of the cars on the road. Who can do the math: the number of normal cars on the road versus people killed, against the number of self-driving cars versus people killed. Does it basically end up close? Worse for self-driving cars? How many of them are there in total? Is it even more than 100 of them? I don't know.
This person that got hit wasn't in a crosswalk. Not that it's an excuse to get hit, but how did the person get hit? Was the person running out into the street from behind something, where even a human would never have been able to stop? Seems to be a lot of people getting hit by cars in that state! Must be a lot of jaywalkers.
Until we get the whole story, who knows. One thing is for sure: until you get the human element out of the way, you're going to have this transitional phase of humans doing dumb things and crashing into self-driving cars that did nothing wrong. People running into the street is natural selection in action.
Re: Re: Human drivers outnumber autonomous by, say, 10,000 to 1...
And that's an important question, because in most cases crossing in the middle of the block is legal. (There was a news story a few weeks back about police wrongly charging people with jaywalking, which is illegal only where there are traffic lights at each end of the block.)
Walking from the bus stop to my office, there's a stretch with no sidewalk. In the winter you must walk on the very busy road. It's not jaywalking, but I've nearly been hit a few times.
So if the software is giving pedestrian detection a lower priority away from crosswalks, it needs to stop doing that where there's no sidewalks, and where mid-block crossing is allowed. That's more data that needs to be in the car's internal map.
Re: Re: Re: Human drivers outnumber autonomous by, say, 10,000 to 1...
So if the software is giving pedestrian detection a lower priority away from crosswalks, it needs to stop doing that
Actually it should NEVER give pedestrian detection a lower priority.
The problem, as I know from bitter experience, is that with hindsight it is always possible to see how the system could have been coded in such a way as to avoid a particular incident - but then when you do that something else breaks....
Re: Re: Re: Re: Human drivers outnumber autonomous by, say, 10,000 to 1...
From context, I read the bit you quoted as "the priority it gives to detecting pedestrians when away from crosswalks is lower than the priority it gives to detecting them when at crosswalks", not "when away from crosswalks, the priority it gives to detecting pedestrians is lower than the priority it gives to something else".
Re: Re: Re: Re: Re: Re: Human drivers outnumber autonomous by, say, 10,000 to 1...
As I said, I read that as "the priority on X in Y situation is lower than the priority on X in other situations".
To respond to that by saying "the priority on X should never be lower" seems nonsensical. The only way I can think of to make sense out of it, without assuming that you misunderstood the original statement, is as a confusing way of saying "the priority of X should always be maintained at the same high level".
The most natural way to read "the priority on X should never be lower", to my eye, is as being based on the assumption that the original statement was equivalent to "in Y situation, the priority on X is lower than the priority on something else". Since I don't read the original statement as saying that, I find this confusing, so I asked for clarification - although I may have done so in a less-than-ideally-clear way, myself.
Ban cars. Go through multiple checks to see that you're a responsible person.
Beat your husband, wife, kid? No car for you.
Ever been convicted of a crime with a jail sentence of over a year, even if you served no time? No car for you.
Feel depressed? No car for you.
Ever think of driving angry?
Well, we have an app for that, one that makes you wait three days to drive if you already have a car (to cool down, you know).
So hey, kids, get behind something that kills more people each year than the 2nd ever did.
I mean, hey, just think of yourselves for once.
Re: Re: Broken Sarcasm meter
Except he's seriously wrong on one point: we (USA) run about 30,000 firearms deaths per year, with 2/3 of them suicides. And yeah, you can argue how many of the remaining 10K involve drugs and drug dealers; if you want to solve that problem, rationalize drug policy!
Re: Re: Re: Re: Broken Sarcasm meter
Concealed carry laws reduced murders by 8.5%, rapes by 5%, aggravated assaults by 7%, and robberies by 3%.
As for gun-free zones: with just one exception, every public mass shooting in the USA since 1950 has taken place where citizens are banned from carrying guns. Despite strict gun regulations, Europe has had 3 of the worst 6 school shootings.
But still more people are killed with cars every year
Re: Re: Re: Re: Re: Broken Sarcasm meter
Certainly gun control has meant no mass murders in the UK of the kind you have regularly. Nor in Australia.
Re: Re: Re: Re: Re: Broken Sarcasm meter
Despite strict gun regulations, Europe has had 3 of the worst 6 school shootings.
And after each one the rules were tightened and there were no more incidents of that type in that place.
Whereas in the US - after each incident, much handwringing and "never againing" ... aaaand it happens again ....
Re: Re: Re: Re: Re: Re: Broken Sarcasm meter
Most countries have compulsory 3rd party insurance for cars - and it is enforced with teeth. (In the UK - no insurance - take your car away and crush it)
Why not compulsory insurance for guns?
Re: Re: Re: Re: Re: Re: Re: Re: Broken Sarcasm meter
No insurance covers the deliberate acts of the insured,
I think you'll find that that is not actually true - perhaps surprisingly:
https://www.digbybrown.co.uk/solicitors/clients/can-motor-insurers-be-liable-for-the-deliberate-criminal-acts-of-drivers
Re: Re: Re: Re: Broken Sarcasm meter
I have already submitted my application for a concealed autonomous vehicle. Bad guys watch out.
Gotta keep an eye on how this will develop. I'm hoping it was some sort of negligence by the pedestrian. I'm not trying to blame the victim or take the death lightly but if it's something with the tech it's gonna be a blow to autonomous cars.
Re: Re:
But I suppose if it pulls over and waits until the monsoon is over, then it's smarter than 90% of Phoenix drivers.
Re: Re:
At least the self-driving cars will be programmed with some sensible snow related driving actions. People from southern states, not so much.
Re: Re: Re:
"You be trippin' yo"
Re:
Gotta keep an eye on how this will develop. I'm hoping it was some sort of negligence by the pedestrian.
In this situation it is ALWAYS the machine's fault.
That is how the public will view it.
Re: Re:
That has already happened.
Re: Re:
1. or so the vendors claim
2. that you know of
3. today
4. and they don't need to be
There's a lot more to be said on this, but for the moment: a person is dead, and that's a tragedy. The debate can wait.
Re: Re:
So you are claiming there is no wireless interface?
I remember reading that there is such an interface, so why do you think it is impervious?
Re:
I know you're joking, but I am concerned about deliberate attacks, from simple griefing (painting weird shit on roads or signs to confuse the cars' image recognition) to complex attacks on the networks by organized crime and hostile nations.
I don't know how robust these cars are against external attacks, and I'm afraid the only way we're going to find out is the hard way.
Re: Re: Re:
Signs are identified with cameras, not lidar. The cars' image recognizers can be fooled.
People will be using this as a "reason" to stop the self-driving future. They will discount the fact that humans have killed more people with cars than AI has, dismissing it as comparing apples to oranges. While they aren't the same, AI seems to be less murderous than humans.
We are well on our way to the media-hyped future where this brand of AI is deemed more likely to kill, and to rules banning these AIs, rather than anyone noticing that a human overrode the AI in each of these cases.
The simple thing we have learned is that even a driver who follows every rule is still no match for the human ability to ignore the world around them.
Re:
https://xkcd.com/1958/
Re: Re:
So much this. Also the fact that, to be a great thing, AI cars only need to have fewer accidents than human-driven ones. If statistics went from "37,461 people a year killed in car accidents with human drivers" to "20,000 a year killed in car accidents with AI drivers. There are zero human drivers," that would be a really great thing.
But I get the feeling, if 20,000 people were killed per year in AI driven cars, all the headlines would read "KILLER AI! It's Skynet! Hide yo wife! Hide yo kids!"
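The arithmetic behind that hypothetical is worth spelling out. A quick sketch using the commenter's own (hypothetical) numbers:

```python
human_deaths = 37_461  # U.S. road deaths with human drivers, the figure cited above
ai_deaths = 20_000     # the commenter's hypothetical all-AI figure

lives_saved = human_deaths - ai_deaths
reduction = lives_saved / human_deaths
print(f"{lives_saved} lives saved per year ({reduction:.0%} reduction)")
```

Even an all-AI fleet that still killed 20,000 people a year would cut road deaths nearly in half, which is exactly the commenter's point.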
I'll put autonomous vehicles up against human-driven vehicles any day of the week.
And even the pedestrian deaths involving AVs will be found to be almost 100% the pedestrian's fault, versus those involving human-driven vehicles.
The goal is higher than that
That doesn't mean they were unavoidable, though, even if the vehicle didn't actively chase and kill pedestrians.
AVs have some serious potential, but based on the relative crudeness of even today's computer vision/LIDAR systems, pedestrians are still a major problem for autonomous vehicles.
Compare and contrast
I would like to see not only the police investigation reports on these 11 accidents, but the insurance companies' reports on their investigations into these 11 accidents. It's not like I don't trust the police reports (cause I don't), but I want to see the differences between the two sets of reports.
In addition, I don't think we will have a complete understanding of the 'egregiousness' of the Uber car until we understand the nature of all the accidents. While no-fault might make the 'driver' liable, we all know that they are not always at fault.
Dangerous Deer....
Whether it's the kid running into traffic between two cars (or out of a crowd on the sidewalk) or the deer deciding to jump in front of a car, or even into a stationary car, it's one of those things that every driver should be aware is a potential accident situation.
Now, I do hope Uber isn't going to try to game its way out of this. Hopefully there's video footage to be examined; that's certainly a reasonable expectation for an autonomous vehicle.
Someone was killed. Doesn't matter whether the car driver was a human, software, or software that could be over-ridden by a human.
(If anything, perhaps collisions involving cars and deaths when only human operators are present are blown completely under proportion.)
Part of the problem is accountability. With a human driver, it's generally straightforward to blame the human (of course, if the car accelerates on a whim without human intervention or whatever, that's a different story).
With software-driven cars, do you blame the technician in the driver's seat who may be able to override the software pilot? Do you blame the software developers? It can become very difficult to place blame. This is not just a matter of playing the "blame game". The concept of "accountability" has a positive reason for existing, which is to identify the problem so it can be fixed. It becomes more difficult to solve a problem if the source of the problem is difficult to determine.
Re:
Yes, if only there were some sort of legal entity whose purpose was to represent both the technician and the developers and assume liability for any accidents they caused.
Re: Re:
[ link to this | view in chronology ]
Re: Re: Re:
I'm talking about corporations.
It doesn't matter whether the accident was caused by the person behind the wheel or a defect in the software. In either one of those cases, the legally liable party is Uber, the company.
Re: Re: Re: Re: Re:
I must admit that I do not know the code well enough to tell for certain, but I don't think there is any way here that fault or blame can ever be placed on the pedestrian.
Re: Re:
Dial 1-800-eat-shit
Re:
You mean the official "safety driver", who had one job?
Accountability will be difficult when the tech becomes generally available and doesn't need to be monitored by humans. But at present, we have a person who is there for the purpose of being accountable.
Re:
In short, self-driving cars would be treated like a tool, with the car owner being the one responsible for it.
As an analogy, if your brakes are broken and you kill someone because you can't stop your car, you have to show that the car was broken and that you weren't negligent with the car's maintenance.
At that point, depending on the legal system, they'd make the car maker/shop responsible for it, whether directly (suing them) or indirectly, by making you (or your insurance company) pay for the damages and then having you (or the insurance company) sue the shop/maker.
I mean, responsibility is the last thing they will leave behind.
They're a little slow at intersections, I've never seen one on the freeway, and my wife says she saw one run a red light once. But on the whole, I trust them more than I trust Phoenix drivers.
This is a tragedy. I'm no fan of Uber as a company, but they appear to be responding correctly so far. We don't know who was at fault yet; I'll wait to hear what they find out in the investigation.
But, crass as it is to reduce a human life to a statistic, self-driving cars have killed fewer people than human drivers have in a comparable number of hours in the same area.
Gee whiz, what a surprise... I wondered what the odds of someone at TechDirt would use that forum to try sweep a fatality under the rug because that fatality affects google's bottom godless dollar line. And yep - sure as shit, and right on cue, someone at TechDirt is doing exactly that... trying to sweep a fatality under the rug as if it means "nothing". Perhaps an aluminum baseball bat to the side of your cranium would mean nothing to certain people. I know if I saw you go down in that fashion it would literally mean nothing to me.
Re:
someone at TechDirt is doing exactly that... trying to sweep a fatality under the rug as if it means "nothing".
Nothing in this post does that. What it does is put it into context -- and notes that if you really do care about lives, we should be pushing for making the technology better, faster, so that it can save more lives of all the people killed by cars in other instances.
But, of course, you'd have to read the post, and not just be looking for some bullshit fake way to attack us to understand that... and I guess that's too hard for some people.
Re:
And yet you took the time out of your busy day of sucking dicks to tell us that...
Mathematical Error: Apples and Oranges
2. The "merely one death" and "only rear-ended at a red light" defense is an absolutely inhumane argument.
People are not scared of the road-death statistics. They are scared of the robotic approach to trolley problems.
They don't care whether the AI is doing the moral calculation or the engineer.
They want to do the moral calculation themselves, like they do now, when they drink and drive and text while driving.
That is the part that worries me. The two sides have no care for me and my family. They only fear, self-servingly.
Re: Mathematical Error: Apples and Oranges
It's Phoenix.
The phrase "10 times more dangerous" does not appear anywhere in the article.
"Merely one death" may be a callous way of putting it, but the goal is to reduce the number of deaths. We should always strive to make self-driving cars as safe as possible, but the question of whether or not they're ready for wide use is not "Are they perfectly safe?", it's "Are they at least as safe as human drivers?"
I'm not sure what you mean by "only rear-ended at a red light defense" but presumably you're referring to this part of the article:
I don't see what's inhumane about this. What they're saying is that most collisions involving autonomous vehicles are the fault of human drivers running into them.
I would add that, in my experience, "actually stopped before turning right on red" isn't quite a fair way of describing the situation. I don't see a lot of Waymo cars (they're more concentrated in the Chandler area; I live in Tempe and work in Phoenix), but the Uber cars I've seen really do behave unexpectedly in intersections; it's not just that they stop at red lights, it's that they slow down earlier than human drivers do (and not just for red lights) and proceed very slowly through intersections. This is probably safer for pedestrians and cyclists, but I can see how it would increase the likelihood of rear-end collisions; I don't like being behind them.
Nonsense.
Most people don't even know what the trolley problem is.
I don't think most people consider the trolley problem or look at this from a moral perspective at all. I think what people are concerned with is:
They'll be safe, just give us your infrastructure
In truth, I don't see these working without human operators somewhere in the mix, and therefore they aren't really autonomous*. Call me when they can operate somewhere other than a sunny, dry state under almost ideal conditions.
* (This car had a driver. They just weren't driving.)
...........
I'm waiting for my flying autonomous car. (My autonomous bike is so convenient. And those autonomous roller skates... excellent!)
Re: They'll be safe, just give us your infrastructure
Given the local driving style, perhaps they already have.
Re: They'll be safe, just give us your infrastructure
That's one sunny, dry city; most of Arizona is not actually a desert.
And...you get that the reason they're going for locations with predictable weather, flat land, and simple block layouts is that this is early testing, yes? You have to start somewhere. It would be foolish for the initial test market to be rural Appalachia, San Francisco, or even northern Arizona.
Some five million people, however, do live in the Phoenix area. I'm one of them. I already share the road with these vehicles; for me, it's not a question of "call me when they're ready to operate in my area," because they already are operating in my area.
And, as I've said elsewhere, from what I've seen of them I trust them more than human drivers.
What Was the Human Backup Doing?
Re: What Was the Human Backup Doing?
I can think of three possibilities:
The collision was one that a human driver couldn't have avoided. For example, someone walking into the street from between two parked cars.
The collision could have been avoided by a human, and the driver was attentive, but the extra time it took to realize that a collision needed to be avoided and take control didn't leave enough time to actually avoid it. For example, the pedestrian steps out onto the street at T-0:05, the driver realizes that the car isn't stopping at T-0:03, takes control at T-0:01, and cannot swerve in time.
The collision could have been avoided by a human, but the safety driver wasn't actually paying attention to the road.
Any of the three seems like a viable possibility at this point; we'll have to wait until further information comes out to know which.
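The takeover timeline in the second scenario can be roughed out with basic kinematics. A minimal sketch, assuming a hypothetical travel speed of 40 mph, a 2-second notice-and-take-over delay, and hard braking on dry pavement (none of these figures come from the actual incident):

```python
# Back-of-envelope stopping-distance estimate for the "driver takes over" scenario.
# All numbers are illustrative assumptions, not facts about the Tempe crash.

MPH_TO_MPS = 0.44704

speed = 40 * MPH_TO_MPS      # hypothetical travel speed in m/s (~17.9 m/s)
reaction_time = 2.0          # s: notice the car isn't stopping AND take control
decel = 7.0                  # m/s^2: hard braking on dry pavement

# Distance covered before the human even starts braking:
reaction_distance = speed * reaction_time

# Distance covered while braking to a stop: v^2 / (2a)
braking_distance = speed ** 2 / (2 * decel)

total = reaction_distance + braking_distance
print(f"reaction: {reaction_distance:.1f} m, braking: {braking_distance:.1f} m, "
      f"total: {total:.1f} m")
```

Under these assumptions the car travels roughly 36 m before braking even begins, and needs close to 60 m in all, so a hand-off that starts a couple of seconds late can easily be too late.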
[ link to this | view in chronology ]
It's true that human drivers don't always follow the law, are less careful, and are, in general, worse drivers than self-driven cars.
But that's when things work properly.
Blind them (or rather, corrupt the information they receive, and I'm not talking just about some silly painted-over sign) and they are worse than humans.
You can't fool a human into thinking he can see when he actually cannot (well, you can, but only in politics and copyright). There have been cases of humans blindly driving wherever their GPS told them, to the point of having an accident, but my guess is that those are the exception, not the norm.
Fooling a computer into thinking that "everything is fine" is easier; and my guess is that such will be the base of ITS network hacks.
Make the network believe that everything is right even if cars are being crashed left and right, keep that for a minute in a big city, and the death count will go up. Fast.
That's if cars aren't given erroneous orders (like, you know, make them think that they are in a highway and that they should be at 120 kph, in the middle of a city).
[ link to this | view in chronology ]
The Safe Way To Cross Streets.
I have no experience of Uber self-driving cars, but I presume that the stick's motion would be sufficient to trigger the car's sensors.
My major concerns have to do with cars coming around blind angles, where the driver literally cannot see more than thirty feet ahead of him, and is nonetheless, driving fairly fast. There is one location where I once saw a five-way fender-bender develop in the space of two hundred feet from a standing start, when the traffic light turned green. Based on stopping distances and traffic density, a reasonable speed limit would be five miles per hour, but the posted speed limit is forty-five miles an hour. There are also places where I can sometimes amuse myself by out-walking traffic, to the humiliation of the drivers.
[ link to this | view in chronology ]
Um, that is potentially a misleading statement. Wired has also reported on this incident:
https://www.wired.com/story/uber-self-driving-car-crash-arizona-pedestrian/
but they offer a rather different version of that statistic.
I notice Techdirt offered no actual numbers for its own claim.
For the record, TheVerge in a November 2017 article claimed Waymo had logged 4 million miles:
https://www.theverge.com/2017/11/28/16709104/waymo-self-driving-autonomous-cars-public-roads-milestone
BTW, according to Arstechnica just this February:
https://arstechnica.com/cars/2018/02/waymo-now-has-a-serious-driverless-car-rival-gms-cruise/
Waymo is way ahead of the competition (e.g. Uber and GM's Cruise) in terms of actual mileage racked up by its automated cars.
Bearing that in mind, we come to another statistic: the one used in the title of this Techdirt article ("Pedestrian Deaths By Car In Phoenix Area Last Week: 11. But One Was By A Self-Driving Uber"), which is also potentially misleading.
There may have been more deaths from human-driven cars in Phoenix over the week in question, but there are far more humans driving cars in Phoenix over that same period than automated vehicles, and their combined mileage is vastly greater, whether one considers all the manufacturers in one lump or Uber by itself.
Which raises a question, just how many millions of miles HAD Uber's automated cars racked up prior to the accident?
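The denominator point above can be made concrete with a toy calculation. A minimal sketch, where both mileage figures are hypothetical placeholders (neither metro Phoenix's weekly vehicle-miles nor Uber's AV test miles is given in the article):

```python
# Why raw death counts mislead: rates need an exposure denominator.
# Both mileage figures below are made-up placeholders for illustration.

human_deaths = 11
human_miles = 150_000_000   # assumed weekly vehicle-miles in metro Phoenix

av_deaths = 1
av_miles = 1_000_000        # assumed AV test miles over a comparable window

human_rate = human_deaths / human_miles * 1e6   # deaths per million miles
av_rate = av_deaths / av_miles * 1e6

print(f"human: {human_rate:.3f}, AV: {av_rate:.3f} deaths per million miles")
# With these (made-up) exposures the AV per-mile rate comes out higher, not
# lower: the 11-vs-1 headline comparison says nothing without denominators.
```

Swap in real exposure figures and the comparison could flip either way, which is exactly why the raw counts alone are potentially misleading.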
[ link to this | view in chronology ]
Re:
Police chief: Uber self-driving car “likely” not at fault in fatal crash
[ link to this | view in chronology ]
Re:
These are good points, but one event out of any number of miles is not a statistically meaningful sample; it's noise.
I'm not saying we should wait until we have more data to do something about this; Uber has halted the program and the NTSB is investigating, and those are the correct results. But I am saying that we can't make meaningful statistical comparisons between autonomous vehicles and human-driven ones based on the data we have now.
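The "one event is noise" point can be quantified. With a single fatality observed, an exact Poisson confidence interval on the underlying rate spans orders of magnitude. A minimal sketch using SciPy's chi-squared quantiles (the 3-million-mile exposure is a placeholder, not Uber's actual total):

```python
# How uncertain is a rate estimated from a single event?
# Exact (Garwood) 95% Poisson confidence interval for k observed events.
from scipy.stats import chi2

def poisson_ci(k, alpha=0.05):
    lo = chi2.ppf(alpha / 2, 2 * k) / 2 if k > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (k + 1)) / 2
    return lo, hi

miles = 3_000_000          # placeholder exposure; not Uber's actual mileage
lo, hi = poisson_ci(1)     # one fatality observed
print(f"95% CI on fatalities per million miles: "
      f"{lo / (miles / 1e6):.3f} to {hi / (miles / 1e6):.3f}")
# The upper bound is ~220x the lower bound: one event pins down almost nothing.
```

Until the interval narrows by a few orders of magnitude, any per-mile safety comparison between the two fleets is statistically meaningless, which is the point being made above.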
[ link to this | view in chronology ]
Just a note...
[ link to this | view in chronology ]
Re: Just a note...
[ link to this | view in chronology ]
Re: Re: Just a note...
The police have already said that the car and its driver were likely not at fault. See link in thread above
[ link to this | view in chronology ]
Re: Re: Just a note...
So now it's the victim's fault?
Your theory only works if the pedestrian was so close to the car when she stepped off the kerb that the car had no time to react; and none of the accounts I have read claim that. Instead they point out that the car did not slow, which in turn implies that the pedestrian was a fair distance from the car when she stepped off the kerb.
[ link to this | view in chronology ]
Re: Re: Re: Just a note...
[ link to this | view in chronology ]
Re: Re: Re: Just a note...
It is quite probable that the victim was behaving erratically, but it is also quite likely that the safety driver was making no effort whatsoever to observe the margins of the road. The safety driver would not be a very good witness in court, obviously. Uber would be well advised to settle, and quickly.
The Tempe police seem to be taking the view that "what's a homeless person, more or less." The locale appears to be more or less on the Arizona State University campus, and, no doubt, the police chase homeless people away on an ongoing basis.
------------------------------------------------------------
Self-driving Uber vehicle strikes, kills 49-year-old woman in Tempe
Ryan Randazzo, Bree Burkitt and Uriel J. Garcia Published 10:13 a.m. MT March 19, 2018 | Updated 8:26 p.m. MT March 19, 2018
https://www.azcentral.com/story/news/local/tempe-breaking/2018/03/19/woman-dies-fatal-hit-strikes-self-driving-uber-crossing-road-tempe/438256002/
------------------------------------------------------------
[ link to this | view in chronology ]
Re: Re: Re: Re: Just a note...
Thanks for the link.
I wouldn't describe Mill and Curry as "more or less on the ASU campus", but it's not far from campus. There are some ASU-owned buildings up the road, but the edge of the campus proper is about a mile and a half away.
[ link to this | view in chronology ]
[ link to this | view in chronology ]
News from 100 years ago
I just don't trust this new technology. Won't someone please stop Mr Ford before his "automobile company" gets too much bigger?
[ link to this | view in chronology ]
If You Want to Make a Rational Argument
I am strongly in favor of autonomous vehicle development advancing as fast as is safely possible, but I am averse to bogus and sloppy arguments.
[ link to this | view in chronology ]
Re: If You Want to Make a Rational Argument
[ link to this | view in chronology ]
Re: Re: If You Want to Make a Rational Argument
1 fatality out of "N" miles driven is, unfortunately, the best estimate going for autonomous cars. Yes, the variance is huge, and, more importantly, it's very unlikely that Uber or Waymo is going to leave their system alone, so next year's statistics will be different.
Now, if the Phoenix area is killing 11 pedestrians a week, it should be possible to get a reasonably good estimate of the hazard rates in the Phoenix area.
Finally, "per mile driven" is also really crude. Accidents typically happen at intersections, and when certain other opportunities present themselves...like pedestrians in the road, or other cars to run into, or late at night when there's a bunch of impaired drivers.
[ link to this | view in chronology ]
Re: Re: Re: If You Want to Make a Rational Argument
I don't think that it is.
I'll reserve final judgement until after the final report, but early indications are that the car was not at fault. Given that we have exactly one case to draw conclusions from, it makes a whole lot more sense to accept the findings in that single specific case than to try to extrapolate a trend.
There's no trend. A thing happening one time is not a trend. You can't draw a line from a dot.
Certainly. We can draw conclusions about how dangerous human drivers are in general, how dangerous certain intersections are, etc. We've got plenty of data on those things. But we've still got a pretty small dataset for self-driving cars in general, and a smaller one for Tempe-area Ubers in particular.
Right. There are a lot of variables to control for. (And I think I already noted somewhere in the thread that I've never seen a self-driving Uber on a freeway or in a storm.) Distance driven is a limited and crude one, but it's useful for illustrating just how small our data size is at this time.
[ link to this | view in chronology ]
Re: Re: Re: Re: If You Want to Make a Rational Argument
[ link to this | view in chronology ]
Re: tl;dr
You and me both.
About 8 hours before this accident occurred, and about two miles east of where it happened, a car cut me off and I had to slam my brakes so hard that my dog fell off the backseat and onto the floor.
That wasn't an autonomous car; it was some asshole.
[ link to this | view in chronology ]
Re: Re: If You Want to Make a Rational Argument
[ link to this | view in chronology ]
Re: Re: Re: If You Want to Make a Rational Argument
I read the headline as "the media are focusing on the dangers of self-driving cars while accepting the dangers of manually-driven cars as a given." I don't see anywhere in the headline or the article where Karl (or whoever wrote the headline; I know headlines and articles aren't always written by the same person) says anything resembling "autonomous cars are 10 times as safe as manually-driven ones."
[ link to this | view in chronology ]
Re: Re: Re: Re: If You Want to Make a Rational Argument
[ link to this | view in chronology ]
Re: tl;dr
You're right. How silly of me to have read it as "Pedestrian Deaths By Car In Phoenix Area Last Week: 11. But One Was By A Self-Driving Uber" instead of "Pedestrian Deaths By Car In Phoenix Area Last Week: 11. But One Was By A Self-Driving Uber, Which Means Human-Driven Cars are 10 Times as Dangerous as Autonomous Ones."
[ link to this | view in chronology ]
Re: Re: tl;dr
'I read the headline as "the media are focusing on the dangers of self-driving cars while accepting the dangers of manually-driven cars as a given."'
Why don't you re-read my original post with the actual title firmly in mind, and see if you have anything relevant to say.
[ link to this | view in chronology ]
Re: Re: Re: tl;dr
Oh, now I get it. You're being disingenuous.
I read the headline and I interpreted it.
You read the headline and you interpreted it.
When you interpret the headline to mean "autonomous cars are 10 times as safe as manually-driven ones," your interpretation is objectively correct, even though those words do not appear in the headline.
When I interpret the headline to mean "the media are focusing on the dangers of self-driving cars while accepting the dangers of manually-driven cars as a given," I am "read[ing] the headline as something other than what was actually written."
Of course. How silly of me.
You'll find, if you scroll up just a hair, that Christenson and I had a discussion about statistically meaningful data and other relevant variables besides the number of miles driven. I think it was productive and informative. And I think the reason that it was productive and informative is that when Christenson responded to me, he actually engaged with the points I made instead of just doubling down on nitpicking about perceived flaws in the headline.
But sure, why not, I'll go back and reread your first post in light of the conversation we've had since.
Hm, okay then.
It is somewhat ironic that you should lecture somebody else about proportionality and then spend four posts arguing about a headline without ever engaging the substance of the article that appears underneath that headline.
It is considerably more ironic that you should close out the fourth such post with a crack about how I don't have anything relevant to say.
[ link to this | view in chronology ]
Re: Re: Re: Re: tl;dr
[ link to this | view in chronology ]
XKCD
[ link to this | view in chronology ]
Re: XKCD
You don't say.
[ link to this | view in chronology ]