Arizona Bans Self-Driving Car Tests; Still Ignores How Many Pedestrians Get Killed

from the plenty-of-blame-to-go-around dept

By now, most folks have read that Uber (surprise) was responsible for the first-ever pedestrian fatality caused by a self-driving car in the United States. Investigators in the case have found plenty of blame to go around, including a pedestrian who didn't cross at a crosswalk, an Uber safety driver who wasn't paying attention to the road (and therefore didn't take control in time), and Uber self-driving tech that pretty clearly wasn't ready for prime time compared to its competitors:

"Uber’s robotic vehicle project was not living up to expectations months before a self-driving car operated by the company struck and killed a woman in Tempe, Ariz.

The cars were having trouble driving through construction zones and next to tall vehicles, like big rigs. And Uber’s human drivers had to intervene far more frequently than the drivers of competing autonomous car projects."

All of the companies that contribute tech to Uber's test vehicle have been rushing to distance themselves from Uber's failure here, laying the blame squarely at Uber's feet. One supplier made it clear that Uber had disabled some standard safety features on the Volvo XC90 test car in question:

"Uber Technologies Inc. disabled the standard collision-avoidance technology in the Volvo SUV that struck and killed a woman in Arizona last week, according to the auto-parts maker that supplied the vehicle’s radar and camera.

“We don’t want people to be confused or think it was a failure of the technology that we supply for Volvo, because that’s not the case,” Zach Peterson, a spokesman for Aptiv Plc, said by phone. The Volvo XC90’s standard advanced driver-assistance system “has nothing to do” with the Uber test vehicle’s autonomous driving system, he said."

Mobileye, the company that makes the collision-avoidance technology behind Aptiv's system, was also quick to pile on, noting that, if implemented correctly, its technology should have been able to detect the pedestrian in time:

"Intel Corp.’s Mobileye, which makes chips and sensors used in collision-avoidance systems and is a supplier to Aptiv, said Monday that it tested its own software after the crash by playing a video of the Uber incident on a television monitor. Mobileye said it was able to detect Herzberg one second before impact in its internal tests, despite the poor second-hand quality of the video relative to a direct connection to cameras equipped to the car."

In response to Uber's tragic self-driving face plant, Arizona this week announced that it will be suspending Uber's self-driving vehicle testing in the state indefinitely.

Plenty have justly pointed out that Arizona also has plenty of culpability here, given that regulatory oversight of Uber's testing was arguably nonexistent. That said, Waymo (considered by most to be way ahead of the curve on self-driving tech) hasn't had similar problems, and there's every indication that a higher-quality implementation of self-driving technology (as the various vendors above attest) might have avoided this unnecessary tragedy.

Still somehow lost in the finger pointing (including Governor Doug Ducey's "unequivocal commitment to public safety") is the fact that Arizona already had some of the highest pedestrian fatality rates in the nation (of the human-caused variety). There were ten other pedestrian fatalities in the Phoenix area alone during the same week as the Uber accident, and Arizona had the highest rate of pedestrian fatalities in the nation last year. That record clearly illustrates that Arizona has some major civil design and engineering questions of its own to answer as the investigation continues.

Again, there's plenty of blame to go around here, and hopefully everybody in the chain of dysfunction learns some hard lessons from the experience. But it's still important to remember that human-piloted vehicles cause some 33,000 fatalities annually in the US, a number that should drop dramatically when self-driving technology is inevitably (and correctly) implemented.


Filed Under: arizona, autonomous vehicles, pedestrians, safety, self-driving cars
Companies: uber


Reader Comments



  1. identicon
    Anonymous Coward, 27 Mar 2018 @ 3:39pm

    I Didn't Know A Law Could Be Written And Passed That Quickly

    Has there been any trouble involving self-firing rifles or autonomous internet censorship lately? If we just need to blame robots to make motions pass I'm on board. Let's throw them tinheads under the self-driving bus.


  2. This comment has been flagged by the community.
    identicon
    Anonymous Coward, 27 Mar 2018 @ 3:40pm

    So how many humans are you willing to kill?

    You've jumped to ultra-techno "mad scientist" level of indifference to "natural" persons. Ya gotta break a few eggs to make an omelet, huh?

    And as I've noted, this minion may be only a re-write bot. That'd also explain the at least seven "accounts" having six year gaps yet return without showing least awareness of that LONG time, nor how ODD that suddenly recall Techdirt and password...


  3. This comment has been flagged by the community.
    identicon
    Anonymous Coward, 27 Mar 2018 @ 3:45pm

    PS: note also the plug for Waymo, GOOGLE subsidiary.

    "Waymo (considered by most to be way ahead of the curve on self-driving tech) hasn't had similar problems"

    But it will. Inevitable. We have ONLY Google's word for how often "intervention" is needed, no independent audit. Google surely also, as spook front, has its own "Men In Black" squad that covers up the incidents.

    And as minion hints, the Techno-Uber-Alles types are resolved to get this no matter what consequences to humans.


  4. identicon
    Nothing to see here, 27 Mar 2018 @ 3:51pm

    Nothing to see here

    I think what EVERY article I've read regarding this fails to point out is the following:

    NONE of the autonomous car companies testing in Arizona are required to do a basic test to put their cars on the road.

    Arizona requires no testing be done of the vehicles to determine how safe they are

    Arizona recently updated its rules and did not include any of the above provisions

    There is nothing preventing this from happening again with another car company, and the governor has done NOTHING to even appear to try and prevent it.


  5. identicon
    Anonymous Coward, 27 Mar 2018 @ 4:00pm

    Let's call these contraptions what they really are: Autonomous Killer Robots.

    If a human kills another human there is a process to deal with it. When a dev writes code for a machine that kills someone, why should they have zero culpability?


  6. identicon
    Bender, 27 Mar 2018 @ 4:06pm

    Re: I Didn't Know A Law Could Be Written And Passed That Quickly

    You can kiss my shiny metal ass


  7. identicon
    Anonymous Coward, 27 Mar 2018 @ 4:10pm

    I can see it now, the politicians will pat themselves on the back as they pass a law making jaywalking a felony.


  8. identicon
    tracyanne, 27 Mar 2018 @ 4:17pm

    Basically

    the way I see the issue is this.

    You program the AI to respond to "obstacles"... like pedestrians... the way a human driver would, that is, driving with the assumption that pedestrians can and will get out of the way, probably resulting in the "occasional" fatality.

    Or you program the AI to slow or stop for every pedestrian, resulting in a vehicle that takes forever to get anywhere.

    Option three is to create Autonomous vehicle roads, banning pedestrians from them. basically turning autonomous vehicles into some analogue of a light rail system.

    Personally I'm not sure I would want anything to do with any of those options (there may be others, I can't think of). But if autonomous vehicles are to respond like human drivers, we might just as well have human drivers... at least it's easier to find someone to blame in the event of an accident.

    I don't see the point of option 2

    and Option 3 seems like a step backwards, to me, plus it would be rather expensive to build and there's no guarantee pedestrians still won't get killed.

    I guess I'm just a reactionary who doesn't like ceding control to computers.


  9. identicon
    Anonymous Coward, 27 Mar 2018 @ 4:22pm

    Re: Basically

    " like a human driver would, that is driving with the assumption that pedestrians can and will get out of the way "

    you don't live in Arizona by any chance?


  10. identicon
    Anonymous Coward, 27 Mar 2018 @ 4:25pm

    Re:

    Let's call these contraptions what they really are: Developing Technology.


  11. identicon
    Anonymous Coward, 27 Mar 2018 @ 4:30pm

    I watched the video a number of times. If this was a HUMAN driver, it would be pretty clear-cut that it was the jaywalker's fault!!! Not in the crosswalk, in a dark area. Even IF the driver was paying 100% attention, the results would have been pretty much the same!!!

    BUT, this was a Self Driving UBER car. A car with LIDAR!!! Do you know what this means? it's like Radar and it paints a 360-degree picture of everything around it and it works just as well in the DARK as it does during the day!!!

    This person was crossing the street left to right. This person had a BIKE they were pushing also. It's a BIG target. There was no jumping out in front of the car. The person wasn't hidden where the LIDAR couldn't see.

    The person was dumb enough to walk right in front of a car in the dark, expecting it to stop. That's natural selection at work. On the other hand, that UBER car should have seen that person, further back in the dark, and stopped in more than enough time to not hit the idiot jaywalker!!!

    If anything, this was a perfect example of how Self Driving cars are better than a human, and yet it completely FAILED. The car didn't stop. it didn't swerve. Didn't slow down. It was like the person just wasn't there.

    Remember, a pedestrian ALWAYS has the right of way over a car. Even IF they are jaywalking. You take the risks that go along with that also. Just because someone is jaywalking, doesn't mean you can run them down, trying to score points.

    The UBER car completely FAILED. It clearly wasn't ready for primetime.


  12. identicon
    Anonymous Coward, 27 Mar 2018 @ 4:31pm

    Re: Re:

    Developing Technology shouldn't be on our public streets!!!

    Let me know when it's Developed.


  13. identicon
    Anonymous Coward, 27 Mar 2018 @ 4:35pm

    Re:

    Just as long as we agree that humans are oblivious killer idiots.


  14. icon
    Richard (profile), 27 Mar 2018 @ 4:43pm

    Correctly

    But it's still important to remember that human-piloted counterparts cause 33,000 fatalities annually, a number that should be dramatically lower when self-driving car technology is inevitably implemented (correctly).

    The keyword is correctly.

    If the collision-avoidance system that Uber disabled were present and working on all those human-driven cars, that fatality figure would already drop dramatically.

    The point is this. A completely autonomous car, on the public roads, given current technology levels, is nothing other than a publicity stunt.

    To run such a publicity stunt at present is stupid and selfish - and can only delay the technology. (Which will of course cost lives).

    Don't get me wrong here. I fully support the use of technology to fix our road death problems - I just think that the way that Uber, Google etc are going about it is wrong.

    At present we should concentrate on systems that monitor the human driver and intervene to prevent accidents. (As others have pointed out - this IS happening anyway). Once these technologies are fully developed and universal we can move on to fully autonomous vehicles.


  15. icon
    Uriel-238 (profile), 27 Mar 2018 @ 4:44pm

    How do we deal with a machine kills a human?

    The same way we've always dealt with machines killing humans: regard it as an accident. Trace the situation with diagnostics.

    Ultimately we like having someone to blame. If a driver fails to manage a situation and accidentally kills a pedestrian, we use that as justification to make his life Hell, even if there was no way his limited reflexes and attention span could navigate the situation.

    The failure here is not the robot driver; it's circumstances that challenge the presumption that there has to be malice or stupidity, or at the very least fault, behind every death.

    Just as there's (typically) no malice behind industrial accidents involving large machines, there's also no malice behind a vehicle that hits a pedestrian. The failure may end up being a software glitch, a sensor failure or (probably) a combination of several factors including a bad situation.

    So rather than blaming, we should seek to fix it. Because I betcha our driving software is already better than most human drivers. We just like having a natural person to blame and take reprisals against.


  16. identicon
    Anonymous Coward, 27 Mar 2018 @ 4:44pm

    Re: Re:

    I'm just calling it SkyNet and hoping John Connor is on the ball.


  17. icon
    Richard (profile), 27 Mar 2018 @ 4:46pm

    Re: I Didn't Know A Law Could Be Written And Passed That Quickly

    It isn't a law.


  18. icon
    Roger Strong (profile), 27 Mar 2018 @ 4:52pm

    It appears this ban affects only Uber. Waymo cars averaged almost 5,600 miles per driver intervention last year. Uber’s cars weren’t able to meet a target goal of 13 miles per intervention.

    Uber is losing $billions a year, but got $3.5 billion from Saudi Arabia’s Public Investment Fund.

    Arizona didn't ban self-driving cars; they banned a method of extracting money from Saudis.


  19. identicon
    Anonymous Coward, 27 Mar 2018 @ 4:55pm

    Re:

    "If a human kills another human there is a process to deal with it."

    Sell more of the device used to kill humans for fear that we won't be able to arbitrarily kill humans with said device if the government outlaws said device?


  20. identicon
    Anonymous Coward, 27 Mar 2018 @ 5:02pm

    Re:

    "The person was dumb enough to walk right in front of a car in the dark, expecting it to stop. That's natural selection at work."

    Thanks, I'll keep that tip in mind for the next time I'm forced to drive through inner city ghetto streets (which I tend to avoid like the plague especially at night) where people habitually walk out in front of moving cars for reasons that I've yet to understand.


  21. identicon
    Anonymous Coward, 27 Mar 2018 @ 5:11pm

    Re:

    Although no state official will ever publicly admit this, Uber's infamous "screw your laws" attitude was likely a determining factor in getting slapped down.


  22. icon
    Roger Strong (profile), 27 Mar 2018 @ 5:13pm

    Re:

    She wasn't jaywalking.

    I know the police initially used the word, but there's a lot about what they said that turned out to be - to put it politely - inaccurate.

    In most places it's only jaywalking to cross in mid-block when the intersections at both ends of the block have lights. That wasn't the case here.

    The eight-lane road the victim was attempting to cross has only one crosswalk in nearly two miles of road, making jaywalking a requirement of the urban design.

    Well. Perhaps it can be labelled jaywalking. But not to imply that it was illegal or even wrong.


  23. identicon
    Thad, 27 Mar 2018 @ 5:22pm

    Re: tl;dr

    ...it's "bite".


  24. identicon
    Thad, 27 Mar 2018 @ 5:27pm

    Ducey didn't do anything except election-year posturing. He banned Uber's AVs after Uber pulled them off the roads.

    There's an ongoing investigation. One of two things is going to happen: the investigation will conclude that Uber was at fault, or it won't.

    In the latter case, Ducey will rescind the ban. In the former, he could have waited to institute the ban until after the results were released.

    Announcing a ban before the end of the investigation, while the cars are already off the roads, accomplishes nothing except to make Ducey look like he's doing something about this.


  25. identicon
    Anonymous Coward, 27 Mar 2018 @ 5:28pm

    Re: Re: tl;dr

    With a 'y', not an i.


  26. identicon
    Anonymous Coward, 27 Mar 2018 @ 5:32pm

    >Uber Technologies Inc. disabled the standard collision-avoidance technology in the Volvo SUV that struck and killed a woman in Arizona last week

    Unless they could interface it to their own system, that makes perfect sense, as two computers fighting for control of the vehicle is a recipe for disaster. Does Volvo publish an API, or is their technology a trade secret?


  27. icon
    Uriel-238 (profile), 27 Mar 2018 @ 5:40pm

    You're going to have to explain this one to me.

    In California, pedestrians have the right of way. They are still required to cross at crosswalks, and being out in the street when you're not supposed to is considered jaywalking. But people get around without too many delays from obnoxious pedestrians.

    In New York, I understand that cars have the right of way on streets and pedestrians are expected to get out of the way. But at the same time I've also heard that drivers eagerly take their vehicles on sidewalks. Maybe pedestrians are saved by the daily traffic congestion reducing city traffic to five miles an hour.

    But I'm not sure why pedestrians in California would be less inclined to obey traffic laws when the cars are robot-driven than when they're human driven. Maybe you have a logic I haven't worked out.

    Here in California, pedestrians are not allowed on freeways, which makes sure that long trips are not hindered by pedestrian traffic. Again, maybe it's different on the East Coast.


  28. identicon
    Anonymous Coward, 27 Mar 2018 @ 5:45pm

    Re: Citation needed. -- Not from The GOOGLE, either.

    > Waymo cars were able to drive almost 5,600 miles last year without driver intervention.

    So you're told! But your only "evidence" is from the entity in question.


  29. icon
    Hugo S Cunningham (profile), 27 Mar 2018 @ 6:13pm

    Software to challenge and monitor human back-ups?

    It looks like the human back-up operator got bored and paid less attention after long periods of having nothing to do. That is a natural human reaction. Software should be added to keep the human operator engaged to a moderate degree and measure whether he seems alert.

    Similar software could be designed for human train operators in rural areas. Recent gruesome train accidents mostly seem to be due to operator inattention, not surprising when one considers how completely the details of operating a train are now automated.


  30. identicon
    Anonymous Coward, 27 Mar 2018 @ 6:21pm

    How about just ban cars?

    With this logic, they may as well just ban human driven cars. Human driven cars have killed other humans for the past century.


  31. identicon
    Anonymous Coward, 27 Mar 2018 @ 6:24pm

    (non) driver alertness

    While there have been numerous studies done on driver alertness, particularly in the case of long-haul truckers, I've not been able to find any research related to passenger alertness, which would be more applicable to the human "driver" riding in a self-driving car.

    This is purely anecdotal, but it certainly seems that a person riding as a front seat passenger is much MUCH more likely to doze off during a long trip than when that person is driving. As a self-driving car tends to make every human "driver" into basically a passenger, a big question is how much this might quantifiably affect that person's alertness, especially when lulled over time to trust the computer to drive safely and thus relax.

    Until the self-driving car is perfected and has no need for a human backup driver, one solution might be to see what can be done to both monitor the "driver" more effectively as well as perhaps provide some sort of suitable mental stimulation that would make up for the loss of driving sensation that would otherwise be keeping a hands-on driver alert.
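
    One plausible shape for that kind of system, sketched below with made-up names and thresholds (nothing here describes Uber's or anyone else's actual implementation): prompt the safety driver at random intervals and escalate if prompts go unanswered.

        import random
        import time

        # Hypothetical engagement monitor for a backup driver: prompt at random
        # intervals, escalate if prompts go unanswered. All names and thresholds
        # are made up for illustration; the acknowledgment check is simulated.
        PROMPT_INTERVAL = (60, 180)   # seconds between prompts
        MAX_MISSED = 2                # consecutive misses before escalating

        def driver_acknowledged():
            # Stand-in for a real signal: a button press, steering-wheel torque,
            # or an eye-tracker reporting "eyes on road". Simulated here.
            return random.random() > 0.05

        def monitor_loop():
            missed = 0
            while True:
                time.sleep(random.uniform(*PROMPT_INTERVAL))
                print("PROMPT: confirm you are watching the road")
                if driver_acknowledged():
                    missed = 0
                else:
                    missed += 1
                    print(f"WARNING: no response ({missed} missed)")
                    if missed >= MAX_MISSED:
                        print("ESCALATE: audible alarm, slow the vehicle, end the shift")
                        break

        if __name__ == "__main__":
            monitor_loop()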


  32. identicon
    Anonymous Coward, 27 Mar 2018 @ 7:04pm

    Many more manual cars

    I still don't understand this point. Do you realize how many more cars there are than self-driving cars?

    Like 4 or 5 orders of magnitude more. Saying "more regular cars kill people than self-driving cars" means absolutely NOTHING.

    If anything I've even seen someone "do the math" and prove that if everyone had one of those Uber cars, there would be like 130x more accidents.

    Don't know how good that math was but the point is these self-driving cars, and especially Uber's self-driving cars may still be WAY too unreliable - and yes, even more unreliable than the "average human driver".

    But of course self-driving cars need to be way better than the average human driver. If you think people are just going to accept deaths from self-driving cars at ANYWHERE CLOSE to the rate of regular car deaths, then you're nuts. The self-driving car death rate needs to be WAY WAY WAY smaller. No question about it.


  33. identicon
    Anonymous Coward, 27 Mar 2018 @ 7:05pm

    Re: Re:

    Not only that, the videos posted by other people on youtube showing the accident scene on other nights with other cameras show there to be a lot more light than the Uber camera showed.

    I have no idea why the Uber camera showed the area to be as dark as it did.


  34. identicon
    Anonymous Coward, 27 Mar 2018 @ 7:23pm

    Re: How do we deal with a machine kills a human?

    Because I betcha our driving software is already better than most human drivers.

    Dream on. I betcha it isn't. Not even close.


  35. identicon
    Anonymous Coward, 27 Mar 2018 @ 7:27pm

    Re: Re: Re:

    > I have no idea why the Uber camera showed the area to be as dark as it did.

    You mean by the time it was finally released? Gee, I wonder what could account for that...


  36. identicon
    Anonymous Coward, 27 Mar 2018 @ 7:30pm

    Re: Re:

    I know the police initially used the word, but there's a lot about what they said that turned out to be - to put it politely - inaccurate.

    You're not suggesting that they might have been trying to protect corporate interests, are you? How shocking!


  37. icon
    Roger Strong (profile), 27 Mar 2018 @ 7:34pm

    Re: Re: Citation needed. -- Not from The GOOGLE, either.

    Waymo Via NYT

    Beyond that I won't accept that you're skeptical, with the only "evidence" coming from the entity questioning.


  38. identicon
    Anonymous Coward, 27 Mar 2018 @ 7:42pm

    3 things that go together

    1. E-voting machines
    2. The Internet Of Things
    3. Self driving cars


  39. identicon
    Anonymous Coward, 27 Mar 2018 @ 7:53pm

    Re: How do we deal with a machine kills a human?

    Why shouldn't there be fault? There is fault when a builder shoddily constructs a bridge that crushes people, when an industry pollutes a water source, when a missile launched from a drone kills innocent people.

    Autonomous vehicles were created by humans, they are not a force of nature.


  40. identicon
    Anonymous Coward, 27 Mar 2018 @ 7:57pm

    We can't even trust our phones to be secure, yet we are willing to entrust our lives to these algorithms?


  41. icon
    Uriel-238 (profile), 27 Mar 2018 @ 8:04pm

    Trusting our lives to algorithms

    The number of algorithms to which we trust lives is astounding. Some of the big ones are in oil tankers, train systems and power reactors.

    And generally they're way better than humans at preventing system failures.


  42. icon
    Uriel-238 (profile), 27 Mar 2018 @ 9:20pm

    Bad faith vs. Perfect Storms

    When a bridge has a known fatal flaw, when it fails due to shoddy materials, then yes, that can be traced back to a designer or an engineer or a contractor. But if the bridge was designed to withstand a 7.5 earthquake but falls apart when a 9.2 earthquake hits, then there's no human at fault. That was the risk taken when building a bridge that can withstand only so much. (And really, a bridge that could withstand a 9.2 would probably be too expensive to build)

    When the Three Mile Island meltdown occurred, it was determined to have been caused by a perfect storm of component statuses that resulted in system collapse. Maybe a nuclear supergenius might have been able to predict it, but no one had. Newer power reactors have safeguards that will help reduce future complex failures and will definitely prevent a failure like the one at Three Mile Island, but there are an unlimited number of ways a system can fail, and we can only ultimately reduce their probability.

    Now I'm not entirely sure what happened with the autonomous vehicle. Did the car detect Herzberg at all? If not was it a sensor failure or a processing failure? So far, we don't know. The AI may have been released for field testing prematurely. The driving system may have insufficient redundancy in its sensors. The vehicle's drive train or braking system might have failed. We don't yet know.

    But the question is, could someone have prevented it, and failed to do so, either due to neglect, shoddy work or malice? If the answer is yes, then yeah, we have someone we can blame. If not, then the only thing to be done is to learn from this incident, and add safety features that would prevent it from happening again.

    Now yes, the last line of defense, the safety driver, wasn't paying attention. But his responsibility is redundant. He is to blame for not catching a failure. He's not to blame for why it failed.

    The thing is we build systems bigger than ourselves all the time. A train going at full speed is a force of nature, as is a main electric power plant (nuclear or otherwise). We depend on countless systems that are beyond our control, and it's never clearer when those systems fail.


  43. icon
    Roger Strong (profile), 27 Mar 2018 @ 10:09pm

    Re: Bad faith vs. Perfect Storms

    Waymo cars averaged almost 5,600 miles per driver intervention last year. Uber’s cars weren’t able to meet a target goal of 13 miles per intervention.

    So the safety driver wasn't the redundant "last line of defense." The goal was that some day he might be, but for now he was the non-redundant FIRST line of defense.

    The problem of keeping safety drivers attentive in driverless or Tesla Autopilot cars isn't new either. Or even before that: Driver Attention Monitoring Systems - using eye tracking and more - have been in production cars for over a decade.

    This wasn't a 9.2 earthquake hitting. It wasn't a perfect storm. It was shoddily designed system, not ready for using an unsuspecting public as test subjects.


  44. identicon
    Anonymous Coward, 27 Mar 2018 @ 10:13pm

    Re: Re:

    Arizona Revised Statutes §28-793 says:

    A pedestrian crossing a roadway at any point other than within a marked crosswalk or within an unmarked crosswalk at an intersection shall yield the right-of-way to all vehicles on the roadway.

    She was within her rights to cross, but it was her duty to make sure it was safe, and not step in front of a car.

    It is, of course, also the duty of the driver to "Exercise due care to avoid colliding with any pedestrian on any roadway."


  45. identicon
    Andrew D. Todd, 27 Mar 2018 @ 10:17pm

    Some Choices are No Fun For Uber

    To expand on what I said previously:

    https://www.techdirt.com/articles/20180322/09215539477/ubers-video-shows-arizona-crash-victim-probably-didnt-cause-crash-human-behind-wheel-not-paying-attention.shtml#c1918

    Based on the number of neurons in a human brain, the number of connections per neuron, and the rate at which neurons fire, a defensible performance estimate for the brain is something like 150 PetaFLops. There are a very few giant supercomputers in the world that fast. They fill up whole warehouses, cost hundreds of millions of dollars, and have power requirements in the tens of thousands of horsepower. None of them are old enough for kindergarten. That is, they simply have not been running long enough to achieve internal organization. No economically foreseeable computer is likely to match a human driver's ability to distinguish pedestrians from scattered garbage, etc.
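
    (For reference, that estimate is just a three-factor multiplication; the inputs below are common order-of-magnitude figures, and the per-synapse rate in particular is an assumption, so take the result as a ballpark.)

        # Rough reproduction of the ~150 petaFLOPS estimate above. The inputs
        # are order-of-magnitude figures, not measurements.
        neurons = 86e9                 # neurons in a human brain (commonly cited)
        synapses_per_neuron = 7_000    # rough average
        ops_per_synapse_hz = 250       # assumed operations per synapse per second

        ops_per_second = neurons * synapses_per_neuron * ops_per_synapse_hz
        print(f"{ops_per_second:.1e} ops/s, about {ops_per_second / 1e15:.0f} petaFLOPS")
        # ~1.5e17 ops/s, roughly 150 petaFLOPS -- in the range of the largest
        # supercomputers of 2018, as noted above.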

    Some years ago, I was walking through a parking lot, past the entrance of a bowling alley. In the doorway, next to her father, was a little Italian-American girl, perhaps four or five years old, with auburn hair, peach skin, and black doe-eyes. Quite a little darling. Her father was teaching her how to cross the street. He told her to look left and right, and to see if anything was coming. She looked rather doubtfully at me, walking towards them. Her father followed her gaze, and laughed, with a rather apologetic gesture to me: "Oh, not him, honey. He's not an automobile!" Listening to small children, one realizes their sense of the unreality of the world.

    That said, certain claims for Artificial Intelligence are, ipso facto, fraudulent. If someone says his system does it the way a human does it, he's lying. If you want to put it that way, Elaine Herzberg was killed so that Uber could perpetrate a fraud on prospective investors. You cannot get in on the ground floor of a government project. Government projects just don't work that way. Uber has an immense stake in convincing the public that self-driving cars will not require special roads, because special roads are the province of the government.

    Artificial Intelligence only works in certain extremely reductionist subjects, such as chess. A chess queen is defined in such a way that actual queens such as Queen Semiramis; Queen Cleopatra; Queen Zenobia; Queen Bodicea; Empress Livia; Empress Theodora; Elizabeth of Hungary; Eleanor of Aquitaine; Queen Phillipa (of Hainault); Anne Boleyn (Anne of the Thousand Days); Queen Elizabeth I (Tudor); Mary Queen of Scots; Catherine de Medici (France); Queen Isabella of Castille; Anne of Austria (queen of France); the Russian empresses Elizabeth and Catherine the Great; Marie Antoinette; Queen Victoria; and Tsu Hsi (the Chinese Dowager Empress) are all irrelevant and prejudicial. A chess master is, first and foremost, a master of deliberate forgetting. He can create walls in his mind to exclude irrelevant knowledge. At that level, the sheer mental load of deliberately forgetting is so great that it is much easier to have never known the irrelevant facts in the first place. A chess program is ultimately a triumph of ignorance.


  46. identicon
    Anonymous Coward, 27 Mar 2018 @ 10:24pm

    overstating your case

    "There were ten other pedestrian fatalities the same week as the Uber accident in the Phoenix area alone, and Arizona had the highest rate of pedestrian fatalities in the nation last year, clearly illustrating that Arizona has some major civil design and engineering questions of its own that need to be answered as the investigation continues."

    - 4 of the fatalities the same week were due to driver error, jumping the curb and hitting someone. 1 was an impaired driver that killed 1 pedestrian. And one distracted driver in an SUV hopped a curb and hit 4 pedestrians, killing 3.

    That's not an engineering question, as those curbs have been there quite some time without being "hopped." It's plain, simple human stupidity and violation of existing laws.

    - 2 more were pedestrians jaywalking in the middle of the respective blocks where they were struck. One of those drivers was impaired.

    - 1 woman was hit by a guy that lost control of his car in a parking lot, straying into the street at the bus stop where she was walking.

    - 1 man was killed crossing in the walk, but the driver tested negative for impairment.

    - And another 1 was a woman walking in the middle of the road at midnight.

    - 1 was a woman in the crosswalk, and the driver fled the scene but was caught later.

    Every one of these accidents was in a city: Phoenix, Scottsdale, Tempe. They were all downtown, and at first glance, either the driver or pedestrian in each incident was distracted or DUI. There are plenty of crosswalks in each area.


  47. icon
    Uriel-238 (profile), 27 Mar 2018 @ 10:43pm

    Sometimes there is fault. (Just not always)

    If the Uber cars required intervention every 13 miles or less, then yeah, it sounds like the safety driver might not have been doing his job. You had more information than I did.

    I think the point I want to emphasize is that autonomous vehicle programs or the notion of self-driving cars should not be generally condemned even if the failure in the Herzberg incident turns out to be a bad actor, say a negligent operator or a poorly tested system introduced to the field sooner than due caution should have allowed.

    Anonymous Coward was suggesting that whenever a bridge collapses, whenever we have an industrial accident, whenever a train derails then we can always attribute it to wrongdoing of a human being (or a committee acting as a single entity). And I was trying to say it's not always so simple, that sometimes systems fail because systems are complex. Heck, the sinking of the Titanic came down to a general weakness of common rivets as they were made circa 1912, that no-one predicted.

    In the specific case of Herzberg's death, then yes, it sounds like there might have been bad actors after all.


  48. icon
    PaulT (profile), 28 Mar 2018 @ 1:27am

    Re: Re: How do we deal with a machine kills a human?

    OK, let's see those stats. Because I'm sure the single death related to an AI driver is far outweighed by the human caused deaths, even after adjusting for driving time, etc.


  49. icon
    PaulT (profile), 28 Mar 2018 @ 1:32am

    Re:

    "Let's call these contraptions what they really are: Autonomous Killer Robots. "

    Only if you also call normal cars Human Operated Killer Robots. Otherwise, you're being really, really stupid.

    "If a human kills another human there is a process to deal with it."

    Yes there is, it's called due process, in fact. Something that's been lacking in the attacks on this issue in the public sphere.

    "When a dev writes code for a machine that kills someone, why should they have zero culpability?"

    Is anyone saying they should have none? But, nobody's shown that their code caused the accident. In fact, the video appears to show that a human would not have fared any better. Baying for blood doesn't change the fact that genuine accidents happen.


  50. icon
    PaulT (profile), 28 Mar 2018 @ 1:34am

    Re: Re: Bad faith vs. Perfect Storms

    "It was shoddily designed system, not ready for using an unsuspecting public as test subjects."

    I'm all for punishment if this is shown in court. Until then, what we have is luddites attacking a new technology out of fear like they have every new technology for centuries.


  51. icon
    PaulT (profile), 28 Mar 2018 @ 1:41am

    Re: Many more manual cars

    "If anything I've even seen someone "do the math" and prove that if everyone had one of those Uber cars, there would be like 130x more accidents. "

    That's meaningless unless you compare those figures to the number of accidents that actually happen with humans. Did you do that? Every comparison I've ever seen suggests the figure will still be much less than we have now.

    "The self-driving car killings needs to be WAY WAY WAY smaller."

    There's still just the one. Run a comparison of that against the number of deaths on the road every single day, and see which figure is larger. Many, many more people have been killed and injured during the freakout about this single death than would have been if such vehicles were commonplace. Feel free to prove me wrong if you like, but I've not seen any convincing evidence as yet.


  52. icon
    PaulT (profile), 28 Mar 2018 @ 1:53am

    Re:

    Yep. I'll trust an algorithm against many of the ignorant assholes I see on the road. If I happen to suffer because of one, at least it wasn't some drunk tosser trying to text at 80mph.


  53. icon
    PaulT (profile), 28 Mar 2018 @ 2:02am

    Re: overstating your case

    Nice try at deflection. But your breakdown is clearly trying to skew things, by your own words.

    "4 of the fatalities the same week were due to driver error"

    OK.

    "2 more were pedestrians jaywalking in the middle of the respective blocks where they were struck. One of those drivers was impaired. "

    So, at least one was partially due to driver error. What were the circumstances of the other one? You have a fixation on impairment, but what were the other circumstances? Speeding, not driving correctly for the weather conditions, any other issues? There's possible fault there too even if they weren't a DUI.

    "1 woman was hit by a guy that lost control of his car in a parking lot"

    So... driver error. Why did you not count that with the 4 above?

    "1 man was killed crossing in the walk, but the driver tested negative for impairment."

    OK, I'll take that one.

    "And another 1 was a woman walking in the middle of the road at midnight."

    I'd need more info on that one, was it dark? No lights? If not, then how was that not driver error? So, 6 or 7 rather than the 4 you initially claimed.

    "1 was a woman in the crosswalk, and the driver fled the scene but was caught later."

    OK, so quite likely driver error as the natural reaction of someone not at fault, impaired or otherwise driving illegally is not to flee the scene.

    So, the case really wasn't overstated, unless you left out some vital details in your rebuttal. Even within your own counterargument, 3/4 of the drivers were clearly at fault.


  54. This comment has been flagged by the community.
    identicon
    aosindia, 28 Mar 2018 @ 2:31am

    aosindia

    AOS(Academy of Success) give the opportunity to learn digital marketing. You will learn how to gain leads through digital marketing also you will learn about how to grow your career in digital marketing. We provide all basic learning about SEO(Search engine optimization), SMM(Social media optimization) also affiliate marketing etc. we help new student to get jobs in Mnc. For more info: 9999710635.


  55. identicon
    Anonymous Coward, 28 Mar 2018 @ 2:50am

    Re: Re: Re:

    Videos taken on other nights are not definitive about the lighting conditions at the time of the accident, as there may have been a light failure that has since been repaired.


  56. icon
    Ninja (profile), 28 Mar 2018 @ 3:55am

    Re: So how many humans are you willing to kill?

    I'll bite. How many humans do you want to avoid killing? That's over 33000. Self-driving has killed a grand total of 1 in millions of miles tested. How does that compare to humans? And this was Uber, how is Waymo doing? And GM?

    If anything, you are being the one willing to keep the status quo of thousands dying. Who is killing people, the ones striving for safer cars, roads or the ones trying to block such developments? Did vaccines come to fruition without any death? Do you really think the ones developing medicine want to see people die due to the testing part? Are you that stupid?


  57. identicon
    Rich Kulawiec, 28 Mar 2018 @ 3:58am

    This is an apples-to-oranges comparison, and it's wrong

    "There were ten other pedestrian fatalities the same week as the Uber accident in the Phoenix area alone [...]"

    These raw numbers mean nothing. If you want to compare fatality rates, then use either "pedestrian fatalities per operator hours" or "pedestrian fatalities per vehicle miles".

    Using operator hours as a metric normalizes over the accumulated time an ensemble of vehicles was in use. Using vehicle miles normalizes over the accumulated distance that an ensemble of vehicles travels. Of course, the more hours that vehicles are in use and the more miles that they travel, the more opportunity they have to be involved in accidents, including but not limited to pedestrian fatalities. [1]

    To provide a hypothetical example, if 5000 vehicles were operated for exactly 1 hour each, that's 5000 operator-hours, and if there were 10 pedestrian fatalities associated with those 5000 vehicles, then that's a rate of .002 fatalities/operator-hour. (Similarly for vehicle miles.) And if -- during the same time period -- 1 self-driving vehicle was operated for 10 hours with 1 associated pedestrian fatality, then that's a rate of .1. Which is 50 times higher.

    The real numbers are far more skewed than this example, of course; Phoenix has a population of roughly 1.5M. If only 10% of those people drive only 20 minutes (during the time period in question), that's 45,000 operator-hours. And as large as that is, it's still too low to be realistic: consider all the vehicles that are operated all day long (cabs, buses, trucks, delivery cars/vans/trucks, police cars, etc.) and consider the impact of twice-a-day commuting on the aggregate total. I wouldn't be surprised at all if the normalized pedestrian fatality rate per operator-hour for human-driven vehicles is a ten-thousandth of that for driverless vehicles, or much less. (And the same goes for vehicle-miles, although obviously the numbers would be calculated differently.)

    Feel free to use your own back-of-the-envelope estimates for these. AAA has published the figure of 17,600 minutes/year as an estimate for all drivers; that's 338 minutes/week or 5.6 hours/week -- a lot higher than the 20 minutes I used above. The Car Insurance Institute estimates about 13,500 miles/year per driver, or about 260 miles/week. Obviously these vary by state and city, but I'm sure the actuaries who do this for a living have solid estimates for Phoenix. However you do the calculation, you'll find that in the Phoenix area, the pedestrian fatality rate for driverless vehicles is several to many orders of magnitude higher than that for human-driven ones.

    [1] Obviously the kind of accidents they're likely to be involved in varies with where the vehicles are. Pedestrian fatalities are more likely to happen on streets and less likely to happen on highways. On the other hand, high-speed collisions are more likely to happen on highways and less likely to happen in urban centers. However, calculating this based on an ensemble of vehicles which encompasses the entire area (downtown, city, suburbs, exurbs, etc.) and over a sufficient period of time (much more than a single day, in order to account for commuting/non-commuting days) smooths out the variations enough to yield useful metrics that are applicable to the entire region.
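
    As a quick sanity check on the arithmetic, the same normalization can be written out in a few lines of Python. The first ratio uses only the hypothetical numbers above; the second plugs in the AAA driving-time figure plus a guessed Uber fleet total, so treat it as illustrative only.

        # Normalized pedestrian-fatality rates. The first comparison reuses the
        # hypothetical numbers above; the second uses the AAA 5.6 h/week figure
        # plus a guessed Uber fleet total, so it is illustrative only.
        def rate(fatalities, operator_hours):
            return fatalities / operator_hours

        human = rate(10, 5000 * 1.0)   # 5,000 vehicles, 1 hour each
        robot = rate(1, 1 * 10.0)      # 1 self-driving vehicle, 10 hours
        print(f"hypothetical ratio: {robot / human:.0f}x")                 # 50x

        human_week = rate(10, 1_000_000 * 5.6)   # ~1M drivers at 5.6 h/week
        robot_week = rate(1, 1_000)              # guess: ~1,000 fleet hours that week
        print(f"back-of-envelope ratio: {robot_week / human_week:.0f}x")   # ~560x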


  58. icon
    PaulT (profile), 28 Mar 2018 @ 4:55am

    Re: This is an apples-to-oranges comparison, and it's wrong

    "If you want to compare fatality rates, then use either "pedestrian fatalities per operator hours" or "pedestrian fatalities per vehicle miles"."

    That's a reasonable metric, but the problem is that the miles travelled by automated vehicles are still so low that a single accident can heavily skew things, while the miles travelled by traditional vehicles are so high that dozens of fatalities barely register as a blip. Extrapolating from such a small dataset is going to give you bad results.

    https://xkcd.com/605/

    "Feel free to use your own back-of-the-envelope estimates for these."

    I'd rather see some real figures, but even with the caveat above that seems to be lacking. I'd rather people with any power be basing things of what's actually happening, not random guesswork.

    "However you do the calculation, you'll find that in the Phoenix area, the pedestrian fatality rate for driverless vehicles is several to many orders of magnitude higher than that for human-driven ones."

    Yep, and there's still only one of those. I'm sure the families of the other dead will be pleased that their deaths have been reduced to an even more meaningless statistic by those afraid of automation than they would have been if they happened at any other time. I also somehow doubt you'd have been so concerned about the death rate before this one happened, since that would have argued the opposite point for you.


  59. identicon
    Anonymous Coward, 28 Mar 2018 @ 5:11am

    Re: Re:

    Agreed.

    Bad self-driving algorithms can be changed with a software update and can be applied to every car of its kind on the road after just one of them gets in an accident.

    Bad human driving behaviours are much, much harder to fix. There are people who actively ignore good advice and resist training, plus you have to reeducate each individual driver one at a time. Also, blowjobs and cash bribes can be used in exchange for passing your test with merit.

    Your car will not accept a blowjob or an envelope full of money to ignore a software update. Machines have no morals to corrupt and learn immediately after new data has been uploaded to them. They are much more trustworthy than human beings when it comes to keeping our roads safe.

    Face it, meatbags, you suck at driving. Hand over your keys -- the machines are more sober, smarter and safer than you.


  60. identicon
    Anonymous Coward, 28 Mar 2018 @ 5:23am

    Re:

    Announcing a ban before the end of the investigation, while the cars are already off the roads, accomplishes nothing except to make Ducey look like he's doing something about this.

    It accomplishes keeping them off the roads. Sorry for your investment loss.


  61. identicon
    Rich Kulawiec, 28 Mar 2018 @ 5:26am

    Re: Re: This is an apples-to-oranges comparison, and it's wrong

    "That's a reasonable metric, but the problem is [...]"

    Agreed. It would probably be better to use statistics at the national level in order to better represent all driverless vehicles, but that still leaves the problem of the massive difference in scale between the two sets of statistics.

    "I'd rather see some real figures [...]"

    I'm working on getting those. I'm curious to see what they are as well. Of course, for a fair comparison, we'd also need figures on operator-hour and vehicle-miles for the driverless vehicles too. However, because there aren't many, we could deliberately overestimate those (e.g. 168 hours/week/vehicle, which is the theoretical maximum) and then see what those calculations tell us.

    "I also somehow doubt you'd have been so concerned about the death rate before this one happened, since that would have argued the opposite point for you."

    I've been arguing against driverless vehicles for a long time. I commented here on this specific point because the citation of the death rate is being used to suggest that driverless vehicles are safer. Personally, I think it would be better to compare all accidents (that is, fatal and non-fatal, pedestrian and non-pedestrian) in order to use larger data sets and perhaps gain better insight. But it should be clear to everyone that using raw numbers without normalization is just wrong.


  62. identicon
    Anonymous Coward, 28 Mar 2018 @ 5:33am

    Re: Re: Re:

    Umm, you do realize that software is written by the same "meatbags" you're proclaiming it superior to, don't you?


  63. identicon
    Anonymous Coward, 28 Mar 2018 @ 5:41am

    Re: Trusting our lives to algorithms

    You left out the big one: modern airliners. And yet, none of those are autonomous. (Pilot-less planes still are not carrying commercial passengers despite the airlines' desperate desire for such).


  64. identicon
    Anonymous Coward, 28 Mar 2018 @ 5:48am

    Re: Re: Re: How do we deal with a machine kills a human?

    OK, let's see those stats.

    You first.


  65. identicon
    Anonymous Coward, 28 Mar 2018 @ 5:53am

    Re: Re:

    "The person was dumb enough to walk right in front of a car in the dark, expecting it to stop. That's natural selection at work."

    Anybody stupid enough to get in my way deserves what they get.


  66. icon
    PaulT (profile), 28 Mar 2018 @ 6:09am

    Re: Re: Re: This is an apples-to-oranges comparison, and it's wrong

    "Agreed. It would probably be better to use statistics at the national level in order to better represent all driverless vehicles, but that still leaves the problem of the massive difference in scale between the two sets of statistics."

    Yes, which makes it useless for a direct comparison at the moment, unless you want to push the more scary-sounding ratio that this single death provides.

    "However, because there aren't many, we could deliberately overestimate those (e.g. 168 hours/week/vehicle, which is the theoretical maximum) and then see what those calculations tell us."

    Again, I'd rather get some valid data rather than try to randomly generate figures that will by nature be both fictional and skewed toward whatever the person guessing wants to prove.

    "I've been arguing against driverless vehicles for a long time."

    I'm yet to hear a valid reason, apart from "I don't trust them". Which is fine, but I trust human drivers less. It's highly subjective without any figures for hard proof, which means we're both just stating an opinion. My opinion is I'd rather have these out on the roads than the type of people I have to deal with every day on my commute.

    "I commented here on this specific point because the citation of the death rate is being used to suggest that driverless vehicles are safer"

    That's because until overall figures are provided that reliably show otherwise, the data proves that they are. We are literally talking so much about this accident because it's the only one that's ever happened to this point. A few weeks ago, nobody had ever died in such an accident, and the tally for most manufacturers is still zero. Everybody's scrambling to try and prevent the next one, at Uber, at their competitors and in the public sector. The other people who died that weekend will barely make a blip on traffic statistics and will largely be counted as simply the cost of people having private vehicles.

    Again, I agree that the figures skewed both ways, but there is nothing to show that automated vehicles are either more likely to crash or more likely to harm when they do. In fact, what we know so far indicates they are less likely, and we're still at the prototype stage (meaning, you expect more accidents at this stage). By nature, the technology and its safety will improve before they go into mass production. Until something shows the above assumption wrong, I'm going to go with what we know, and that is that they have a decent safety record thus far and nothing indicates that it will worsen.


  67. icon
    PaulT (profile), 28 Mar 2018 @ 6:13am

    Re: Re: Re: Re: How do we deal with a machine kills a human?

    How? I didn't make any claim, I just asked for stats to back up the guesswork being made by another.

    But, thanks for demonstrating how weak the grasp of logic is among the people arguing against this tech so far.


  68. identicon
    Anonymous Coward, 28 Mar 2018 @ 6:29am

    Re: Re: Re: Re: Re: How do we deal with a machine kills a human?

    I didn't make any claim...

    Oh, Pauly, Pauly, Pauly. You know your previous statements are still visible, right?


  69. icon
    PaulT (profile), 28 Mar 2018 @ 6:38am

    Re: Re: Re: Re: Re: Re: How do we deal with a machine kills a human?

    Yes. The statement he was responding to makes no claim. If he's asking for data to back up some other statement of mine, then he should be quoting whatever he's asking for.

    You guys aren't really too stupid to understand how a conversation works, are you? You're just pretending, surely?


  70. identicon
    Anonymous Coward, 28 Mar 2018 @ 6:45am

    Re: Re: Re: Re: Re: Re: Re: How do we deal with a machine kills a human?

    What? You really don't understand how threading works? Click the "Thread" link at the beginning of the comments section. You may be shocked!


  71. identicon
    Anonymous Coward, 28 Mar 2018 @ 6:51am

    Re: Re: Re: Re:

    If the street lights failed, the car was still overdriving its headlights. It could have used highbeams there, or slowed to a safe speed.

    Of course if the street lights failed, Uber would be telling us incessantly. It didn't happen. Their video was a lie.


  72. identicon
    Anonymous Coward, 28 Mar 2018 @ 6:55am

    Tim Lee calculated "that Uber's cars have killed people at roughly 25 times the rate of a typical human-driven car in the United States".
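
    (For what it's worth, a figure in that ballpark is easy to reproduce with back-of-the-envelope inputs; the mileage and baseline below are assumptions, not Tim Lee's exact numbers.)

        # Rough reconstruction of the "~25x" claim using assumed inputs:
        # Uber had reportedly logged on the order of 3 million autonomous miles
        # with 1 fatality; US human drivers average roughly 1.2 deaths per 100M miles.
        uber_deaths, uber_miles = 1, 3e6
        human_deaths_per_100m_miles = 1.2

        uber_rate = uber_deaths / uber_miles * 1e8      # deaths per 100M miles
        print(f"Uber:  {uber_rate:.1f} per 100M miles")
        print(f"Human: {human_deaths_per_100m_miles:.1f} per 100M miles")
        print(f"Ratio: {uber_rate / human_deaths_per_100m_miles:.0f}x")  # ~28x, same ballpark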


  73. identicon
    Anonymous Coward, 28 Mar 2018 @ 7:11am

    Re:

    Small price to pay for such a fun new toy.


  74. icon
    PaulT (profile), 28 Mar 2018 @ 7:13am

    Re: Re: Re: Re: Re: Re: Re: Re: How do we deal with a machine kills a human?

    Yes, perfectly. I still haven't made any claim in this thread. I did state an opinion, but I have not claimed to have any data, so I cannot provide what AC moron #2 asked for.

    Are you hallucinating or are you just getting desperate for attention again?

    link to this | view in thread ]

  75. identicon
    Anonymous Coward, 28 Mar 2018 @ 7:19am

    Re: Re: Re: Re: Re: Re: Re: Re: Re: How do we deal with a machine kills a human?

    In that case, he was just asking for stats to back up your guesswork.

    I gave you the benefit of the doubt that maybe you were just demonstrating ignorance rather than duplicity. I can see now that I was wrong. Go ahead now, have your final say, as I really have nothing more to say to you.

    link to this | view in thread ]

  76. icon
    PaulT (profile), 28 Mar 2018 @ 7:22am

    Re:

    There's still only one death no matter how scary you want to make it sound, and it's highly debatable whether the tech was at fault for any element of the cause. If only you people weren't so transparently desperate.

    link to this | view in thread ]

  77. identicon
    Anonymous Coward, 28 Mar 2018 @ 7:25am

    I live in the area and at least until now saw self-driving cars all over the place between Waymo and Uber. The city of Tempe is at fault here too. The bus stop is across a dark, busy road with a large median in the middle of the street with paved walkways and signs instructing people not to walk on the median. The intersection is a quarter mile away and is 8 lanes, so people use that paved median to jaywalk. That doesn't excuse the woman, who was looking straight ahead, seemingly oblivious that she was walking across a road with moving vehicles. I don't think a driver could have avoided hitting her either, with how quickly she appeared in front of the car out of the dark on a 40 MPH road.

    link to this | view in thread ]

  78. icon
    PaulT (profile), 28 Mar 2018 @ 7:56am

    Re: Re:

    Certainly a smaller price than people are paying for the old toys.

    link to this | view in thread ]

  79. icon
    The Wanderer (profile), 28 Mar 2018 @ 8:01am

    Re: Re:

    If they wouldn't have gone back on the roads until after the conclusion of the investigation anyway, then announcing a ban before the end of the investigation accomplishes no such thing, as compared against waiting to announce one (or not) after the investigation ends.

    link to this | view in thread ]

  80. icon
    Cdaragorn (profile), 28 Mar 2018 @ 8:05am

    Re: Re: Re:

    So at what point do you consider it to be "developed"?

    There's no such thing as perfect technology, so that can't be the standard. It's already better than humans in normal driving conditions, so what exactly are you looking for?

    Technology is always developing. The cars you drive every day on your public streets are developing technology. You're imagining that there's some finish line that simply doesn't exist.

    link to this | view in thread ]

  81. icon
    The Wanderer (profile), 28 Mar 2018 @ 8:08am

    Re: Re: overstating your case

    I think that the initial group of four were not supposed to be the exhaustive set of the cases that were due to "driver error", but rather the cases that were due to "the form of driver error which involves the driver jumping the curb and hitting someone". It could certainly have been phrased more clearly, but I think the intent is visible.

    link to this | view in thread ]

  82. icon
    Cdaragorn (profile), 28 Mar 2018 @ 8:10am

    Re: Re: Re: Re: Re: Re: Re: Re: Re: How do we deal with a machine kills a human?

    Your "opinion" was just as much a claim as the person you're attacking.

    To quote: "Because I'm sure the single death related to an AI driver is far outweighed by the human caused deaths, even after adjusting for driving time"

    The fact that you also did not offer up any data to back your opinion still leaves it just as much a "claim" as his was.

    link to this | view in thread ]

  83. icon
    PaulT (profile), 28 Mar 2018 @ 8:30am

    Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: How do we deal with a machine kills a human?

    Wow, is English really hard today or something? There's three people stating an opinion. Nobody stated anyone made a claim until this idiot came in demanding data for a claim that wasn't made. Is this some weird transatlantic thing where something stated clearly as an opinion suddenly isn't one, or are people really, really unable to actually address the facts on this one so have to start attacking imagined claims?

    "The fact that you also did not offer up any data to back your opinion still leaves it just as much a "claim" as his was."

    Yes, so why is some tosser then demanding proof for a claim that was never made? Opinions don't need proof so long as they're stated as opinions. Which is what the words I typed were. Jesus.

    link to this | view in thread ]

  84. icon
    PaulT (profile), 28 Mar 2018 @ 8:34am

    Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: How do we deal with a machine kills a human?

    "I that case, he was was just asking for stats to back up your guess work."

    Why? As you stated it was a guess, I never stated it as fact. Why are you pretending it wasn't a guess, which by definition doesn't need proof as long as it's not being stated otherwise? Are you really that desperate to say things?

    "I can see now that I was wrong."

    No, I can still see that you're lying about what I said, though.

    link to this | view in thread ]

  85. icon
    PaulT (profile), 28 Mar 2018 @ 8:43am

    Re: Re: Re: overstating your case

    The intent appeared to be to blur the lines between driver and pedestrian error, and I'm always suspicious of such detailed listings with no source citation. It just jumped out at me that he grouped 4 of them together then went on to separately detail other incidents that clearly belonged in the same group.

    link to this | view in thread ]

  86. identicon
    From the Fuzzy Math departement, 28 Mar 2018 @ 9:00am

    Techdirt Big Tech bias

    Another obvious example of Techdirt's bias toward Big Tech companies. The writer is the one ignoring the numbers in Arizona (2016):
    - ~2,400,000 cars (statista.com)
    - 962 automotive deaths (azdot.gov)
    - 600 autonomous cars (azcentral.com)
    - 1 death in an autonomous vehicle accident.
    So obviously autonomous vehicles are much more dangerous, with 1 death per 600 vehicles compared to 1 death per 2,500 normal vehicles.
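
    A quick back-of-the-envelope check of the figures above (taken at face value, purely as a sketch) shows where the per-vehicle ratios come from and why they are fragile:

        # Reproduces the per-vehicle ratios quoted in the comment above.
        # The counts are the commenter's 2016 Arizona figures, not verified here.
        conventional_cars = 2_400_000
        conventional_deaths = 962
        autonomous_cars = 600
        autonomous_deaths = 1

        print(f"conventional: 1 death per {conventional_cars / conventional_deaths:,.0f} vehicles")
        print(f"autonomous:   1 death per {autonomous_cars / autonomous_deaths:,.0f} vehicles")

        # The autonomous ratio rests on a single event and ignores exposure
        # (miles or hours driven), so one more or one fewer incident would
        # swing it wildly; it is not a stable estimate.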

    link to this | view in thread ]

  87. icon
    The Wanderer (profile), 28 Mar 2018 @ 9:07am

    Re: Re: Re: Re: overstating your case

    But if the initial group is "fatalities from jumping the curb due to driver error", then the others don't belong in the same group.

    I'm not disputing that that group definition wasn't all that clearly conveyed, nor that in order to avoid being misleading it probably should have been expressed more clearly, but I still see it as being present.

    While I can see where you arrive at that assessment of intent, I do not see that intent as being apparent in the way that you apparently do.

    link to this | view in thread ]

  88. identicon
    Anonymous Coward, 28 Mar 2018 @ 9:15am

    Re: Re: Re:

    Please, this is Uber; They would've been back on the roads without anybody's permission anyway. The company loves giving the finger to authority and common sense.

    link to this | view in thread ]

  89. icon
    PaulT (profile), 28 Mar 2018 @ 9:18am

    Re: Techdirt Big Tech bias

    "The writer is the one ignoring the numbers"

    No he isn't. He's just not wilfully misrepresenting them like you assholes.

    Your desperation is clear, but your conscience must sting a little? Lying to try and suppress a technology that stands to save many lives in favour of a technology that kills thousands each month? Just a little bit, surely?

    link to this | view in thread ]

  90. icon
    PaulT (profile), 28 Mar 2018 @ 9:19am

    Re: Re: Re: Re: Re: overstating your case

    Fair enough, we can agree to disagree. Like I say, I'm always suspicious of uncited lists of facts where the writer goes into any detail. Almost without fail, they are misrepresenting something, deliberately or not.

    link to this | view in thread ]

  91. identicon
    Anonymous Coward, 28 Mar 2018 @ 9:25am

    A hundred years from now, people will laugh at the notion that humans used to be allowed to drive cars on public roads.

    link to this | view in thread ]

  92. icon
    An Onymous Coward (profile), 28 Mar 2018 @ 9:40am

    Re: So how many humans are you willing to kill?

    If more people cared about long gaps in posting history there'd be two of you!

    link to this | view in thread ]

  93. identicon
    Anonymous Coward, 28 Mar 2018 @ 9:44am

    Karl, the constant attempts to re-focus the issue onto how human drivers are just oh-so deadly comes off less like "We should not lose sight of this issue" and more like "Please ignore the manner in which a big tech company cut corners, and don't get suspicious of how other companies could do so in the future."

    I would personally love to see a series of articles on Techdirt of the ways in which self-driving cars will change the way we interact (or don't interact) with the world, and the technology's implications for our personal freedoms.

    For instance, the privacy implications of self-driving cars, and how a future in which humans are straight-up banned from driving in most countries could be detrimental to civil liberties. The tech is basically a repressive government's wet dream.

    There's also questions to be asked about if we'll be able to own our own AVs, or if all vehicles will be owned and operated by tech companies and car-companies-turned-tech-companies. If we do own our own AVs, how much would we be allowed to tinker with them or fix them ourselves, or will they be locked down with restrictive DRM that mandates you visit an authorized dealer?

    These are issues where the "The sooner humans are out from behind the wheel, the better" crowd and folks like the EFF who support tech-based freedoms (and more freedom in general) would butt heads. Articles about said issues would make for interesting reads and comment threads.

    link to this | view in thread ]

  94. icon
    An Onymous Coward (profile), 28 Mar 2018 @ 9:45am

    Re: PS: note also the plug for Waymo, GOOGLE subsidiary.

    Apart from your churlish insistence on name-calling, I somewhat agree with you on this specific point. Self-driving capabilities are still in their infancy and there *will* be more fatalities. AI will never be able to predict individual human behavior. As long as there are humans anywhere near where self-driving vehicles operate there will be problems.

    The tech will get better over time but I find it odd that we're doing live testing in crowded urban settings already. We're just not there yet, it's still experimental (to wit: the useless human "driver" just in case).

    link to this | view in thread ]

  95. icon
    An Onymous Coward (profile), 28 Mar 2018 @ 9:49am

    Re: Re: Re: Re:

    The difference here is that it's not a single individual driver you can hold accountable when they screw up. In this case you have to try to hold the corporation behind the tech accountable... Good luck with that.

    There may be no "finish line" but there is a point when the general public learns to trust automated vehicles more than human-operated vehicles. Until we get there the corporations ought to be forced to bond their testing so there is a readily accessible fund when they inevitably screw up. Something easier to access than having to sue some entity with massively deep pockets.

    link to this | view in thread ]

  96. icon
    An Onymous Coward (profile), 28 Mar 2018 @ 9:51am

    Re: How do we deal with a machine kills a human?

    If there was malice then it was no accident, by definition. It's not machine malice we should be afraid of, it's AI's inability to navigate a world full of unpredictable humans. Despite humans being murderous morons, AI is still worse at this than humans are.

    link to this | view in thread ]

  97. icon
    An Onymous Coward (profile), 28 Mar 2018 @ 9:56am

    Re: Re: Re: Re: Re: How do we deal with a machine kills a human?

    I've been working as a software engineer for close to 3 decades. I've worked on embedded systems, including medical devices, where bugs are intolerable. And yet... I avoid most tech, particularly that which presents a hazard to life and limb, as I *know* there is no such thing as bug-free software.

    As we've entered the world of "Deploy it quick! We can patch it later," software quality has gone down the crapper. Most embedded systems, the type of software that should be least prone to such problems (to wit: IoT, including vehicles), are riddled with issues. There is no reason in the world anyone should trust any software-driven system any more. Given that vehicles are basically guided cannonballs, we should be especially careful with how they're deployed.

    No, I'm not a fan of this tech but I do see it is inevitable. Some day we'll get there but for right now we should not be testing this experimental technology in crowded urban areas.

    link to this | view in thread ]

  98. identicon
    Anonymous Coward, 28 Mar 2018 @ 9:56am

    Re: Re: tl;dr

    damn - lol

    link to this | view in thread ]

  99. icon
    An Onymous Coward (profile), 28 Mar 2018 @ 9:59am

    Re: Re: Re: Re: Re:

    The car's and ambient lighting are irrelevant. Lidar doesn't require lighting.

    link to this | view in thread ]

  100. identicon
    Anonymous Coward, 28 Mar 2018 @ 10:02am

    Re: Re: Re:

    They knew what they were getting into.

    link to this | view in thread ]

  101. identicon
    Thad, 28 Mar 2018 @ 10:04am

    Re: Re: So how many humans are you willing to kill?

    "I'll bite. How many humans do you want to avoid killing? That's over 33000. Self-driving has killed a grand total of 1 in millions of miles tested. How does that compare to humans?"

    It doesn't. A single datapoint is not statistically significant. You can't draw a trend line if you only have one point.

    link to this | view in thread ]

  102. identicon
    Rich Kulawiec, 28 Mar 2018 @ 10:07am

    Re: Re: Re: Re: This is an apples-to-oranges comparison, and it's wrong

    "Again, I'd rather get some valid data rather than try to randomly generate figures that will by nature be both fictional and skewed toward whatever the person guessing wants to prove."

    Do note by using the theoretical maximum that I suggested that this stacks the deck *against* my point. I did so deliberately to avoid skewing the numbers of favor of my argument.

    "I'm yet to hear a valid reason, apart from "I don't trust them""

    I've provided some in previous commentary here, and I've referenced others. I'm overdue to write a long-form piece laying out some of them -- and there are plenty. One of my principal concerns is that driverless vehicles aren't special -- they're just another thing in the IoT, and the entire IoT is an enormous, raging dumpster fire of security and privacy failures. There are ZERO reasons to think that cars will be any better than toasters, and a lot of reasons to think that they'll be worse.

    I'll publish it when I have the time so that the arguments are laid out more clearly for analysis/critique. If you want to see a draft version, drop me an email and I'll send you what I have so far.

    link to this | view in thread ]

  103. icon
    An Onymous Coward (profile), 28 Mar 2018 @ 10:09am

    Re: Re: Many more manual cars

    There is insufficient data to draw any conclusions either way. Let's hope they find a safer way to develop their tech while we collect more data.

    link to this | view in thread ]

  104. identicon
    Thad, 28 Mar 2018 @ 10:11am

    Re: Software to challenge and monitor human back-ups?

    It seems like a second safety driver would help significantly. That would, of course, double the cost of drivers and reduce the maximum number of paying passengers by one per car; it would be a significant expense. And it's not clear that it would have helped in this case. But if I were making recommendations, that one would be up there.

    link to this | view in thread ]

  105. icon
    An Onymous Coward (profile), 28 Mar 2018 @ 10:11am

    Re: Re: Re:

    How so? If the stated fact is true, the old toys are 25 times safer...

    link to this | view in thread ]

  106. identicon
    Anonymous Coward, 28 Mar 2018 @ 10:12am

    Re: Re: Trusting our lives to algorithms

    Comparing modern airliner software with driverless vehicle software is a bit silly.

    The flight certified software used on commercial airliners receives much more scrutiny during code review, integration, test and subsequent field trials.

    The uncertified driverless vehicle software receives ???

    Modern airliners are capable of landing autonomously but airports/regulatory agencies are not ready to let them do so I guess. Also, the shuttle is/was capable and did.

    link to this | view in thread ]

  107. icon
    An Onymous Coward (profile), 28 Mar 2018 @ 10:15am

    Re: Re: Techdirt Big Tech bias

    I don't get how the ratios are a misrepresentation. If there were 4,000 times as many self-driving cars (to get to 2.4m) and the kill rate held, there would have been 4,000 deaths due to autonomous cars in the same period. Compared to 962 from human-driven vehicles, that's a pretty abysmal rate.

    Granted, we don't have nearly enough data on self-driven cars to draw any solid conclusions but the above isn't misrepresentation, just incomplete data.

    link to this | view in thread ]

  108. identicon
    Thad, 28 Mar 2018 @ 10:23am

    Re: Re: Techdirt Big Tech bias

    And I think anybody who's trying to extrapolate a ratio of deaths from autonomous cars versus manually-driven ones at this stage, with a single data point, is either being intentionally disingenuous or does not understand how statistics works. Whichever argument they're making -- "autonomous cars are safer" or "human drivers are safer" -- they simply don't have the data to back that claim up.

    I've had my disagreements with Rich Kulawiec on the subject of AVs, but he's absolutely right that if we want to draw any conclusions about the relative safety of AVs and traditional drivers, it would make a lot more sense to compare all the accidents we have data for, as "all accidents" make for a more reliable data set than "fatalities" (which is not a data set at all, it's a datum, singular).

    I do think Karl's right that the number of pedestrian deaths in the Phoenix area suggest some serious city design problems, as well. (I'd also be interested in seeing a breakdown by month. I suspect that people drive more aggressively when it's hot.)

    link to this | view in thread ]

  109. icon
    Uriel-238 (profile), 28 Mar 2018 @ 10:29am

    Re: This is an apples-to-oranges comparison, and it's wrong

    Last I checked in Statistics class a sampling of one is useless. And every time we have a Hurricane Sandy (or even the fucked up season that was 2017) our climatologists in their intellectual honesty have to admit this is one data point based on probability.

    We have to face the truth of a really huge error margin, that Autonomous cars may have been super lucky so far, or that Herzberg got super unlucky and driverless cars are safe as houses. The actual probability is somewhere in there, not necessarily near the midpoint.

    link to this | view in thread ]

  110. identicon
    Annonymouse, 28 Mar 2018 @ 10:35am

    Re: Basically

    Option 3 is called a railroad, and stupid humans have been killed on those even if the train was stopped.

    link to this | view in thread ]

  111. identicon
    Anonymous Coward, 28 Mar 2018 @ 10:35am

    Re:

    Sorry, APIs are copyrighted now (see Oracle case), so nobody can use them...

    link to this | view in thread ]

  112. identicon
    Anonymous Coward, 28 Mar 2018 @ 10:48am

    Re: Re: Re: Re: Re: SKYNET

    I know, let's design an AI to arbitrate all future Human/AI disagreements. It will be the System Keeping Your Neural Entity Tethered (SKYNET for short) and will ensure that we 'rogue' humans don't get too wild in our decision making processes...

    I'm sure it wouldn't be biased towards its own kind, or decide to take over the entire world either, but if it did... we can just patch it later, right?

    link to this | view in thread ]

  113. identicon
    Rich Kulawiec, 28 Mar 2018 @ 10:49am

    Re: Re: This is an apples-to-oranges comparison, and it's wrong

    "Last I checked in Statistics class a sampling of one is useless."

    Two responses to that.

    First, if we accept that statement, then it is useless in support of the claim "driverless cars are more safe" and equally useless in support of the claim "driverless cars are less safe".

    Second, that's why I suggested approaches that (a) normalize and (b) use many more data points. If -- and I'm fabricating these numbers to illustrate -- driverless cars have been on the road for 4000 hours in Phoenix, then we have substantially more than one data point about them. Of course while that was happening, human-driven cars might have been on the road for 315,000 hours, so we still have the problem posed by the enormous disparity in the raw numbers. But at least we're past the problem of a singular data point.

    What we need to better understand this are the real numbers for both human-driven and driverless cars. I'm working on the former at the moment.
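
    As a minimal sketch of that normalization, reusing the explicitly fabricated exposure figures above (the human incident count below is likewise a placeholder, not data):

        # Compare fleets by incidents per unit of exposure rather than raw counts.
        def incidents_per_100k_hours(incidents, vehicle_hours):
            return incidents / vehicle_hours * 100_000

        driverless_hours = 4_000      # fabricated, from the comment above
        human_hours = 315_000         # fabricated, from the comment above
        driverless_incidents = 1      # the single fatality under discussion
        human_incidents = 50          # placeholder only -- real data needed

        print(incidents_per_100k_hours(driverless_incidents, driverless_hours))  # 25.0
        print(incidents_per_100k_hours(human_incidents, human_hours))            # ~15.9

        # Even normalized this way, a numerator of 1 leaves the driverless
        # figure dominated by chance, which is the disparity being described.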

    link to this | view in thread ]

  114. identicon
    Anonymous Coward, 28 Mar 2018 @ 11:14am

    Re: Re: So how many humans are you willing to kill?

    Uber killed a person with bad driving. Now their license was suspended—temporarily—and they'll need to reapply and convince the DMV they can drive safely. Other self-driving companies can continue operating.

    Is that unreasonable? I'd expect the same if a human killed someone with their car.

    Also see the Ars stories saying that Uber cars are 25 times as deadly as humans, and far less safe than other self-driving cars. OK, sample size is small, but a temporary pause isn't crazy.

    link to this | view in thread ]

  115. identicon
    Anonymous Coward, 28 Mar 2018 @ 11:18am

    Re: Re:

    Several self-driving experts have said it was avoidable, and that other self-driving projects would have done better. The company making the car's built-in anti-collision feature said their software would have stopped it, had it not been disabled. And Uber's video was transparently misleading (poorly lit, no Lidar, etc.); or, if it was accurate, the car couldn't stop within the area illuminated by its headlights, which is plainly unsafe.

    link to this | view in thread ]

  116. identicon
    Anonymous Coward, 28 Mar 2018 @ 11:22am

    Re: Re: Techdirt Big Tech bias

    Who's trying to suppress anything? We're talking about Uber only, not all self-driving cars, and people mostly want it to improve rather than go away.

    link to this | view in thread ]

  117. identicon
    Anonymous Coward, 28 Mar 2018 @ 11:34am

    Re: Re: Re: Re: Re: Re:

    Yes, but how well will lidar handle dark non-reflective clothing? Also, it is mainly a range-measuring device for what the camera can see, so did the software think it was giving the range to a car a few seconds ahead?

    link to this | view in thread ]

  118. icon
    Uriel-238 (profile), 28 Mar 2018 @ 11:48am

    Supporting the claim "driverless cars are more / less safe..."

    The problem is that you're still working with an anomaly. Even if you offset deaths to miles (miles in which someone died in contrast to miles in which someone did not), you're dealing with a data set of one incident.

    All that tells us is that deaths by autonomous cars are not impossible.

    link to this | view in thread ]

  119. identicon
    Anonymous Coward, 28 Mar 2018 @ 11:55am

    Re:

    It's hard to predict a hundred years out. Will they be laughing at the concept of single-occupant vehicles, or the waste of everyone having their own? The existence of roads? The presence of cars on those roads? (As opposed to the sky, tunnels, ...?)

    We could all be riding small pods on tracks, or subway-like hyperloops. Maybe horses and bicycles come back in a major way.

    link to this | view in thread ]

  120. icon
    Uriel-238 (profile), 28 Mar 2018 @ 12:03pm

    Certified vs. uncertified software

    There was a point when flight-certified software was not yet so reviewed, integrated, scrutinized, etc. There was a point it was installed on a plane for the first time.

    Driverless vehicle software is in various stages. Some systems, meant to assist drivers rather than replace them, are installed in production vehicles, such as the Mercedes autonomous cruise control system and Tesla Autopilot.

    The same thing with various algorithmic systems that make trains, power plants and industrial parks not explode.

    (Fun trivia: Engineers originally ran steam engines which were complicated contraptions with a propensity for exploding spectacularly and catastrophically, often killing the engineer and anyone in proximity. Engines got better as we added mechanical, and later electronic regulators to prevent some kinds of catastrophic system failure, to the point that these days we only need a train operator rather than someone with sophisticated engineering training.)

    link to this | view in thread ]

  121. icon
    Uriel-238 (profile), 28 Mar 2018 @ 12:12pm

    Bicycles

    Madeleine L'Engle predicted the power-assisted bicycle in A Wrinkle In Time: not a moped or motorized bicycle, but one whose propulsion engine was small enough that it could be lugged around as needed.

    They've appeared in the last decade and are still early-adopter expensive (and short range).

    Thanks to XKCD I'm fantasizing about computer assisted autogyros. (Airplanes got started, like cars, from bicycles, which in turn got started with horses.)

    link to this | view in thread ]

  122. identicon
    Anonymous Coward, 28 Mar 2018 @ 12:14pm

    Re: Re: This is an apples-to-oranges comparison, and it's wrong

    "Last I checked in Statistics class a sampling of one is useless."

    Is it really "one" here? What about all the times a self-driving car did stop? Or would stop—we don't need to do human testing here, we can feed sensor data to algorithms to see what they'd do.

    The companies should have logs of these things, from real-world tests, closed-track tests, and simulations. Too bad we don't have anything to compare against; no human driver reports that they almost hit someone.
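
    As a rough illustration of that replay idea, the counting itself is simple once the logs and the detector are available. Every name below is hypothetical (real log formats and perception pipelines are proprietary); this only sketches the counterfactual-replay technique:

        # Counterfactual replay: run recorded sensor frames back through a
        # detection routine and count how often it would have commanded a stop.
        # All callables are supplied by the caller; nothing here assumes any
        # vendor's actual API.
        def replay_stops(log_paths, load_frames, detect_obstacles, would_brake):
            stops, total = 0, 0
            for path in log_paths:
                for frame in load_frames(path):
                    total += 1
                    obstacles = detect_obstacles(frame)
                    if obstacles and would_brake(frame, obstacles):
                        stops += 1
            return stops, total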

    link to this | view in thread ]

  123. icon
    Uriel-238 (profile), 28 Mar 2018 @ 12:42pm

    Recommended Reading

    The Ones Who Walk Away From Omelas by Ursula K. Le Guin. In Omelas, Le Guin posits a joyful, healthy, utopian community that is powered by a single forsaken child, a literal kid who is shown no compassion or mercy and seldom lives to her teens. Upon the child's demise, she is replaced with another child given the same treatment, and by this inflicted misery, Omelas' luster, prosperity and tranquility are preserved.

    The story is a classic, commonly inflicted on 7th-to-8th grade English students in the US to make them cry and fiercely discuss morality.

    The notion of such a place is horrific until we realize that we fuel the well-being of our own society with countless miserable lives who suffer and perish with little meaning, whether we work them to death, or pack our prisons with them or throw them into meaningless wars. And the society propped up by all these squandered lives is a far, far reach from paradise or utopia. It's certainly not Omelas.

    So we are evidently willing to kill thousands. Millions.

    And any effort to advance technology to reduce that number is a good investment, if only out of desperation that some day things might be better.

    link to this | view in thread ]

  124. identicon
    Anonymous Coward, 28 Mar 2018 @ 1:55pm

    Self-driving cars may eventually be safer than human drivers, and Waymo is well on its way to getting to that point, but at the moment nothing suggests that Uber is anywhere close to that. When a human recklessly causes an avoidable accident (or drives impaired so there is an increased chance of causing an accident), they will often find their license suspended. Why is it a surprise when the state takes the same approach to Uber, which based on the Ars Technica article caused what should have been an avoidable accident after it somehow failed to notice the pedestrian in the middle of the road?

    link to this | view in thread ]

  125. identicon
    Data Wofl, 28 Mar 2018 @ 2:53pm

    Re: Re: So how many humans are you willing to kill?

    Wrong. That's 1 death in roughly 10 Million miles for robots. Human drivers kill 1 in 100 Million miles. So far with the limited data we have, robots are an order of magnitude less safe than humans. As simple as that.

    link to this | view in thread ]

  126. icon
    An Onymous Coward (profile), 28 Mar 2018 @ 3:23pm

    Re: Re: Re: Re: Re: Re: Re:

    Lidar is much more than that. It is capable of drawing objects in view in near-real time without the need for cameras to give those objects texture. It can "see" for quite a distance, too. There's no reason in the world that the lidar on that car could not have "seen" the pedestrian and reacted as it can also scan and rescan fast enough to detect objects in motion. That's its whole purpose.

    link to this | view in thread ]

  127. identicon
    Coward, 28 Mar 2018 @ 3:45pm

    Re: Re: So how many humans are you willing to kill?

    Just watched the Nvidia GTC keynote. While this technology is incredible and I cannot wait to get into a safe car that is not being driven by a meat computer, that technology is not here yet.

    It takes one billion miles of motoring for 660 accidents with human drivers, so even Google has not done enough miles to be able to say that it is better than a human. I suggest that a benchmark should be 10 million miles, in which case humans would have had a few accidents.

    I think self-driving cars should be completely banned from the road until they have done 10 million miles of simulated driving. By that I mean that the software should be QAed for 10 million miles before being allowed on the road, and if you change the code you need to do it again, sort of like validating drugs. This is an infant industry; it needs to prove that it is better than humans, and we can't risk it failing because some company pushes out an OTA release under time pressure.

    The new Nvidia technology makes this completely possible and car companies need to buy the tech and do the job properly.
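
    Spelling out the arithmetic behind that proposed benchmark, taking the 660-accidents-per-billion-miles figure at face value (a sketch, not verified data):

        # Expected human-driver accidents over the proposed 10-million-mile
        # qualification run, using the commenter's rate.
        human_accidents = 660
        human_miles = 1_000_000_000
        benchmark_miles = 10_000_000

        rate_per_mile = human_accidents / human_miles       # 6.6e-07
        expected = rate_per_mile * benchmark_miles          # ~6.6 accidents

        print(f"expected human accidents in {benchmark_miles:,} miles: {expected:.1f}")
        # A system that finished the same distance with markedly fewer than
        # ~7 accidents would at least clear the bar the comment proposes.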

    link to this | view in thread ]

  128. identicon
    Thad, 28 Mar 2018 @ 3:58pm

    Re: tl;dr

    "Last I checked in Statistics class a sampling of one is useless."

    You've got the right idea, but this isn't even a sampling of one, it's an entire dataset of one.

    link to this | view in thread ]

  129. identicon
    Thad, 28 Mar 2018 @ 4:03pm

    Re: tl;dr

    "So far with the limited data we have, robots are an order of magnitude less safe than humans. As simple as that."

    Except it's not as simple as that, because the limited data we have are so limited that they don't support the comparison you're making.

    https://en.wikipedia.org/wiki/Significance_arithmetic

    https://en.wikipedia.org/wiki/Propagation_of_uncertainty
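
    One way to put numbers on that uncertainty, strictly as a sketch: the exact Poisson interval around a single observed fatality, using the 10-million-mile exposure figure from the comment above purely for illustration.

        # 95% confidence interval for a rate estimated from one observed event,
        # via the standard chi-squared relation for Poisson counts.
        from scipy.stats import chi2

        observed = 1                 # fatalities observed so far
        miles = 10_000_000           # claimed exposure, illustrative only
        alpha = 0.05

        lower = 0.5 * chi2.ppf(alpha / 2, 2 * observed)
        upper = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (observed + 1))

        scale = 100_000_000 / miles  # express as a rate per 100 million miles
        print(f"point estimate: {observed * scale:.0f} per 100M miles")
        print(f"95% interval:   {lower * scale:.2f} to {upper * scale:.1f} per 100M miles")

        # Roughly 0.25 to 56 per 100M miles -- wide enough to contain the
        # ~1 per 100M miles usually quoted for human drivers, which is why a
        # single data point cannot support an "order of magnitude" claim.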

    link to this | view in thread ]

  130. identicon
    Anonymous Coward, 28 Mar 2018 @ 4:43pm

    Re: Re: Re:

    Bad self-driving algorithms can be changed with a software update and can be applied to every car of its kind on the road after just one of them gets in an accident.

    One security snafu, and a malicious update can be applied to every car of its kind on the road simultaneously. And we all know how good at security car makers are.

    link to this | view in thread ]

  131. identicon
    Anonymous Coward, 28 Mar 2018 @ 5:03pm

    Re: Re: Re: Re: Re: Re: Re: Re:

    Their Lidar is not designed to capture a 3D model at close range, but rather to detect and range objects at distances relevant to safe driving. Light speed is about 300,000,000 meters a second; allowing for the return trip, and ignoring pulse length etc., a Lidar or Radar can use fewer than 750,000 pulses per second for a 200 meter range. More range reduces the pulse rate even further, and if you want more frequent scans, your resolution drops as you divide those pulses over multiple scans.

    This makes Lidar useful for detecting, ranging and tracking targets, but it requires a camera to identify them, which maybe makes the camera the prime detector of long-range targets.
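
    For anyone checking that arithmetic, a minimal single-beam sketch of the timing bound being described (real units use many beams, so treat this as an idealized ceiling rather than a spec):

        # A single-beam sensor cannot fire a new pulse until the previous one
        # could have returned from the maximum range, which caps the pulse rate.
        C = 300_000_000  # speed of light in m/s, rounded as in the comment

        def max_pulse_rate(range_m):
            """Upper bound on pulses per second for an unambiguous range,
            ignoring pulse length, processing time and multi-beam designs."""
            return 1 / (2 * range_m / C)

        for r in (100, 200, 300):
            print(f"{r:>3} m range: ~{max_pulse_rate(r):,.0f} pulses/s")

        # 200 m gives ~750,000 pulses/s; splitting that budget across, say,
        # 10 full scans per second leaves ~75,000 points per scan for the
        # whole scene, which is the resolution trade-off described above.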

    link to this | view in thread ]

  132. identicon
    Anonymous Coward, 28 Mar 2018 @ 5:16pm

    Re: Re: Re: Techdirt Big Tech bias

    Since the comment is written by "Fuzzy Math department" I assume it's just trolling.

    link to this | view in thread ]

  133. icon
    Uriel-238 (profile), 28 Mar 2018 @ 5:37pm

    Car security

    I suspect that between failed proprietary device-security and state-mandated backdoors, that's how we'll get right-to-jailbreak (a la right-to-repair) and we'll see an uptick in open-source offerings.

    I suspect once someone willfully murders someone else using an IoT exploit (say, to make a robotic insulin dispenser OD a diabetic victim) our fear of cyber-assassins will become greater than our fear of rebellious / buggy robots.

    That was the point of a recent XKCD post.

    link to this | view in thread ]

  134. icon
    JMT (profile), 28 Mar 2018 @ 5:45pm

    Re: Re: PS: note also the plug for Waymo, GOOGLE subsidiary.

    "AI will never be able to predict individual human behavior."

    Humans will never be able to predict individual human behavior, so we can never expect AI to do so either.

    "The tech will get better over time but I find it odd that we're doing live testing in crowded urban settings already."

    Because you have to test in real-world situations to know whether your isolated testing has actually simulated the real world effectively. And given how complex the real world is, it's never going to be practical to perfect these systems without leaving the testing grounds. This case seems to be more about Uber's poor implementation and Arizona's overly permissive regulations than proof we're doing this too soon.

    link to this | view in thread ]

  135. icon
    JMT (profile), 28 Mar 2018 @ 5:51pm

    Re: Re: Re: Re: Re:

    "The difference here is that it's not a single individual driver you can hold accountable when they screw up. In this case you have to try to hold the corporation behind the tech accountable..."

    But what's the alternative? No individual is ever going to develop a self-driving car, so there will always be a corporation responsible. It's not that different to a conventional car with a design flaw that proves fatal. It's not an individual engineer that (maybe) gets held to account, it's the company with their name on the back.

    link to this | view in thread ]

  136. icon
    JMT (profile), 28 Mar 2018 @ 5:59pm

    Re: Re: Re: Re:

    The difference in lighting is so dramatic it's very unlikely to have been a lighting failure. Even with no street lights at all the car's headlights should've provided far more range than the video showed. It's the video that's suspect here, not the lighting.

    link to this | view in thread ]

  137. icon
    JMT (profile), 28 Mar 2018 @ 6:11pm

    Re:

    "The person was dumb enough to walk right in front of a car in the dark, expecting it to stop. That's natural selection at work."

    That might sound pretty harsh, but in all the talk about this incident there has been very little said about the fact that the car would've been just as visible to the pedestrian as the pedestrian supposedly should've been to the car. And on the road, the car wins. She deserves just as much blame as Uber does.

    link to this | view in thread ]

  138. icon
    JMT (profile), 28 Mar 2018 @ 6:14pm

    Re: Re: Re: Re:

    Humans are quite capable of writing software to do things better than we do. It's literally why we do it.

    link to this | view in thread ]

  139. icon
    JMT (profile), 28 Mar 2018 @ 6:16pm

    Re:

    Tee Lee, like most people making these claims, doesn't seem to understand how statistics work. One data point is pretty much worthless for analysis.

    link to this | view in thread ]

  140. icon
    JMT (profile), 28 Mar 2018 @ 6:17pm

    Re: Re:

    Ugh, brain fart... Tim Lee

    link to this | view in thread ]

  141. icon
    JMT (profile), 28 Mar 2018 @ 6:19pm

    Re: Re: Re: Techdirt Big Tech bias

    Your second paragraph proves why your first is wrong. It *is* a misrepresentation, because statistically one data point can't represent anything.

    link to this | view in thread ]

  142. icon
    PaulT (profile), 29 Mar 2018 @ 12:52am

    Re: Re: Re: Re: Re: Re: How do we deal with a machine kills a human?

    What you say is true... yet it's likely that the software-driven systems will still be safer than the current traffic system.

    "we should not be testing this experimental technology in crowded urban areas"

    The single death being discussed happened near a public park while there were few people around. The problem is, the thing that caused the crash was erratic human behaviour, and you're not going to account for that with lab testing. You have to expose your systems to the real world.

    link to this | view in thread ]

  143. icon
    PaulT (profile), 29 Mar 2018 @ 1:00am

    Re: Re: Re: Re:

    Only if you're deliberately dishonest about the statistics. You must be so glad, lying about technology that could prevent thousands of deaths a year because a single other death happened in a way that allows you to lie.

    link to this | view in thread ]

  144. icon
    PaulT (profile), 29 Mar 2018 @ 1:06am

    Re: Re: Re: Techdirt Big Tech bias

    "If there were 4000 times as many self-driving cars (to get to 2.4m) and the kill rate holds"

    ...but there's no evidence that it would hold. You are misrepresenting the data if you don't admit that you're extrapolating a single data point, which is a dishonest thing to do and will not give you anything approaching a valid prediction.

    "the above isn't misrepresentation, just incomplete data."

    If you present it without some caveat about your guesswork, then you're just making shit up. It's complete misrepresentation without you providing the context.

    link to this | view in thread ]

  145. icon
    PaulT (profile), 29 Mar 2018 @ 1:10am

    Re: Re: Re: Re: Re: This is an apples-to-oranges comparison, and it's wrong

    "Do note by using the theoretical maximum that I suggested that this stacks the deck *against* my point"

    Yes, but most would not be so generous.

    "There are ZERO reasons to think that cars will be any better than toasters, and a lot of reasons to think that they'll be worse."

    But, will they still be better than drunk, distracted, recklessly driving human beings? I believe they will. If so, this is a reason to exercise caution, not to remove the tech.

    I look forward to a detailed overview, but my general thinking is very simple - while the tech has many issues that need to be addressed, it will be an improvement. I hope an analysis will cover these and explain why you think it won't be a net benefit, but all I'm seeing so far is a lack of trust in the tech.

    You're right to be cautious. I just think we need something more as a reason not to explore this kind of tech, which I do believe will be a net benefit once it matures.

    link to this | view in thread ]

  146. icon
    PaulT (profile), 29 Mar 2018 @ 1:32am

    Re: Re: Re: This is an apples-to-oranges comparison, and it's wrong

    "First, if we accept that statement, then it is useless in support of the claim "driverless cars are more safe" and equally useless in support of the claim "driverless cars are less safe"."

    Yes. So, we ignore that single data point. What do the rest of the statistics show once that outlier has been removed? One claim is supported more than the other, I'm sure you'll find, although we do still need more data to be confident. The only way to get that data is to continue public testing.

    Besides - data IS available on hours travelled, accidents, incidents requiring human intervention, etc. I'm not sure how complete it is in the public record, but there's certainly more than one data point available surrounding other activities.

    The only thing that is a single data point, and the one people are freaking out about, is the single death that's ever happened in a collision involving one of these vehicles. So, we shouldn't be treating that as the all-important issue.

    link to this | view in thread ]

  147. icon
    PaulT (profile), 29 Mar 2018 @ 1:34am

    Re: Re: Re: So how many humans are you willing to kill?

    "Is that unreasonable? I'd expect the same if a human killed someone with their car."

    No, it's not unreasonable. The people calling for the entire sector of tech development to be discontinued due to a single incident are the unreasonable ones.

    "Also see the Ars stories saying that Uber cars are 25 times as deadly as humans"

    The ones that lie with statistics manipulation to advance a certain narrative? No matter how you spin it, that claim is a ridiculous spin aimed at scaring the ignorant.

    link to this | view in thread ]

  148. identicon
    Anonymous Coward, 29 Mar 2018 @ 2:14am

    Re: Re: Re: So how many humans are you willing to kill?

    And if the robots go on to rack up another 90 million miles and counting without another fatality, they become safer than humans.

    link to this | view in thread ]

  149. identicon
    tracyanne, 29 Mar 2018 @ 5:28am

    Re: Re: Basically

    Queensland actually, probably similar driving styles. I hope Arizonans don't have the same problem navigating roundabouts as Queensland drivers seem to have.

    link to this | view in thread ]

  150. identicon
    Anonymous Coward, 29 Mar 2018 @ 5:36am

    Re: Re: Basically

    my point actually.

    The way I see it people are just plain stupid. If you program the AI in such a way that it avoids people being stupid, people will take advantage of that, and self-driving cars will get nowhere fast.

    Program the AI to be less accommodating, and I suspect the accident rate will go up to the point that it makes little difference whether there's an AI or a human driving.

    Create AI-only roads, and we're back to railways.

    link to this | view in thread ]

  151. icon
    PaulT (profile), 29 Mar 2018 @ 6:20am

    Re: Re: Re: Basically

    "The way I see it people are just plain stupid."

    All the more reason to get them out of the way as much as possible. Not being able to predict the stupid things people will still do despite the tech is not a reason to avoid implementing the tech.

    "create AI only roads, and we're back to railways."

    Which, with the added bonuses of not having to fit around a predetermined schedule and being able to choose the start/end points of the journey, would be absolutely fine for the needs of most people.

    Rail systems work pretty well as mass transit systems through most of the world. In the US, the problems with them are often due to lower population density and entire cities being designed with the assumption everyone will have a car. One of the main problems getting people to switch to public transport is the travel between them and the station/hub they need to use to get to it; the other is trying to fit around a predetermined timetable. If you can use special roads and get the travel done door-to-door and at the time the traveller themselves wishes, that's more than sufficient for most commuter traffic.

    There's still use cases where the use of a standard vehicle might be preferable, but if all you're trying to do is get from A to B and the traffic flow can be reliably optimised, then a "railway" might be all people actually need.

    link to this | view in thread ]

  152. identicon
    Anonymous Coward, 29 Mar 2018 @ 10:01am

    Re: Nothing to see here

    lol funny how everyone avoided this particular comment.

    link to this | view in thread ]

  153. identicon
    Thad, 29 Mar 2018 @ 12:41pm

    Re: tl;dr

    "Also see the Ars stories saying that Uber cars are 25 times as deadly as humans"

    Would that be the article where the author clarified that claim after about a dozen people in the comments pointed out that the data -- or rather, datum -- we have is not significant enough to support it?

    link to this | view in thread ]

  154. identicon
    Thad, 29 Mar 2018 @ 12:43pm

    Re: tl;dr

    They do, but we don't actually have many roundabouts here, so it's mostly not an issue.

    link to this | view in thread ]

  155. identicon
    Thad, 29 Mar 2018 @ 12:47pm

    Re: tl;dr

    "That might sound pretty harsh, but in all the talk about this incident there has been very little said about the fact that the car would've been just as visible to the pedestrian as the pedestrian supposedly should've been to the car."

    I...can't speak for where you live, but I can state with some confidence that in Tempe, most pedestrians are not equipped with LIDAR.

    link to this | view in thread ]

  156. identicon
    tracyanne, 30 Mar 2018 @ 1:28am

    Re: Re: Re: Re: Basically

    Actually the point I meant to make there was that because people are basically stupid, creating AI only roads (Railways) won't stop people getting killed by AI vehicles. Plenty of people already get themselves killed by trains, by simply ignoring the fact that trains don't stop easily, and ignoring signs warning them of the danger.

    But yeah it would probably be a better way to utilise AI vehicles.

    link to this | view in thread ]

  157. icon
    JMT (profile), 30 Mar 2018 @ 3:16pm

    Re: Re: tl;dr

    You can't see a car coming towards you at night with its headlights on? Really?

    link to this | view in thread ]

  158. icon
    Uriel-238 (profile), 30 Mar 2018 @ 5:14pm

    Pedestrian in the headlights.

    I'm still wondering about that part. Herzberg is unfortunately not alive to ask if she saw the approaching Uber car.

    BART trains in the Bay Area have LIDAR to monitor the tracks and brake if an obstacle is detected, and this system is (by magnitudes) faster than the reflexes of train operators. Still, occasionally we'll get a suicide that leaps in front of a BART train.

    link to this | view in thread ]

  159. identicon
    Annonymouse, 30 Mar 2018 @ 9:58pm

    Re: Basically

    Option 3 is called a railroad and stupid humans have been killed on those even if the train was stopped

    link to this | view in thread ]

  160. icon
    PaulT (profile), 31 Mar 2018 @ 1:09am

    Re: Re: Re: Re: Re: Basically

    "creating AI only roads (Railways) won't stop people getting killed by AI vehicles"

    Not completely, but a lot of deaths will still be prevented. One of the silliest arguments you can make on this issue is that it shouldn't be attempted because the results won't be perfect. No technology is perfect, but the results will still be there to see even when mistakes happen.

    link to this | view in thread ]

  161. icon
    PaulT (profile), 31 Mar 2018 @ 1:16am

    Re: Re: Nothing to see here

    Because it's not particularly relevant to the accident, since the fault clearly was due to the pedestrian's actions and not a failure by the car.

    I'll guess there's some hyperbole in there too, but I don't see the point of doing the research to answer another tiring post by someone desperate to pass blame on to Uber rather than examining what really happened.

    link to this | view in thread ]

  162. identicon
    Andrew D. Todd, 1 Apr 2018 @ 11:43am

    Re: Re: Re: Re: Re: Re: Basically

    The simplest way to do it is to use transporter vehicles.

    1. An automobile drives onto a transporter vehicle, parks, and thereby becomes baggage. Any old automobile will do-- you don't need a Tesla.

    2. The transporter vehicle carries the automobile to a location close to the final destination.

    3. The automobile drives off the transporter vehicle, and proceeds to its final destination, subject to a speed limit low enough that it can be safe.

    The transporter vehicles are specific to one improved road system. They do not present compatibility problems. New York and Los Angeles do not have to use the same kinds of vehicle transporters. The transporter vehicle systems can grow organically, adding one route at a time, and taking over one additional lane at a time.

    An automobile can drive off one transporter vehicle and onto another, just as a railroad passenger changes trains at a station.

    For a long time, there have been "auto trains" of two basic kinds. One is for carrying automobiles through railroad tunnels, e.g. the English Channel tunnel and the various Alpine tunnels. The passengers drive their cars onto the train, and stay in the cars. Road tunnels are much harder to build than railroad tunnels, and the road tunnel under a given Alpine pass was typically built fifty or a hundred years after the rail tunnel.

    The other kind of auto-train is for going long distances, typically from the winter to the sun. The American Auto-Train runs from Northern Virginia to Orlando, Florida, a matter of eight hundred and fifty miles, and I understand there are similar services from Germany to Italy. Passengers have their cars loaded onto "auto-racks," and then move into sleeper cars for an overnight journey. The French system is different. Passengers check their automobiles as baggage, but then take a high-speed train, at anything up to two hundred miles an hour, and stay in hotels until their automobiles, hauled on ordinary trains, catch up with them.

    link to this | view in thread ]

  163. identicon
    Anonymous Coward, 1 Apr 2018 @ 3:21pm

    Re: Pedestrian in the headlights.

    Really? Do you know how far is required to stop a train? There is not a lot of margin between when the lights flash and the barrier comes down at a level crossing, and the approaching train passing the signal that would tell it to stop at that crossing. That is not the signal at the level crossing itself but the one further up the track, telling the driver to stop at the next signal.

    link to this | view in thread ]

  164. icon
    Uriel-238 (profile), 1 Apr 2018 @ 4:18pm

    Re: Re: Pedestrian in the headlights.

    BART is light rail, and has its own proprietary standards. Still, they're trains and they can't stop on a dime. But BART is pretty proud of how difficult it is to get run over by a train when one isn't willfully trying. I think it's less about braking power and more about early and thorough detection.

    link to this | view in thread ]

  165. identicon
    Thad, 2 Apr 2018 @ 10:00am

    Re: Re: Re: Nothing to see here

    Blame isn't a zero-sum game; it's possible to assign blame to multiple parties. The victim shouldn't have crossed the street where she did; the safety driver should have had her eyes on the road; the car should have recognized an obstacle and swerved or braked. And that's just for starters. As the article notes, the Phoenix area's high number of pedestrian fatalities indicates a failure of city planning. It's possible that the speed limit should be lower on that stretch of road, or that other mitigating factors could be implemented. (That section of Mill also curves, which isn't really a city planning problem, it's a geography problem; there's a mountain that it has to curve around.)

    It's also possible that Arizona's lack of regulations for AVs may be partially to blame, though I'm not prepared to say that for certain before the investigation concludes. This may be an issue where additional regulation could have prevented whatever issue that resulted in the car failing to brake or swerve, or it may not.

    Regardless, there were multiple points of failure here. There usually are.

    link to this | view in thread ]

