There is no dissonance if one of the two competing "ideas" is a lie fabricated to protect the other. He gets paid, he does his "job" for the telcos. That just happens to involve lying to the nation about why he's doing it.
They don't even have to yell racial slurs. McDonald's will kick you out if you haven't bathed in weeks, yell incomprehensible babble, sit and talk to your cheeseburger for four hours, or do anything else they don't like or that breaks their "rules". They are under no obligation to provide anything to anyone, despite being a "public business".
Lidar is much more than that. It is capable of drawing objects in view in near-real time without the need for cameras to give those objects texture. It can "see" for quite a distance, too. There's no reason in the world that the lidar on that car could not have "seen" the pedestrian and reacted, as it can also scan and rescan fast enough to detect objects in motion. That's its whole purpose.
...the argument that Huawei (and other Chinese network vendors) are rampant national security threats, while ignoring we routinely engage in the same or worse behavior...
Maybe it is protectionism. Maybe it's guarding against a legitimate threat. We can't and don't know until we're shown the evidence you point out is absent. The absence, so far, of that evidence is not proof this is protectionism.
But even if it is, the heavy rhetoric about the US engaging in the same behavior is pure whataboutery. Argue the issue on its own merits. If it stands on its own then great. But the way this article is written is pure sensationalism.
I don't get how the ratios are a misrepresentation. If there were 4000 times as many self-driving cars (to get to 2.4m) and the kill rate held, there would have been 4000 deaths due to autonomous cars in the same period. Compared to 962 from human-driven vehicles, that's a pretty abysmal rate.
Granted, we don't have nearly enough data on self-driven cars to draw any solid conclusions but the above isn't misrepresentation, just incomplete data.
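The back-of-the-envelope scaling above can be written out explicitly. All numbers are the comment's own; treating the observed self-driving toll as a single fatality is an assumption of this illustration:

```python
# Illustrative sketch of the comment's scaling argument.
# Assumption: one self-driving fatality observed in the period.
self_driving_deaths = 1      # observed fatalities (assumed)
scale_factor = 4000          # hypothetical fleet multiplier from the comment
human_driven_deaths = 962    # human-driven fatalities in the same period

projected = self_driving_deaths * scale_factor
ratio = projected / human_driven_deaths

print(projected)             # 4000 projected fatalities at the same rate
print(round(ratio, 1))       # roughly 4.2x the human-driven toll
```

As the comment itself concedes, a sample of one fatality makes the extrapolation statistically meaningless; the sketch only shows the arithmetic, not a conclusion.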
I've been working as a software engineer for close to 3 decades. I've worked on embedded systems, including medical devices, where bugs are intolerable. And yet... I avoid most tech, particularly that which presents a hazard to life and limb, as I *know* there is no such thing as bug-free software.
As we've entered the world of "Deploy it quick! We can patch it later." software quality has gone down the crapper. Most embedded systems, the very type of software that should be least prone to such problems, are riddled with issues (to wit: IoT, including vehicles). There is no reason in the world anyone should trust any software-driven system anymore. Given that vehicles are basically guided cannonballs, we should be especially careful with how they're deployed.
No, I'm not a fan of this tech but I do see it is inevitable. Some day we'll get there but for right now we should not be testing this experimental technology in crowded urban areas.
If there was malice then it was no accident, by definition. It's not machine malice we should be afraid of; it's AI's inability to navigate a world full of unpredictable humans. Despite humans being murderous morons, AI is still worse at this than humans are.
The difference here is that it's not a single individual driver you can hold accountable when they screw up. In this case you have to try to hold the corporation behind the tech accountable... Good luck with that.
There may be no "finish line" but there is a point when the general public learns to trust automated vehicles more than human-operated vehicles. Until we get there the corporations ought to be forced to bond their testing so there is a readily accessible fund when they inevitably screw up. Something easier to access than having to sue some entity with massively deep pockets.
Apart from your churlish insistence on name-calling, I somewhat agree with you on this specific point. Self-driving capabilities are still in their infancy and there *will* be more fatalities. AI will never be able to predict individual human behavior. As long as there are humans anywhere near where self-driving vehicles operate there will be problems.
The tech will get better over time but I find it odd that we're doing live testing in crowded urban settings already. We're just not there yet, it's still experimental (to wit: the useless human "driver" just in case).
The same is true of Google.
Abide by the rules or GTFO.