Do Robots Need A Section 230-Style Safe Harbor?
from the future-questions dept
Forget Asimov's three laws of robotics. These days, the question is what human laws robots may need to follow. Michael Scott points us to an interesting, if highly speculative, article on the legal issues robots raise, asking whether a new arena of law will need to be developed to handle liability for actions taken by robots. The obvious question is who would be liable: those who built the robot? Those who programmed it? Those who operated it? Someone else? The robot itself? While the article goes a little overboard at times (claiming there's a problem if teens program a robot to do something bad, since teens are "judgment proof" due to a lack of money -- which hardly stops liability from attaching to teens in other suits), it does make some important points.

Key among them is that if liability is too high for the companies doing the innovating in the US, the industry could end up developing elsewhere. As a parallel, the article brings up the Section 230 safe harbors of the CDA, which famously protect service providers from liability for the actions of their users -- noting that this is part of why so many more internet businesses have been built in the US than elsewhere (there are other issues too, but such liability protections certainly help). So, what would a "Section 230"-style liability safe harbor look like for robots?
Thank you for reading this Techdirt post. With so many things competing for everyone’s attention these days, we really appreciate you giving us your time. We work hard every day to put quality content out there for our community.
Techdirt is one of the few remaining truly independent media outlets. We do not have a giant corporation behind us, and we rely heavily on our community to support us, in an age when advertisers are increasingly uninterested in sponsoring small, independent sites — especially a site like ours that is unwilling to pull punches in its reporting and analysis.
While other websites have resorted to paywalls, registration requirements, and increasingly annoying/intrusive advertising, we have always kept Techdirt open and available to anyone. But in order to continue doing so, we need your support. We offer a variety of ways for our readers to support us, from direct donations to special subscriptions and cool merchandise — and every little bit helps. Thank you.
–The Techdirt Team
Filed Under: liability, robots, safe harbors, section 230
Reader Comments
We're doomed!
All of the above.
I can tell you what the copyright industry would say: "All of the above." And since they seem to run the government that's probably the way it'll be.
Re: I, Robot
do robots dream of electric sheep?
the three laws
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I think if we could hard-program that in, there would be no reason to have a Section 230 for robotics.
Asimov had this all planned out over 60 years ago. Our legal system is too blinded by money, er, "justice", to see that something simple works.
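Those three rules could, mechanically, be written as an ordered veto chain. A minimal sketch (hypothetical types; it generously assumes the predicates, like "would this harm a human," are computable at all, which is the actual hard part):

```cpp
// Sketch only: Asimov's laws as an ordered veto chain. The predicates
// (harmsHuman etc.) are assumed magically computable -- in reality,
// evaluating them reliably is the unsolved problem.
enum class Verdict { Allowed, VetoedLaw1, VetoedLaw2, VetoedLaw3 };

struct Action {
    bool harmsHuman;      // Law 1: would this action injure a person?
    bool violatesOrder;   // Law 2: does it disobey a human's order?
    bool endangersRobot;  // Law 3: does it risk the robot itself?
};

Verdict evaluate(const Action& a) {
    if (a.harmsHuman)     return Verdict::VetoedLaw1;  // outranks everything
    if (a.violatesOrder)  return Verdict::VetoedLaw2;  // yields to Law 1
    if (a.endangersRobot) return Verdict::VetoedLaw3;  // yields to Laws 1-2
    return Verdict::Allowed;
}
```

The chain itself is trivial; everything interesting hides in how those three booleans would ever get set correctly.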
Re: the three laws
Anyway, I'm not the only "hobby robotics guy" out there pushing the envelope for what machines can do. Just because it's not under military contract, don't assume it's not a very capable machine.
Re: Re: the three laws
I think many here today do not understand what you said.
Re: the three laws
Yeah, good luck with that. Seriously, "hard coding" the three laws is a huge crock that every media outlet and every movie treats as some magical solution for every robot. There's no "don't hurt the human" command in C++, last time I checked.
Every time an industrial accident has been caused by a robot or an automated system, it was because the system wasn't aware that it was hurting a human. No one programs the robot to move the crane through someone's head; it just happens, because the capability does not yet exist for a robot to be aware of what it's doing. Sure, we can put sensors and safeties all over the place, but they're the same damn thing as the rest of the machine: the computer reads inputs, processes data, and controls actuators.
Until a computer can be self-aware -- something that isn't going to happen for at least the next 30 years, if not more -- we aren't going to be able to make robots obey magic three laws.
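That read-inputs/process-data/control-actuators loop can be made concrete. A hypothetical sketch (invented sensor names, not any real robot's API) showing that a "safety" is just another threshold check, not an understanding of harm:

```cpp
#include <algorithm>

// Hypothetical control tick: the "safety" is a numeric threshold,
// not awareness. The robot stops because a number crossed a line,
// not because it knows a person is there.
struct Inputs {
    double proximityCm;   // distance reported by a proximity sensor
    bool   eStopPressed;  // emergency-stop button state
};

// One tick of the loop: read inputs, apply interlocks, command a speed.
double controlTick(const Inputs& in, double requestedSpeed) {
    if (in.eStopPressed)       return 0.0;  // hard interlock: cut motion
    if (in.proximityCm < 30.0) return 0.0;  // something (or someone) too close
    return std::clamp(requestedSpeed, -1.0, 1.0);  // obey within speed limits
}
```

Whether the obstacle at 29 cm is a pallet or a person is invisible to this code, which is exactly the commenter's point.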
Re: Re: the three laws
A robot is so much more.
And it wouldn't be that hard to program in a "do not hurt humans" function in C++. It is just another class.
Re: Re: the three laws
AC - Where have you been?
MS robotics studio actually does have that command!!
:)
Asimov doesn't account for accidents.
Liability will play a role in that universe of course.
And there is the fact that any machine can be reprogrammed, its security can be bypassed, and a lot of other factors come into play.
Could an exoskeleton malfunction while you are carrying an old guy, and crush something or drop the guy on the floor?
Who would be liable? The guy using the suit?
Personally, I think autonomous and semi-autonomous robots should pose no litigation risk to any person who did not directly or indirectly try to cause harm to another person or property.
More seriously, though, robots aren't that different from any other electrical appliance, so the legalities should be the same. If a robot malfunctions, it's no different from a washing machine malfunctioning. If a robot catches a virus or has a bug, it's no different from any other software disaster. If somebody programs a robot to kill their wife, it's no different from killing her some other way (ah yes, Murder, She Wrote with robots).
Perhaps robots will need a "black box" like airplanes that records everything that happens. Also, a big off switch might be nice.
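A robot "black box" along those lines could be little more than a fixed-capacity ring buffer that always retains the most recent events, roughly the way a flight data recorder overwrites old data. A hypothetical sketch:

```cpp
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical event recorder: fixed capacity, oldest entries are
// overwritten first, so the last moments before an incident survive.
class EventRecorder {
public:
    explicit EventRecorder(std::size_t capacity) : buf_(capacity) {}

    void record(const std::string& event) {
        buf_[next_ % buf_.size()] = event;  // overwrite the oldest slot
        ++next_;
    }

    // Returns the retained events, oldest first.
    std::vector<std::string> dump() const {
        std::size_t count = std::min(next_, buf_.size());
        std::vector<std::string> out;
        for (std::size_t i = 0; i < count; ++i)
            out.push_back(buf_[(next_ - count + i) % buf_.size()]);
        return out;
    }

private:
    std::vector<std::string> buf_;
    std::size_t next_ = 0;  // total events ever recorded
};
```

The big off switch is the easier half; deciding which events are worth recording (commands received, sensor snapshots, operator overrides) is the real design question.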
Re:
Sure, Asimov was a sci-fi writer, but "Philosophers of Artificial Intelligence" aren't to be taken any more seriously. There's nothing that makes their opinions on the subject any better than Asimov's.
Show me an engineer with a law degree (or a lawyer with an engineering degree) and I'll listen.
Re:
First off, science fiction authors write about things they wish to happen, and 'philosophers' of AI write about things they wish to happen... Looks to me like neither should be taken seriously -- or maybe both should be. But tell me one thing, technology-wise, that has been invented in, say, the last hundred years that wasn't originally dreamed up by some science fiction writer. On top of that, show me something these 'philosophers' have done that is in use and not some college project waiting for the next DARPA handout.
Re:
Wow. What an ignorant and egotistical statement.
Those who write stories, not just fiction, have many good ideas. Maybe you should try to read a few of them.
Do Robots Need A Section 230-Style Safe Harbor?
Re: Do Robots Need A Section 230-Style Safe Harbor?
Let's realize what this article is all about. The person who wrote it wants attention, and intentionally overlooked every rational answer to her own question so that she could write about it as if she had just thought up some amazing new ethical dilemma. It's really just pretty whiny.
Re: Re: Do Robots Need A Section 230-Style Safe Harbor?
Mike isn't fond of stories that go directly against his mantras.
Really, I can't see any definitive way we could make a robot know it was (possibly) hurting someone. The main problem is that it's both hardware and software; making the two act in unison is hard enough.
My point is, in the book, the computer essentially breaks its own code and self-destructs. Once you program in a hard set of rules and then let the unit become aware of those rules, it knows there are limitations and will eventually find ways to break them (ask any teenager).
A Section 230-style protection for the designers would definitely be needed, especially with a self-aware machine, as it will have the ability to go far beyond what the original designers planned, or even hoped to control.
When they (the bots) finally get to that level, they will have to be responsible for their own actions.
senshikaze suggests that the Asimovian laws are all that are needed, but when my robotic lawn mower loses its bearings and mows down his prize-winning bonsai garden (without ever violating an Asimovian law), he might reconsider.
I can guarantee there will be moral panics where people will demand laws against "robostalking", "robobullying", etc., when in fact these are just stalking and bullying with the robot as a tool. And people will undoubtedly sue robot manufacturers when robots do what their owners told them to do. So I'm sure that some sort of safe harbor will be needed to protect manufacturers from the actions of users.
The first wave of robots will give the rest a bad name.
Liability
[I personally have no opinion on this. I, for one, welcome our future cybernetic overlords and wish to please them in any way I can.]
Blame the entire universe on the digital equivalent of two black youths. It's worked for years, so why not take it digital? Section 230 is just digital SODDI ("some other dude did it").
Re:
I'm afraid that you appear to be quite confused over how Section 230 works. The issue is not about avoiding liability, but about properly placing the liability on the correct party. There is nothing in Section 230 that allows for avoidance of liability by the parties actually involved in the action.