Luddite Redux: Don't Kill The Robots Just Because They Replace Some Jobs
from the first-do-no-harm dept
Here are a couple points to ponder:

Fun fact #1: California prison guards are expensive.

Fun fact #2: South Korea's getting robot prison guards.

I'm sure the prisoners welcome their new robot overlords, but I bet the prison guards union doesn't. Or any other union, for that matter. And they're not alone. Over the past few weeks, tech industry commentators have spent slightly more time than usual wringing their hands over whether technology is killing jobs. I think this video captures the debate pretty well.
This isn't a theoretical issue. Automation and efficiency have always threatened certain jobs and industries -- and one of the standard reactions is to blame the technology itself and seek to hinder it, frequently through over-regulation. The extreme version of this is where the term "Luddite" came from: an organized effort to attack more efficient technology, which resulted in violence against the machines. More typical were overly burdensome regulations, such as "red flag laws," which said automobiles could only be driven if someone walked in front of them waving a red flag to "warn people" of the coming automobile. Supporters of such laws, like supporters of secondary liability for robots, can and will claim that there are "legitimate safety reasons" for them, and that holding back innovation and extending the lifetime of obsolete jobs is merely a side benefit. But like those red flag laws, applying secondary liability to robotics would significantly hinder a key area of economic growth.
Techdirt has covered the question of a secondary liability safe harbor for robots before, and Ryan Calo has written a great paper about the legal issues emerging from robotics. But an even more important (and specific) point is exactly why these safe harbors matter for job creation -- even as some continue to argue the opposite: that such safe harbors will destroy jobs.
Technology has been replacing human labor since humans invented, well, technology. But while technology may get rid of inefficient jobs, it eventually creates replacements. To cite one commonly used example, the automation of telephone switching put operators out of a job, but it created plentiful new jobs for telemarketers (and other businesses that relied upon the phone network... including everything built on and around the internet today). The problem is that while it was obvious how many operators would be out of a job, it wasn't immediately clear how lucrative (or annoying) telemarketing could be, let alone that the phone lines would eventually be transformed into a vast global information-sharing network, with hundreds of millions of new jobs created because of it.
Erik Brynjolfsson and Andrew McAfee examine this problem in detail in their book, Race Against the Machine, which I recommend. Much of it boils down to this: technology creates jobs, but it's not obvious where the new jobs are, so we need bold, persistent experimentation to find them:
Parallel experimentation by millions of entrepreneurs is the best and fastest way to do that. As Thomas Edison once said when trying to find the right combination of materials for a working lightbulb: "I have not failed. I've just found 10,000 ways that won't work." Multiply that by 10 million entrepreneurs and you can begin to see the scale of the economy's innovation potential.

This is especially important for robotics. It's obvious how robots make certain jobs obsolete -- e.g., driverless cars don't need drivers -- but it's less clear what new job opportunities they open up. We need to try different things.
Unfortunately, secondary liability creates problems for robot manufacturers who open up their products for experimentation. Ryan Calo explains this in more detail, but the basic problem is that, unlike computers, robots can easily cause physical harm. And under product liability law in most states, when a product causes physical harm to person or property, everyone involved in its manufacture and distribution can be held legally liable.
Ideally, we'd want something like a robot app store. But robot manufacturers would be unwilling to embrace commercial distribution of third-party apps if it increased their chances of being sued. There's evidence that Section 230's safe harbors (and, to some extent, the DMCA's safe harbors) play a key role in facilitating third-party content on the web. Absent a similar provision for robots, manufacturers are more likely to limit their liability by sticking to single-purpose robots or simply locking down key systems. That's fine if we know exactly what we want our robots to do -- e.g., replace workers. But if we want robots to create jobs, it'd help to limit secondary liability for the robotics industry, open things up, and let widespread experiments happen freely.
Filed Under: automation, human labor, jobs, robots, secondary liability, unions
Reader Comments
No story here...
Another link
One more
It looks like the WSJ post is gone.
Re: No story here...
http://www.huffingtonpost.com/2011/04/21/california-prison-guards-_n_852075.html
http://search.yacy.net/
It will never work
You can't scare robots, you can argue with them, you can pound and pound and they will be just there in front of you until your anger goes away.
Of course, it depends on how you program them; they could also be ruthless and use deadly force for no good reason at all, which could increase aggressive behavior in an enclosed, highly stressful environment.
The thing is, robots are nowhere near those capabilities yet. They can be teleoperated, though; we have the hardware to do it, but we don't have the AI to make it a reality.
It is possible to do those things, because if it were impossible, humans wouldn't be able to do them either.
I would not support limiting liability for a single industry like that. They can play by the same rules as everyone else.
Re:
Are you sure? You wouldn't program the robot to act a little differently towards someone who is a chronic problem?
"robots don't use more force than they are suppose too because they lack emotions,"
Well, they won't use more force because of emotions. But they might use more force because they LACK emotions such as compassion. They might also use more force due to a less intuitive grasp of the situation, programming errors, malfunctions, etc.
"You can't scare robots, you can argue with them, you can pound and pound and they will be just there in front of you until your anger goes away."
Somehow I don't think this would be good for the prisoners on a psychological level.
You can't assume a perfect AI. You can say it's "possible" to make one, but that doesn't mean we're actually going to be able to do that in the next hundred years...
Re:
In contrast, Apple isn't liable when an iOS app wipes out all your data. And Internet companies get special safe harbors under Section 230 and the DMCA.
Under existing law, robots are treated more like cars than smartphones. The proposal is that, once you start installing apps on your robot, it makes more sense to flip that.
I once heard Isaac Asimov speak in the early '80s. At the time, the Japanese had just started introducing robots onto the auto assembly line, and reporters were calling up Asimov for comment, since he had coined the term "robotics". The article above hints at the future Asimov wrote about in all his books, and he even alluded to it in his lecture: when robots can replace humans, humans can finally move on to do more important things... but wait, robots are replacing humans! It's the paradox of efficiency: the more work you have taken away, the less work you have to do.
What makes this article so interesting to me is that it shows why secondary liability protection becomes so important once the "worker" wades closer into tort law. A telephone switchboard can't hurt anyone, and it put many, many people out of a job. But a robot worker whose laser can slice you in half? Yeah, problem.
Do those automated Predator drones have secondary liability protections?
Re: Re:
One plus for robot prison guards is that they're easier to fix. Suppose a robot does make a mistake and uses excessive force. Once a programmer identifies what went wrong, the fix can be pushed to all of the other robots very quickly. In contrast, remedying police brutality requires extensive training, and a lot of what appears to be excessive force may really be a gut self-protective instinct on the part of the officer that's very hard to figure out.
Will we replace all cops with machines? Probably not; you want a human to have final say over the use of force, for Isaac Asimov-type reasons. But I wouldn't be surprised if, in 30 years, we saw a 3-to-1 ratio of robots to humans in corrections and law enforcement.
Re:
Doubtful.
But suppose the cops took a military-grade Predator drone and installed their own custom software on it, and... bad things happened. I wouldn't hold the Predator manufacturer liable for that just because it let people install custom software on the drones. It might be a different story if the manufacturer were actively involved in making the custom software.
Re: Re: Civilian Drones
http://seattletimes.nwsource.com/html/nationworld/2016882681_drones29.html
Police agencies want drones for air support to find runaway criminals. Utility companies expect they can help monitor oil, gas and water pipelines. Farmers believe drones could aid in spraying crops with pesticides.
"It's going to happen," said Dan Elwell, vice president of civil aviation at the Aerospace Industries Association. "Now it's about figuring out how to safely assimilate the technology into national airspace."
not defective?
I think the link isn't quite the right one - it goes to a section on liability for defective products, which isn't quite the point you're making?
In any case, for non-defective robots I would hope that lawsuits focus more on the user than the manufacturer. If they didn't, I'm amazed that you are still able to buy guns, cars, and even hammers in the US - after all, they are surely used in causing harm every year.
I'd rather interact with a robot
One would hope the robots are not programmed to be sadistic.
With humans you get variability, and some of the guards are going to be worse human beings than the best humans who are locked up. And robots are not going to have emotions get in the way of their interactions with the prisoners. I.e.: Prisoner #6 interacts with Prisoner 571-AZ, and Prisoner #6 spits in the face of prison guard Zimbardo. Guard Zimbardo then abuses Prisoner 571-AZ out of frustration, because he can't get to Prisoner #6. Prisoner 571-AZ is being abused just because he knows #6 and #6 did something to Guard Zimbardo. Where is the justice in this situation?
Re: Re:
You're not wrong, but my understanding is the safe harbors are not there to remove any liability service providers would have ordinarily, but to ensure that the liability is placed where it ought to have been anyway: on the user actually performing the illegal act. I think it's more of a defense against technologically illiterate judges and juries than anything else.
Re: not defective?
I think the difference is the difficulty in correctly determining the liability. If the robot was modified and then harmed someone, how do you determine why it harmed someone? Sometimes it might be clear, but definitely at other times an analysis of the customer modifications won't leave an obvious conclusion.
This uncertainty may lead some manufacturers to stay out of the business, or to try to prevent modifications. Possibly products will just be a lot more expensive because of all the insurance and lawyers. I guess the best case scenario would be complicated waivers you have to sign to buy a robot, and by "sign" I don't mean tick a box online.
All these drawbacks make it worth considering some kind of rule to draw the liability line more clearly. That won't be easy, though, since any simple rule (e.g., "manufacturers are not liable for anything that happens, no matter what") will almost certainly be a bad one.
Prison Guards...
In another situation, a prisoner asked for a second helping of lunch. A (new) surly guard told him no. The prisoner said, "I'm in for life. I've got no reason to take this," and proceeded to beat the guard to a pulp before someone pulled him off. The guard found out the hard way that respect is the best policy.
My buddy says there ARE guards with bad attitudes. They also have very short life expectancies.
Re: Re:
"Under existing law, robots are treated more like cars than smartphones. The proposal is that, once you start installing apps on your robot, it makes more sense to flip that."
But as was mentioned in the article, "the basic problem is that, unlike computers, robots can easily cause physical harm."
(Even though I'm arguing on this side, I'm not totally convinced I'm right, by the way.)
All non-creative work will be automated.
This, of course, is only a problem for as long as we cling to the outmoded idea that you need a "job" to get "money" so you can get everything else. If we can the idea of money and start running the world on sensible real-world premises and just provide people with what they need, then automation isn't a threat to us - it's the single greatest thing that has ever happened to humanity. 100% unemployment for all - and all the housing, clothing, food etc everyone could possibly need in spite of or even because of that.
Society is broken right now. It's not the fault of the one thing that has ever been necessary to and instrumental in raising human standards of living - technological progress.
Re: All non-creative work will be automated.
Robots have already painted pictures. Someday computers will create music, literature, and other pieces of art that will be indistinguishable from human handiwork.
If we can the idea of money and start running the world on sensible real-world premises and just provide people with what they need, then automation isn't a threat to us - it's the single greatest thing that has ever happened to humanity. 100% unemployment for all - and all the housing, clothing, food etc everyone could possibly need in spite of or even because of that.
Sounds beautiful. It will be a rocky road to get there, though.