from the first,-do-no-harm dept
Here are a couple of points to ponder:
Fun fact #1: California prison guards are expensive.
Fun fact #2: South Korea's getting robot prison guards.
I'm sure the prisoners welcome their new robot overlords, but I bet the prison guards' union doesn't. Nor does any other union, for that matter. And they're not alone. Over the past few weeks, tech industry commentators spent slightly more time than usual
wringing their hands over whether technology was killing jobs. I think this video captures the debate pretty well.
It might sound paradoxical, but this replacement of humans by machines is actually
a good reason to limit secondary liability for the robotics industry. And I'm not just referring to secondary liability in the copyright sense, but to
any liability incurred by robot manufacturers because of how others use their robots.
This isn't a theoretical issue. Automation and efficiency have
always threatened certain jobs and industries -- and one of the standard reactions is to blame the technology itself and seek to hinder it, frequently through over-regulation. The extreme version of this is where the term
"Luddite" came from -- an organized effort to attack more efficient technology, which in that case escalated into violence against the machines. More typical were overly burdensome regulations, such as
"red flag laws," which said automobiles could only be driven if someone walked in front of them waving a red flag to "warn people" of the coming automobile. Supporters of those laws, like supporters of secondary liability laws for robots, can and will claim that there are "legitimate safety reasons" for such laws, and that holding back innovation and extending the lifetime of obsolete jobs is merely a side benefit. But like those red flag laws, applying secondary liability to robotics would significantly hinder a key area of economic growth.
Techdirt has covered the question of a secondary liability safe harbor for robots before, and Ryan Calo's written a great paper about the legal issues coming out of the robotics arena. But an even more important (and specific) point is exactly why these safe harbors matter for job creation -- even as some continue to argue the other way (that such safe harbors will destroy jobs).
Technology has been replacing human labor since humans invented, well, technology. But while technology may get rid of inefficient jobs, it eventually
creates replacements. To cite one commonly used example, automated switching put telephone operators out of a job, but the phone network created plentiful new jobs for telemarketers (and for the other businesses that came to rely on it... including everything built on and around the internet today). The problem is that while it was obvious how many operators would be out of a job, it wasn't immediately clear how lucrative (or annoying) telemarketing could be, let alone that the phone lines would eventually be transformed into a vast global information-sharing network, with hundreds of millions of new jobs created because of it.
Erik Brynjolfsson and Andrew McAfee examine this problem in detail in
their book, which I recommend. But much of it boils down to this. Technology creates jobs, yet it's not obvious where the new jobs are, so we need bold, persistent experimentation to find them:
Parallel experimentation by millions of entrepreneurs is the best and fastest way to do that. As Thomas Edison once said when trying to find the right combination of materials for a working lightbulb: "I have not failed. I've just found 10,000 ways that won't work." Multiply that by 10 million entrepreneurs and you can begin to see the scale of the economy's innovation potential.
This is especially important for robotics. It's obvious how robots make certain jobs obsolete -- e.g. driverless cars don't need drivers -- but it's less clear what new job opportunities they open up. We need to try different things.
Unfortunately, secondary liability creates problems for robot manufacturers who open up their products for experimentation. Ryan Calo explains this in more detail, but the basic problem is that, unlike computers, robots can easily cause physical harm. And under
product liability law in most states, when there's physical harm to a person or to property, everyone involved in the manufacture and distribution of that product can be held legally liable.
Ideally, we'd want something like a robot app store. But robot manufacturers would be unwilling to embrace commercial distribution of third-party apps if it increased their chances of being sued. There's evidence that
Section 230's safe harbors (and, to some extent, the
DMCA's safe harbors) play a key role in facilitating third-party content on the web. Absent a similar provision for robots, manufacturers are more likely to limit their liability by sticking to single-purpose robots or simply locking down key systems. That's fine if we know exactly what we want our robots to do -- e.g., replace workers. But if we want robots to create jobs, it would help to limit secondary liability for the robotics industry, open things up, and let widespread experimentation happen freely.
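To make the locked-down-vs-open trade-off concrete, here's a minimal, purely hypothetical sketch of how a manufacturer might expose a restricted interface to third-party robot apps: the apps never touch the actuators directly, and manufacturer-set safety limits can't be overridden. The class names and limits below are invented for illustration and don't reflect any real vendor's API.

```python
# Hypothetical sketch: a manufacturer-controlled safety layer that
# third-party "robot apps" must go through. The names and limits are
# made up for illustration; no real robot API is being modeled here.

class SafetyError(Exception):
    """Raised when a third-party command exceeds manufacturer-set limits."""

class RobotSafetyLayer:
    # Hard limits baked in by the manufacturer; apps can't override them.
    MAX_SPEED_M_S = 0.5      # assumed cap on movement speed
    MAX_GRIP_FORCE_N = 20.0  # assumed cap on gripper force

    def move(self, speed: float) -> None:
        if speed > self.MAX_SPEED_M_S:
            raise SafetyError(f"speed {speed} m/s exceeds cap {self.MAX_SPEED_M_S}")
        print(f"moving at {speed} m/s")  # stand-in for the real actuator call

    def grip(self, force: float) -> None:
        if force > self.MAX_GRIP_FORCE_N:
            raise SafetyError(f"force {force} N exceeds cap {self.MAX_GRIP_FORCE_N}")
        print(f"gripping at {force} N")  # stand-in for the real actuator call

def third_party_app(robot: RobotSafetyLayer) -> None:
    # An app from the hypothetical "robot app store" gets only the safety
    # layer, never raw motor access.
    robot.move(0.3)   # within limits: allowed
    robot.grip(50.0)  # exceeds the force cap: rejected

if __name__ == "__main__":
    try:
        third_party_app(RobotSafetyLayer())
    except SafetyError as err:
        print(f"blocked by safety layer: {err}")
```

The appeal of a design like this is that the manufacturer's exposure is bounded by the safety layer it ships, not by whatever a third-party app tries to do -- the kind of division of responsibility a safe harbor could recognize.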
Filed Under: automation, human labor, jobs, robots, secondary liability, unions