from the place-the-blame-correctly-please dept
For years, the Copyright Office has been working on a report that is expected to be released sometime soon, about whether or not the DMCA's Section 512 "notice and takedown" regime needs to be changed. The big Hollywood and recording industry lobbyists have been gearing up to push for new rules, a la the European Copyright Directive, that put even more liability on intermediaries. Of course, what they really want is to force Google and Facebook to just hand them some cash because they've failed to adapt their business models while those two companies have thrived. Those legacy copyright-focused industries have already been pushing for things like mandatory licensing and "notice-and-staydown" rules, whereby if something that was taken down once gets re-uploaded, the hosting site becomes liable. Indeed, the industry already seems to have political support for some of these changes.
What's odd, however, is how little attention people seem to be paying in most of these discussions to whether we need to fix the DMCA in the other direction -- to account for the fact that the notice-and-takedown provisions of the DMCA are regularly used for censorship, even of news. Late last week, the Wall Street Journal had a very thorough article (possibly paywalled) detailing how it found hundreds of news articles that were removed from Google's search results due to what appear to be bogus DMCA takedowns. After contacting Google about this, the company said that it had found more than 52,000 news articles that had been deleted from its index via bogus copyright notices:
After the Journal shared its findings with Google, the company conducted a review and restored more than 52,000 links it determined it had improperly removed, she said. Google said its review identified more than 100 new abusive submitters, declining to discuss individual cases.
Think about that for a second. While we frequently hear Hollywood insisting that (1) copyright is never used for censorship, and (2) Google is too permissive in allowing infringement, this case shows just how wrong both claims are and how serious an impact they can have. When important news stories are being censored as an illicit form of a "right to be forgotten," that should concern all of us (not to mention raise some 1st Amendment questions about the DMCA):
When a Colorado man, Dak Steiert, faced state-court charges of running a fake law firm in 2018, he sent Google a series of copyright claims against blogs and a law-firm website that discussed his case, claiming they had copied the posts from Mr. Steiert’s own website. That wasn’t true, the Journal determined, but Google erased the pages from its search engine anyway.
Last year, Mr. Steiert, who didn’t respond to requests for comment, pleaded guilty in Colorado state court to one count of false advertising in his business. The Colorado Supreme Court closed his practice. The articles remained invisible in Google searches until the Journal flagged the cases to Google, which then reinstated the links.
Later in the article, the Journal details one trick used to delete articles: copying an article to a blog, then backdating the copy so it looks like the plagiarized version came first and the real article is the infringing one:
Financial-news site Benzinga fell victim to a common tactic to trick Google: backdating. Someone wanting Google to hide a webpage will find a little-trafficked blog and post a copy of the content from the legitimate webpage. After backdating the plagiarized post, the complainant will file an electronic notice with Google claiming the real article is a copyright violation.
Benzinga, after publishing an August 2015 article about financial difficulties at publicly traded Amira Nature Foods Ltd., began getting emails demanding it take the story down, said its managing editor, Jason Shubnell. The emails came from three different names, including a Richa Parikh who wrote that the Amira article was “spoiling our online reputation” and offered to pay Benzinga to remove it, according to a copy of an email the Journal reviewed.
Benzinga largely ignored the emails. In 2018, Google wiped the Benzinga story from its search results after a blog masquerading as CNN made a copyright claim. Mr. Shubnell said Benzinga wasn’t aware Google had hidden its story until the Journal contacted it, adding: “There was nothing wrong with the actual content of the article.”
Of course, over the past few years, we've certainly seen similar scammy attempts to abuse copyright law to remove stories. Revenge porn scammer and pretend politician Craig Brittain famously tried to get a ton of news articles about his past behavior removed from Google via bogus DMCA notices, and, in the last few years, we've seen a cottage industry pop up of "reputation management firms" that use dirty tricks, including fake DMCA notices and faked legal filings as a way to get stories delisted from Google.
While the WSJ article is very well researched and reported, and highlights this huge problem, my one complaint with it is that it barely acknowledges that the real problem here is the heavily one-sided structure of DMCA 512's notice-and-takedown regime. Under the rules of 512, if you receive a takedown notice, you don't have to remove the content, but the legal pressure and liability weigh heavily in that direction. If you remove the contested content, you are then (mostly) free from any copyright liability. If you refuse, you are not. And while you might still prevail should a lawsuit be brought, and you can make use of other defenses, the 512 safe harbor means you'll get out of the case faster and more easily if you just remove the content (and you're much less likely to be sued in the first place).
But if a platform removes content in error? Well, there's no liability at all. And while Section 512(f) of the DMCA provides some theoretical legal liability for those who issue bogus takedowns, in practice the courts have largely rendered 512(f) moot. Add to that the already imbalanced nature of copyright's "statutory damages," which allow for up to $150,000 in damages for each infringed work (even if you can't show any actual damages), and the scales are heavily, heavily tilted towards censorship.
Just to review: a DMCA takedown strongly encourages content to be removed from the internet, even absent any actual infringement. There is almost no punishment for those who send bogus takedowns. There are strong legal liability reasons for recipients of notices to take content down upon receipt, and no corresponding liability for making the wrong call in taking down work that wasn't infringing. And the potential amounts of liability for leaving something up can be overwhelming. All that adds up to a law that is almost perfectly designed to encourage censorship.
Indeed, this rather unique aspect of the DMCA's notice-and-takedown provision is why academics have been pointing out that this law is frequently repurposed as a "general-purpose privacy and reputation" protecting law despite it not being designed for that (and the fact that if it were, it would clearly violate the 1st Amendment).
So, if I had one complaint about the WSJ piece, it is that it glosses over these structural flaws in the law, which leads many to believe that the faulty takedowns the WSJ discovered are somehow a Google problem, rather than a problem with copyright law today and DMCA 512 specifically. Watching the discussion about this article flow through Twitter, it seemed that many wanted to blame Google for these mistaken story removals. That seems kind of silly. Of all the internet platforms out there, Google has historically been among the most aggressive in reviewing takedown notices and pushing back on bogus ones (and even sometimes suing abusers). Indeed, the few times that people have tried to remove Techdirt stories via bogus DMCA copyright claims, Google has always refused.
Yes, you can point to the 52,000 stories that Google has reversed itself on and reinstated and say the company should have done better -- but that gets back to the whole impossibility theorem of doing content moderation at massive scale. Mistakes are going to be made, and that's especially true when you have a professionalization of the scamming via these sketchy reputation management firms, which explore many different ways to fool Google, exploiting the extremely unbalanced nature of the law itself.
Unfortunately, though, when a story is positioned this way, it would not surprise me to see even some Google haters who want to expand copyright law use this very article as an example of why 512 should be made worse, not better. Anything to blame Google, I guess, even when the actual problem is with the law itself.
Filed Under: censorship, copyright, dmca 512, dmca takedowns, news, reputation management, search
Companies: google