Computers Don't Have Good Faith Beliefs
from the fun-with-takedowns dept
My soon-to-be colleague David Robinson has a great post about the recent dancing toddler copyright story, in which he tries to puzzle out the DMCA's implications for automated takedown programs. The DMCA provides copyright holders with a remedy for online materials they believe to be infringing: they may send a notice to a relevant ISP demanding that the materials be removed. ISPs have a strong incentive to comply with such requests, because doing so gives them immunity from liability for the copyright-infringing activities of their customers. Hollywood has used this process aggressively, sending thousands of takedown notices to companies like YouTube. To prevent abuse of the takedown power, the DMCA also provides that anyone who "knowingly materially misrepresents" the copyright status of a work is liable to the target for damages and attorney's fees.
One interesting question is whether the DMCA allows fully automated takedown requests, or whether the law requires that a human being review each takedown notice before it is sent. The law requires copyright holders to state that "the complaining party has a good faith belief that use of the material in the manner complained of is not authorized by the copyright owner, its agent, or the law." The key phrase here is "good faith belief." In order to state that one has a good-faith belief, one presumably has to form a good-faith belief in the first place. And an automated script, of course, is incapable of forming a good-faith belief about anything, so a takedown notice that no human being has reviewed rests on a false statement.
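To make the mechanics concrete, here is a minimal sketch (in Python, with hypothetical names and toy data; it is not any real vendor's code) of what such an automated pipeline might look like. The point to notice is that the statutory "good faith belief" language is just a string the script stamps onto every fingerprint match, with no human judgment anywhere in the loop.

    # Hypothetical automated takedown pipeline -- names and data are invented for illustration.
    GOOD_FAITH_STATEMENT = (
        "The complaining party has a good faith belief that use of the material "
        "in the manner complained of is not authorized by the copyright owner, "
        "its agent, or the law."
    )

    # Assumed inputs: a fingerprint catalog of the studio's works and a crawl of new uploads.
    catalog = {"fp-123": "Let's Go Crazy"}
    uploads = [
        {"url": "http://video.example/abc", "fingerprint": "fp-123"},  # short home video, song in background
        {"url": "http://video.example/def", "fingerprint": "fp-999"},  # no match
    ]

    def generate_notices(catalog, uploads):
        """Emit one takedown notice per fingerprint match, boilerplate included."""
        for clip in uploads:
            work = catalog.get(clip["fingerprint"])
            if work is not None:
                yield {"url": clip["url"], "work": work, "statement": GOOD_FAITH_STATEMENT}

    for notice in generate_notices(catalog, uploads):
        print(notice)  # a real system would transmit this to the ISP's designated DMCA agent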
David suggests that copyright holders could form "good faith beliefs" in a statistical sense—that if their script were accurate enough, they could form a "good faith belief" that the vast majority of the materials identified by the script were infringing, even if they hadn't examined each one individually. But I don't think this line of reasoning works. As EFF's Fred von Lohmann notes in the comments, the liability provision isn't an aggregate inquiry. It asks, for each takedown, whether the copyright holder misrepresented the copyright status of the work in question. If a copyright holder sends an erroneous takedown notice, it is of no comfort to the recipient—and of no relevance to the law—that the copyright holder also sent a number of valid takedown notices the same day. For each mistaken takedown notice, the question the courts must ask is whether the misrepresentation was "knowing" and "material."
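A bit of back-of-the-envelope arithmetic (the numbers here are assumed, purely for illustration) shows why aggregate accuracy is cold comfort to any individual recipient:

    notices_sent = 100_000
    precision = 0.99   # assumed: 99% of automated notices correctly target infringing material
    erroneous = round(notices_sent * (1 - precision))
    print(f"{erroneous} mistaken notices")   # 1,000 -- each a separate potential misrepresentation

Each of those thousand notices has a real recipient, and for each one the "knowing" and "material" questions have to be answered on their own terms.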
One plausible interpretation of this language would be that since no human being reviewed the takedown notice, the mistake couldn't have been "knowing," and therefore the sender of an automated takedown could never be liable. This, however, would make a mockery of the statute, whose purpose was to deter reckless or malicious use of the takedown power: declining to examine the material at all would become a path to immunity. For this reason, I think the test put forward by EFF in the dancing toddler case—whether a copyright holder exercising reasonable care should have known the material was not infringing—makes more sense. And on this reading, companies would likely be free to issue automated takedowns, but they would be liable for any takedowns that were clearly erroneous. As Fred points out, this gets the incentives right, because it gives Hollywood a strong incentive to use automated takedown scripts judiciously.
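What "judicious" use might look like in practice is a design question rather than a statutory one, but a sketch along the following lines (the threshold, field names, and routing rules are all assumptions) illustrates the idea: auto-send only the clear-cut matches, and push borderline cases, like short clips with background audio, to a human reviewer who can actually form the required belief.

    AUTO_SEND_CONFIDENCE = 0.95   # assumed threshold, chosen for illustration

    def triage(match):
        """Route a fingerprint match: send automatically or hold for human review."""
        if match["confidence"] >= AUTO_SEND_CONFIDENCE and not match["likely_incidental"]:
            return "send"          # e.g., a full-length rip of the work
        return "human_review"      # e.g., a short clip with the song playing in the background

    matches = [
        {"url": "http://video.example/full-rip", "confidence": 0.99, "likely_incidental": False},
        {"url": "http://video.example/toddler",  "confidence": 0.97, "likely_incidental": True},
    ]

    for m in matches:
        print(m["url"], "->", triage(m))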
Filed Under: automated takedown, dmca, good faith beliefs, takedown
Reader Comments
It's my opinion
Again, just my opinion. I am sure others will be quick to point out that it's just far cheaper and easier not to pay any attention, but given the way they (**AA) love to waste money, I highly doubt they give a rat's ass about efficiency in any regard.
And neither do the **AA
Of course...
Tit4Tat
And since most corporations are collectively evil when it comes to biased opt-in defaults, the receiver can just push the blame down the line: all users are defaulted to auto-counter-notify unless they opt out.
Frankly, this "good faith" and "knowing" stuff is crap. Why can't criminals use the obvious loophole with the same level of obtuse acceptance? I mean, no one would reasonably accept a defendant stating, "Sorry, I had no idea that shooting a gun would have resulted in anyone's injury." It just astounds me that courts can accept that anyone who submits an invalid DMCA notification isn't knowingly doing so. It's BS.
They do knowingly send false takedowns
Willful Blindness
An interesting court case waiting to happen
A) Subpoena the process/algorithm that was used to generate the takedown
B) Attempt to find classes of target documents (e.g. YouTube videos) that would register false positives (fair use, satire, mashups, etc.)
C) Argue that because the process/algorithm includes these identifiable false-positive classes, the entire process/algorithm represents a bad-faith action.
The process may be "automatic" but it was designed by humans.
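For what it's worth, step B of this plan could be run as a simple test harness: feed the subpoenaed matching logic a labeled set of clips drawn from known lawful-use classes and tally which classes it flags anyway. The stand-in matcher and the sample data below are invented purely for illustration.

    def naive_matcher(clip):
        """Stand-in for the subpoenaed algorithm: flags anything containing catalog audio."""
        return clip["contains_catalog_audio"]

    labeled_clips = [
        {"kind": "fair use (home video)", "lawful": True,  "contains_catalog_audio": True},
        {"kind": "satire",                "lawful": True,  "contains_catalog_audio": True},
        {"kind": "mashup",                "lawful": True,  "contains_catalog_audio": True},
        {"kind": "verbatim upload",       "lawful": False, "contains_catalog_audio": True},
    ]

    false_positive_classes = {c["kind"] for c in labeled_clips if naive_matcher(c) and c["lawful"]}
    print(false_positive_classes)   # classes the algorithm flags even though the uses are lawful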