Why (Allegedly) Defamatory Content On WordPress.com Doesn't Come Down Without A Court Order
from the the-automattic-doctrine dept
Last week, Santa Clara University hosted a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants have written essays about the questions discussed at the event. Between last week and this week, we're publishing a bunch of these essays, including this one.
WordPress.com is one of the most popular publishing platforms online. We host sites for bloggers, photographers, small businesses, political dissidents, and large companies. With more than 70 million websites hosted on our service, we unsurprisingly receive complaints about all types of content. Our terms of service define the categories of content that we don't allow on WordPress.com.
We try to be as objective as possible in defining the categories of content that we do not allow, as well as in our determinations about what types of content fall into, or do not fall into, each category. For most types of disputed content, we have the competency to make a judgment call about whether it violates our terms of service.
One notable and troublesome exception is content that is allegedly untrue or defamatory. Our terms prohibit defamatory content, but it's very difficult, if not impossible, for us, as a neutral, passive host, to determine the truth or falsity of a piece of content hosted on our service. Our services are geared toward the posting of longer-form content, and we often receive defamation complaints aimed at apparently well-researched, professionally written blog posts or pieces of journalism.
Defamation complaints put us in the awkward position of making a decision about whether the contents of a website are true or false. Moreover, in jurisdictions outside of the United States, these complaints put us on the hook for legal liability and damages if we don't take the content down after receiving an allegation that it is not true.
Making online hosts and other intermediaries like WordPress.com liable for the allegedly defamatory content posted by users is often criticized for burdening hosts and stifling innovation. But intermediary liability isn't just bad for online hosts. It's also terrible for online speech. The looming possibility of writing a large check incentivizes hosts like Automattic to do one thing when we first receive a complaint about content: Remove it. That decision may legally protect the host, but it doesn't protect users or their online speech.
The Trouble with "Notice and Takedown"
Taken at face value, the notice-and-takedown approach might seem to be a reasonable way to manage intermediary liability. A host isn't liable absent a complaint, and after receiving one, a host can decide what to do about the content.
Internet hosts like Automattic, however, are in no position to judge disputes over the truth of content that we host. Setting aside the small number of cases in which it is obvious that content is not defamatory (say, because it expresses an opinion), hosts are not at all equipped to determine whether content is (or is not) true. We can't know whether the subject of a blog post sexually assaulted a woman with whom he worked, if a company employs child laborers, or if a professor's study on global warming is tainted by her funding sources. A host does not have subpoena power to collect evidence. It does not call witnesses to testify and evaluate their credibility. And a host is not a judge or jury. This reality is at odds with laws imputing knowledge that content is defamatory (and liability) merely because a host receives a complaint about the content and doesn't remove it right away.
Nevertheless, the prospect of intermediary liability encourages hosts to make a judgment anyway, by accepting a complaint at face value and removing the disputed content without any vetting by a court. This process, unfortunately, encourages and rewards abuse. Someone who does not like a particular point of view, or who wants to silence legitimate criticism, understands that he or she has decent odds of silencing that speech by lodging a complaint with the website's host, who often removes the content in hopes of avoiding liability. That strategy is much faster than having the allegations tried in a court, and as a bonus, the complainant won't face the tough questions: Did he assault a co-worker? Did she know that the miners were children? Did he falsify his research?
The potential for abuse is not theoretical. We regularly see dubious complaints about supposedly defamatory material at WordPress.com. Here is a sampling:
- A multi-national defense contractor lodged numerous defamation complaints against a whistleblower who posted information about corruption to a WordPress.com blog.
- An international religious/charitable group brought defamation claims against a blogger who questioned the organization's leadership.
- A large European pharmaceutical firm sought, on defamation grounds, to disable a WordPress.com blog that detailed negative experiences with the firm's products. A court later determined that this content was true.
Of course, valid defamation complaints should be resolved and a system exists for doing so: the complainant can take legal action against the person who posted the content. This process keeps decisions about freedom of expression where they belong—with a court.
Our Approach at Automattic
The threat to legitimate speech posed by the notice-and-takedown process is behind our policy for dealing with defamation complaints. We do not remove user content based only on an allegation of defamation. We require a court order or judgment on the content at issue before taking action.
The third example above illustrates why we do not honor takedown demands that aren't accompanied by a court order. If we chose not to wait for a court order, but instead eliminated any potential liability by immediately disabling the site, we would have taken an important, and truthful, voice offline.
Our policy is the right one for us, but it can also be costly. We are often sued over our users' content, and at any given time we have upwards of twenty defamation cases pending against us around the globe. This is an inevitable side effect of our policies, and we try to be judicious about our involvement in the cases that we do see. Some cases result in a quick and straightforward judgment, but others require more fact-finding, and we often face a choice about what our level of involvement should be. Ideally, we want to spend our resources fighting cases that matter, either because there is a serious risk to the freedom of speech of users who want their content to remain online, or because there is a serious risk to the company or our people. We recognize that, as a host, we have some power not only to demand a court order before removing content, but also to help ensure a fairer adjudication of some disputes by actively involving ourselves in a case. We view this as an important role, both for our users and for the values of free speech, especially in cases where important speech issues are at stake and/or there is a very clear power differential between the complaining party and our user.
In each lawsuit, we ask ourselves a few questions: What is this case about? Does the user want the content to remain online, and could we make a difference on the user's behalf? What is the blog about? Are there any political or other important speech issues? Is there a potential monetary award against us?
We like to call our rubric for deciding when to step in and help defend our users "The Automattic Doctrine," and the answers to the questions above help us decide how actively to participate in a lawsuit. In our experience, the determinative question is most often whether the user wants to be involved in the defense and work with us to keep their ideas and opinions online.
Our approach ultimately puts the decision about whether content is defamatory, or instead protected speech, in front of the right decision maker: a neutral court of law. Leaving such important decisions to the discretion of Internet hosts misplaces them and tilts the balance in favor of silencing voices that are often legitimate.
Paul Sieminski is General Counsel at Automattic. Holly Hogan is Associate General Counsel at Automattic.
Filed Under: censorship, content moderation, court orders, defamation, free speech, moderation, takedowns