from the this-doesn't-end-well dept
As part of the Digital Single Market (DSM) communication, the European Commission has discussed the possibility of imposing a "duty of care" on Internet intermediaries, which would require Internet platforms to take a more active role in policing user content. Forcing Internet intermediaries to monitor and remove their users' communications is ill-advised from both an economic and human rights standpoint.
The rapid growth of the Internet was not merely a function of technological innovation. This fundamental restructuring of commerce and communications would not have been possible but for substantive legal reforms that adapted legacy legal concepts to the realities of a hyper-connected Internet age. Arguably the most important legal and legislative development of the Internet era was the concept of intermediary liability limitations for Internet service providers. Or, stated less legalistically, the policy choice that Internet services should not bear blame for bad people saying or doing bad things on the Internet. Given the size and scope of the Internet and the volume of online communications, it is safe to say that Facebook, Twitter, Google, Yelp, YouTube, Allegro, and Dailymotion would not exist today had the law evolved to hold websites and Internet services liable for the actions of their users. Further, imagine operating a telecommunications network with the sum of all this information passing through it without being shielded from responsibility for the actions of all of your users. What venture capitalist in her right mind would invest in a platform that was exposed to liability for billions of websites beyond its control or trillions of posts composed by third parties? What would Internet business models look like if companies had to pre-screen all user communications before they went live?
Recent developments in Europe, including the Delfi ruling and the DSM "duty of care" proposal, suggest that Internet services may soon be asked to take a more active role in filtering user content. Yet even with advanced filtering tools, unlawful speech is almost always context dependent. Libel and defamation would not be obvious to a filter. Even more complex is when lawful speech is used unlawfully, as in the case of copyright and trademark infringement. Given that rules about these various types of speech are often the product of complex legal cases, even human review of every online communication would not completely shield an Internet company from liability, given that different people can come to different conclusions about whether speech is "harmful." Not to mention that standards for what is permissible speech vary widely from country to country.
Besides the commercial impact, the implications for free speech would also be disastrous. Protections from intermediary liability enable platforms to give people around the world a simple way to express themselves, to share what they love, and to challenge the restrictions of oppressive governments. One study found that when online platforms are regulated on the basis of content submitted by their users, they remove large amounts of controversial but legal content for fear of facing penalties. The UN's Joint Declaration on Freedom of Expression on the Internet recognizes the success of laws such as the CDA, DMCA, and the E-Commerce Directive, stating that "intermediaries should not be required to monitor user-generated content and should not be subject to extrajudicial content takedown rules which fail to provide sufficient protection for freedom of expression."
Even if pre-screening and filtering at scale were feasible, the value of each individual communication — whether it takes the form of a website, a tweet, a Facebook post or a YouTube video — is negligible, while the potential legal exposure is huge; statutory damages under U.S. copyright law can reach $150,000 per work infringed. So, in a world where Internet companies were liable for the communications of their users, a rational company would be incentivized to aggressively censor content, leading to significant blocking of ostensibly legal speech, since the costs of under-blocking are significantly higher than the costs of over-blocking.
Historical Context on Intermediary Liability
Initially, the legal status of Internet companies was uncertain. Problematic cases arose in which courts found Internet companies liable for user-generated content. However, both the United States and the European Union were relatively quick to act. In the United States, Congress passed Section 230 of the Communications Decency Act in 1996, which shielded Internet service providers from liability for a variety of actions committed by their users. Although §230 specifically excluded intellectual property infringement, Congress passed the Digital Millennium Copyright Act in 1998, which shielded Internet platform providers from liability for their users' infringement provided they acted quickly to remove infringing content when notified. In the report that accompanied the bill, the Senate Judiciary Committee made clear that these intermediary liability provisions were necessary given that the size and scope of the Internet made it functionally impossible for Internet service providers to monitor all the material they served or indexed. At the time, Yahoo!, the illustrative example used by the Committee, indexed 800,000 websites. Counting just websites, and not other forms of content on social media, the number of sites Yahoo! indexed in 1998 was less than 0.1% of the size of the current web. As new Internet entrepreneurs seek to join the Yahoos, Twitters, and Facebooks of the world, it is imperative that those companies be afforded the same legal protections that allowed the prior generation of Internet success stories to achieve scale.
Europe was also quick to embrace the concept of liability limitations. In 2000, the European Union adopted the e-Commerce Directive, which endorsed a similar notice-and-takedown framework for Internet service providers covering most Internet content. Since it was a directive that had to be transposed by individual member states, its application has been somewhat inconsistent, providing less certainty to Internet companies than the U.S. versions of intermediary liability. Nevertheless, it has provided the necessary legal foundation for the Internet to grow and expand in Europe.
These laws do not create any general monitoring or filtering obligations for illegal or harmful content. US law states that a service provider need not monitor its service or seek out infringing activity in order to qualify for the DMCA safe harbor. (This provision is intended to protect user privacy. Without safe harbors in place, website administrators might be required to search through and peer into their users' otherwise hidden conversations.) Under the e-Commerce Directive, states may not impose general obligations on intermediaries to monitor information or to actively seek out unlawful activity.
If any reform of the e-Commerce Directive is needed, it should be in the direction of giving EU startups and platforms greater assurance that they will not be found liable for the speech of their users. Adopting a liability regime closer to §230 would provide a critical boost to the growth and global competitiveness of EU communications platforms, review sites, social media platforms, dating apps, e-commerce sites, and the next generation of digital innovators.
Intermediary Liability Enables Flexible Responses to Harmful Content
The existing rules in place in the US and EU have led to strong respect for rights. Internet platforms already take down a significant amount of content that infringes copyright. In addition, platforms respond to court orders and cooperate with law enforcement on issues like child sexual abuse imagery. Finally, while there's no one-size-fits-all solution to the problem of online abuse, many platforms have evolved highly effective community policing and abuse-reporting systems that help stop the spread of harassment, hate speech, and other harmful content. For example, anyone on YouTube can flag a video for review, and Google employees review flagged videos 24 hours a day. In 2014, 14 million videos were removed from YouTube for violating the site's Community Guidelines. (Twitter and Facebook have similar guidelines and flagging procedures that can lead to removal of content and the termination of accounts.)
Unfortunately, as part of the DSM initiative, some are calling for the e-Commerce Directive to be re-opened and a new "duty of care" imposed on Internet service providers. This "duty of care" would be effectuated by either narrowing, or completely removing, the liability safe harbors available to Internet companies under the e-Commerce Directive. In the Staff Working Document that accompanied the DSM communication, the European Commission noted that it was considering "whether to enhance the overall level of protection from harmful material through harmonised implementation and enforcement of conditions which allow online intermediaries to benefit from the liability exemption." Furthermore, as part of this examination, the Commission is also asking "whether to ask intermediaries to exercise greater responsibility and due diligence in the way they manage their networks and systems… so as to improve their resistance to the propagation of illegal content."
The same document also acknowledges that the intermediary liability safe harbors included in the e-Commerce Directive "underpinned the development of the Internet in Europe." Let's hope this last statement is borne clearly in mind if any updating takes place, remembering that a consistent process across Europe could be useful, but that a weakening of the liability shield and an extension of proactive monitoring would be economically and socially disastrous.
Economic Effects of a "Duty of Care"
The ramifications for future competition and innovation would also be dire if the European Union were to enact a broader duty of care for Internet intermediaries. Given the limitations of automatic filters, and the fact that whether content is harmful or illegal is context dependent, new online companies that offer communications platforms would need to employ large teams of human reviewers to vet user-generated content.
What does that mean for competition and innovation? It likely means that startups and new business models will be disproportionately affected. From a venture capital perspective, the imposition of new regulatory costs on traditionally lean startups means that fewer new ideas get funded. Given that Internet platforms often spend years developing a user base before they devise ways of turning their popularity into revenue, the added costs will also mean that many of these ideas either don't get funded, run out of money before achieving the necessary scale, or simply prove unable to turn a profit.
The companies best positioned to bear these new cost burdens are the current Internet incumbents, which have large legal departments and significant revenue. And depending on how new regulation is written, even a large number of human reviewers still could not catch everything. The number of humans needed to review 500 million tweets a day or 1 trillion Facebook posts is mind-boggling. In the YouTube example alone, 300 hours of video uploaded per minute makes 18,000 hours per hour, or 432,000 hours per day. That would require non-stop oversight from 18,000 people to vet everything — and that's assuming they never get a day off and never sleep. (Not to mention, this makes it harder to push back against laws from more authoritarian regimes demanding the censorship of "harmful information".)
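The YouTube arithmetic above is easy to check directly. A minimal sketch (the 300-hours-per-minute upload rate is the figure cited above; the assumption that one reviewer can watch at most 24 hours of video per day, i.e. real-time with no breaks, is ours):

```python
# Back-of-the-envelope check of the YouTube review-workload figures.
UPLOAD_HOURS_PER_MINUTE = 300  # hours of video uploaded every minute (figure cited above)

hours_per_hour = UPLOAD_HOURS_PER_MINUTE * 60   # hours of video uploaded per hour
hours_per_day = hours_per_hour * 24             # hours of video uploaded per day

# One reviewer watching in real time, non-stop, covers 24 hours of video per day.
reviewers_needed = hours_per_day // 24

print(hours_per_hour, hours_per_day, reviewers_needed)  # → 18000 432000 18000
```

Even this understates the problem: real reviewers work shifts, take holidays, and need time to judge context, so the true headcount would be several multiples of 18,000.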
Higher regulatory and legal burdens threaten the robust competition and disruptive innovation that have characterized the Internet ecosystem over its short history. High regulatory costs entrench incumbents. This could disproportionately affect European companies, as none of the top 15 global Internet companies are European. However, Europe does have a high share of the so-called "unicorns" — startups valued at more than $1 billion. (And, according to a recently released study, the unicorn population of Europe grew by 13 between 2014 and 2015.) Furthermore, nearly one quarter of the startups at the 2015 Consumer Electronics Show were French! As higher regulatory burdens harm smaller companies more than large ones, such an imposition of regulatory costs hurts these plucky startups more than the Googles and Facebooks of the world.
Increasing Liability Would Harm Startups, SMEs and the Broader European Economy
Imposing a duty of care on Internet services wouldn't just harm new startups in the EU. As we have discussed before, the Internet has transformed the world's economy and spurred economic productivity. The majority of these benefits accrue to traditional industry, not "Internet companies," by streamlining logistics and reducing frictions in international commerce. In fact, over the last 20 years, the U.S.'s greater productivity growth relative to Europe has been powered by its better integration of Internet and networked technology into the overall economy. Given the role Internet platforms have played in this economic transformation, creating legal frameworks that enable the creation and foster the growth of Internet platforms — such as social media, e-commerce, financial services, crowdfunding, cloud computing, advertising, etc. — is important not just for the Internet sector, but for the broader economy that relies on them for day-to-day business operations.
Not surprisingly, small and medium size businesses often benefit the most from these platforms, as Internet platforms enable them to immediately build infrastructure, conduct traditional back office operations, take payments, easily target advertising to prospective consumers, and reach consumers worldwide through e-commerce platforms in ways that were not possible before the rise of Internet commerce. With this in mind, it's not surprising that SMEs that heavily utilize the Internet are 10% more productive than companies that do not, and export twice as many products and services. Also not surprisingly, SMEs were the key to the U.S. economic and jobs recovery, and are seen as key to powering the European economic recovery. As European policymakers sit poised to decide the nature and scope of Internet regulation as part of the Digital Single Market (DSM) initiative, creating a legal and regulatory environment for Internet platforms to thrive (and new ones to be created) should be a top priority. Ensuring that robust online intermediary liability protections extend across the European Union stands as arguably the most important policy imperative of the DSM if Europe wants its vibrant, connected digital economy to thrive in the 21st century.
If anything, for the Internet to continue to grow and thrive, liability limitations for online companies should be expanded and strengthened. It is a worthwhile and important endeavor for the European Commission to clarify and harmonize liability safe harbors across Europe. However, harmonization that weakens these safe harbors would have negative effects on both freedom of expression and Internet commerce.
Reposted from the Disruptive Competition Project
Filed Under: duty of care, europe, free speech, intermediary liability