from the cleaning-up-the-'net dept
Paid advertising content should not be covered by Section 230 of the Communications Decency Act. Online platforms should face the same legal risk for the ads they run as print publishers do. This is a reform that I think even supporters of Section 230 should get behind, in order to save it.
Before I explain why I support this idea, I want to be clear about what the idea is. I am not proposing that platforms be liable for content they run ads next to -- just for the ads themselves. Nor am I proposing that liability lie in the "tools" platforms provide that can be used for unlawful purposes; that's a different argument. This is not about liability for providing a printing press, but for specific uses of the printing press -- that is, publication.
I'm also not suggesting that platforms should lose Section 230 protection wholesale if they run ads at all, or run some subset of ads like targeted ads -- this is not a service-wide on/off switch. The liability would just be normal, common-law liability for the content of the ads themselves. And "ads" means regular old ads, not all content that a platform commercially benefits from.
It's fair to wonder whom this applies to. Many of the examples listed below have to do with Facebook selling ads that are displayed on Facebook, or Google placing ads on Google properties, and it's pretty obvious that these companies would face increased legal exposure under this proposal. But the internet advertising ecosystem is fiendishly complex, and there are often many intermediaries -- ad networks, exchanges, demand-side and supply-side platforms -- between the advertiser itself and the proprietor of the site where the ad is displayed.
So at the outset, I would say that any and all of them could potentially be liable. If Section 230 doesn't apply to ads, it doesn't apply to supplying ads to others; in fact, these intermediary functions are considered a form of "publishing" under the common law. Which party to sue would be the plaintiff's choice, and there are existing legal doctrines that prevent double recovery and that allow a losing defendant to bring in, or recover from, other responsible parties.
It's important to note, too, that this is not strict or vicarious liability. In any given case, the advertiser might be found liable for defamation or some kind of fraud while the platform is not, because the elements of the tort are met for one and not the other. Whether a given actor has the "scienter," or knowledge, necessary to be liable for some offense has to be determined for each party separately -- you can't simply impute one party's state of mind to another, and strict liability torts for speech offenses are, in fact, unconstitutional.
The Origins Of An Idea
I first started thinking about this idea in the context of monetized content. After a certain dollar threshold is reached, there should be liability for that content, too: the idea that YouTube can pay someone thousands of dollars a month for their content and still enjoy a legal shield for it simply doesn't make sense. The relationship of YouTube to a highly paid YouTuber is more similar to that between Netflix and a show producer than to that between YouTube and your average YouTuber, whose content is unlikely to have been reviewed by a live human YouTube employee. But monetized content is a marginal issue; very little of it is actionable, and frankly, the most detestable internet figures don't seem to depend on it very much.
But the same logic runs the other way, to cases where the content creator pays the platform for publishing and distribution, instead of the platform paying the content creator. And I think eliminating Section 230 protection for ads would solve some real problems, while making some less-workable reform proposals unnecessary.
Question zero should be: Why are ads covered by Section 230 to begin with? There are good policy justifications for Section 230 -- it makes it easier for sites to host large volumes of user posts without extensive vetting, and it gives sites a free hand in moderation. Great. It's hard to see what that has to do with ads, where there is a business relationship. Businesses should generally have some sense of whom they do business with, and it doesn't seem unreasonable to expect a platform to screen ads considerably more carefully before running them than it screens tweets or vacation updates from users before hosting them. In fact, I know this is not an unreasonable expectation, because major platforms such as Google and Facebook already subject ads to heightened screening.
I know I'm arguing against the status quo, so I have the burden of persuasion. But in a vacuum, the baseline should be that ads don't get a special liability shield, just as in a vacuum, platforms in general don't get a liability shield. The baseline is normal common-law liability, and deviations from it are what have to be justified.
I'm Aware That Much "Harmful" Content Is Not Unlawful
A lot of Section 230 reform ideas either miss the mark or are incompletely theorized, because much -- maybe even most -- harmful online content is not unlawful. Even without Section 230, if you sued a platform over such content, you'd still lose; it would just take longer.
You could easily counter that the threat of liability would cause platforms to invest more in content moderation overall. I think that's likely true, but it is also likely that such investments would lead to over-moderation that limits free expression by speakers who are considered even mildly controversial.
But with ads, there is a difference. Much speech that would be lawful in the normal case -- say, hate speech -- can be unlawful when it appears in housing and employment advertisements. Advertisements carry more restrictions and regulations in any number of ways. And ads can be tortious in the standard ways as well: they can be fraudulent, defamatory, and so on. This is true of normal posts, too -- but with ads, there's a greater opportunity, and I would argue obligation, to pre-screen them.
Many Advertisements Perpetuate Harm
Scam ads are a problem online. Google recently ran ads for scam fishing licenses, despite being told about the problem. People looking for health care information are sent to lookalike sites instead of the official ACA site. Facebook has run ads for low-quality counterfeits and fake concert tickets. Find a locksmith through an ad, and you might as well set your money on fire and batter down your own door. Ads trick seniors into sinking their savings into precious metals. Fake customer support lines steal people's information -- and money. Malware is distributed through ads. Most troublingly, internet users in need of real help are misdirected to fake "rehab clinics" or pregnancy "crisis centers" through ads.
Examples of this kind are endless. Often, there is no way to track down the original fraudster. Currently, Section 230 allows platforms to escape most legal repercussions for enabling scams of this kind, while letting them keep the revenue earned from spreading them.
There are many more examples of harm, but the last category I'll discuss is discrimination, specifically housing and employment discrimination. Such ads might be unlawful in terms of what they say, or even in terms of whom they are shown to. Putting racially discriminatory text in a job or housing ad can be unlawful, and choosing to show a facially neutral ad to only certain racial groups could be as well, as the sketch below illustrates. (There are tough questions to answer -- surely buying employment ads on sites likely to be read by certain racial groups is not necessarily unlawful -- but, in the shadow of Section 230, there's really no way to know how to answer these questions.)
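To make the targeting point concrete, here is a minimal sketch, in TypeScript, of the kind of audience-selection parameters a self-serve ad platform might accept. All of the names here -- the interface, its fields, the segment labels -- are hypothetical, not any real platform's API. The point is only that the discrimination can live entirely in the delivery criteria while the ad copy itself stays facially neutral.

```typescript
// Hypothetical self-serve targeting parameters for a housing ad. None of
// these names match a real platform's API; they only illustrate how the
// delivery criteria, not the ad text, can do the discriminating.
interface AdTargeting {
  interests: string[];        // contextual signals the advertiser opts into
  excludedSegments: string[]; // platform-inferred audience segments to skip
  regions: string[];          // geographic limits on delivery
}

const housingAdCopy = "Spacious two-bedroom apartment, close to transit."; // facially neutral

const housingAdTargeting: AdTargeting = {
  interests: ["apartment hunting", "moving"],
  // Excluding an inferred demographic segment is where the legal exposure
  // arises, even though the ad copy itself says nothing unlawful.
  excludedSegments: ["inferred-affinity-group-x"],
  regions: ["US-IL"],
};

console.log(housingAdCopy, housingAdTargeting);
```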
In many cases under current law, there may already be a path to liability for racially discriminatory ads, or other harmful ads. Maybe you have a Roommates-style fact pattern, where the platform co-created the unlawful content to begin with. Maybe you have a HomeAway fact pattern, where you can attach liability to non-publisher activity related to user posts, such as transaction processing. Maybe you can find that providing tools that are prone to abuse is itself a violation of some duty of care, without attributing responsibility for any particular act of misuse. All true, but each of these approaches addresses only a subset of harms, and frankly they seem to require some mental gymnastics and above-average lawyering. I don't want to dissuade people from taking these approaches where warranted, but they don't seem like the best policy overall. By contrast, removing the liability shield from a category of content where there is a business relationship and a clear opportunity to review content prior to publication would incentivize platforms to review more vigorously.
A Cleaner Way to Enforce Anti-Discrimination Law and Broadly Police Harm
It's common for good-faith reformers to propose simply exempting civil rights or other areas of law from Section 230 -- preventing platforms from claiming Section 230 as a defense in any civil rights lawsuit, much as federal criminal law is already exempted.
The problem is that there is no end of good things we'd like platforms to do more of. The EARN IT Act proposes to create more liability for platforms to address real harms, and SESTA/FOSTA likewise exempts certain categories of content. There are problems with this approach: how you define what platforms should do, how you define what content is exempted, and the risk of over-moderation in response to fears of liability. It threatens to turn Section 230 into a Swiss cheese statute, where determining whether it applies to a given post requires detailed legal analysis -- which carries its own significant harms and consequences.
Another common proposal is to exempt "political" ads from Section 230, or targeted ads in general (or to somehow tackle targeting in some non-230 way). There are just so many line-drawing problems here, making enforcement extremely difficult. How, exactly, do you define "targeted"? How, by looking at an ad, can you tell whether it is targeted, contextual, or just part of some broad display campaign? With political ads, how do you define what counts? Ads from or about campaigns are only a subset of political ads. Is an ad about climate change "political"? What about an ad from an energy company touting its green record? In the broadest sense, yes, but it's hard to see how you'd legislate around this.
Under the proposal to exempt ads from Section 230, the primary question is not what the content is about or what harms it may cause, but simply whether it is an ad. Ads are typically labeled as such and quite distinct -- though it may be that we need stronger ad disclosure requirements, and penalties for running ads without disclosure. There may be other boundary-drawing issues as well -- I understand perfectly well that one of the perceived strengths of Section 230 is its simplicity, relative to complex and limited liability shields like Section 512 of the DMCA. Yet I think these problems are tractable.
Protection for Small Publishers
I've seen some publishers respond to user complaints about low-quality or even malware-distributing ads on their sites by pointing out that they don't see or control the ads -- the ads are delivered straight from the ad network to the user, alongside the publisher's content. (I should say straight away that this still counts as "publishing" an ad. If the user's browser is infected by malware that inserts ads, or if an ISP or some other intermediary inserts an ad into the publisher's content, then no, the publisher is not liable. But if a website embeds code that serves ads from a third party, it is "publishing" those ads in the same sense as a back-page ad in a fancy Conde Nast magazine. Whether that leads to liability depends on whether the elements of the tort are met -- and on whether Section 230 applies, of course.)
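For readers who haven't seen how this works, here is a minimal sketch, again in TypeScript, of the kind of tag a publisher embeds. The domain and function names are placeholders, not any real ad network's code. The key point is architectural: the user's browser fetches the ad creative directly from the ad network's servers, so the ad never passes through the publisher's infrastructure, and the publisher has no practical opportunity to inspect it before it renders.

```typescript
// A placeholder third-party ad tag. The publisher ships only this snippet;
// the ad itself is chosen and served by the network at page-load time.
function embedThirdPartyAd(slotId: string): void {
  const slot = document.getElementById(slotId);
  if (!slot) return;

  // The creative renders inside a cross-origin iframe served from the
  // network's domain. The publisher's servers never see its contents.
  const frame = document.createElement("iframe");
  frame.src =
    "https://ads.example-network.test/serve?slot=" +
    encodeURIComponent(slotId) +
    "&page=" +
    encodeURIComponent(window.location.href);
  frame.width = "300";
  frame.height = "250";
  slot.appendChild(frame);
}

// The publisher's page marks where the ad goes and hands control over:
embedThirdPartyAd("sidebar-ad-slot");
```

This is why "we don't control the ads" is often technically true -- and why the leverage to change the arrangement sits largely with the ad tech vendors, as discussed next.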
For major publishers, I don't have a lot of sympathy. If their current ad stack lets bad ads slip through, they should use a different one, if they can, or demand changes in how their vendors operate. Right now, the incentives don't push publishers and ad tech vendors toward a more responsible approach. Changing the law would.
At the same time, it may be true that some small publishers depend on ads delivered by third parties, and not only does the technology not allow them to take more ownership of ad content, they lack the leverage to demand better tools. Under this proposal, these small publishers would be treated like any other publisher for the most part, though I tend to think it would be harder to meet the actual elements of an offense with respect to them. That said, I would be on board with some additional stipulation requiring ad tech vendors to defend, and pay out in, any case where a publisher below a certain size threshold is hauled into court for distributing ads it has no control over but is financially dependent on. Additionally, to the extent that the ad tech marketplace is so concentrated that major vendors can shift liability onto less powerful players, antitrust and other regulatory intervention may be needed to ensure that risks are borne by those best positioned to prevent them.
The Tradeoffs That Accompany This Idea Are Worth It
I am proposing to throw sand in the gears of online commerce and publishing, because I think the tradeoffs in terms of consumer protection and enforcing anti-discrimination laws are worth it. Ad rates might go up, platforms might be less profitable, ads might take longer to place, and self-serve ad platforms as we know them might go away. At the same time, fewer ads could mean less ad-tracking. And an across-the-board change to the law around ads should not tilt the playing field toward big players any more than it is already tilted, and it would not likely lead to an overall decline in ad spending -- just a shift in how those dollars are spent (to different sites, and to fewer but more expensive ads).
This proposal would burden some forms of speech more than others, so it's worth considering First Amendment issues. One benefit of this proposal over subject-matter-based proposals is that it is content neutral: it applies to a business model. Commercial speech is already subject to greater regulation than other forms of speech, and this is hardly even a regulation -- it is just a decision not to extend a benefit universally (though, of course, that can be a different way of saying the same thing). But if the First Amendment required extending Section 230 to ads whenever it is extended anywhere, the same logic would seem to require extending Section 230 to print media, or possibly even to first-party speech. That cannot be the case. And I have to warn people that if every proposed reform to Section 230 is argued to be unconstitutional, that makes outright repeal of Section 230 all the more likely -- which is not an outcome I'd support.
Fans of Section 230 should like this idea because it forestalls changes they would no doubt consider worse. Critics of Section 230 should like it because it addresses many of the problems they've complained about for years, with few, if any, of the drawbacks of content-based proposals. So I think it's a good idea.
John Bergmayer is Legal Director at Public Knowledge, specializing in telecommunications, media, internet, and intellectual property issues. He advocates for the public interest before courts and policymakers, and works to make sure that all stakeholders -- including ordinary citizens, artists, and technological innovators -- have a say in shaping emerging digital policies.
Filed Under: ads, advertisements, content moderation, liability, section 230