How To Think About Online Ads And Section 230

from the oversimplification-avoidance dept

There's been a lot of consternation about online ads, sometimes even for good reason. The problem is that not all of the criticism is sound or well-directed. Worse, the antipathy towards ad tech, well-founded or not, is coalescing into yet more unwise, and undeserved, attacks on Section 230 and the other expressive discretion the First Amendment protects. If these attacks ultimately succeed, none of the problems currently lamented will be solved, but plenty of new ones will be created.

As always, effectively addressing actual policy challenges first requires a better understanding of what those challenges are. The reality is that online ads raise at least three separate issues: those related to the ad content itself, those related to audience targeting, and those related to audience tracking. Each requires its own policy response, and, as it happens, none of those responses calls for changing Section 230. In fact, to the extent that Section 230 is even relevant, the best policy response will always require keeping it intact.

With regard to ad content, Section 230 applies, and should apply, to the platforms that run advertiser-supplied ads for the same reasons it applies, and should apply, to platforms hosting the other sorts of content created by users. After all, ad content is, in essence, just another form of user-generated content (in fact, sometimes it's exactly like other forms of user content). And, as such, the principles behind having Section 230 apply to platforms hosting user-generated content in general also apply – and need to apply – here.

For one thing, as with ordinary user-generated content, platforms are not going to be able to police all the ad content that may run on their sites. One important benefit of online advertising over offline is that it enables far more entities to advertise, to far larger audiences, than they could ever afford to reach offline. Online ads may therefore sometimes be cheesy, low-budget affairs, but it's ultimately good for the consumer if it's not just large, well-resourced, corporate entities that get to compete for public attention. We should be wary of implementing any policy that might choke off this commercial diversity.

Of course, the flip side to making it possible for many more actors to supply many more ads is that the supply of online ads is nearly infinite, and the volume is simply too great for platforms to scrutinize all of them (or even most of them). Furthermore, even where a platform might be able to examine an ad, it is still unlikely to have the expertise to review it for every legal issue that might arise in every jurisdiction where the ad may appear. Section 230 exists in large part to alleviate these impossible content-policing burdens so that platforms can facilitate the appearance of any content at all.

At the same time, Section 230 also exists to make it possible for platforms to try to police content anyway, to the extent that they can, by making clear that they can't be held liable for those moderation efforts. And that's important if we want to encourage them to help weed out ads of poor quality. We want platforms to do the best they can to get rid of dubious ads, and that means we need to make it legally safe for them to try.

The more we think they should take these steps, the more we need policy to ensure that it's possible for platforms to respond to this market expectation. And that means we need to hold onto Section 230 because it is what affords them this practical ability.

What's more, Section 230 affords platforms all this critical protection regardless of whether they profit from carrying content. The statute does not condition its protection on whether a platform facilitates content in exchange for money, nor is there any constitutional obligation for a platform to provide its services on a charitable basis in order to benefit from the editorial discretion the First Amendment protects. Sure, some platforms pointedly host user content for free, but every platform needs some way of keeping the lights on and the servers running. And if the most effective way to keep their services free for some users to post their content is to charge others for theirs, that is a constitutionally permissible decision for a platform to make.

In fact, it may even be good policy to encourage, as it keeps services available to users who can't afford to pay for access. Charging some users to facilitate their content doesn't inherently make the platform complicit in the ad content's creation, or otherwise responsible for imbuing it with whatever quality is objectionable. Even if an advertiser has paid for algorithmic display priority, Section 230 should still apply, just as it applies to any other algorithmically driven display decision the platform employs.

But on the off chance that the platform did take an active role in creating that objectionable content, Section 230 has never stood in the way of holding the platform responsible. Section 230 simply says that making it possible to post unlawful content is not the same as creating that content; for the platform to be liable as an "information content provider," aka a content creator, it must have done something significantly more to give the content its wrongful essence than merely serve as a vehicle for someone else to express it.

That's true even if the platform allows the advertiser to choose its audience. After all, by then the content has already been created. Audience targeting is something else entirely, but it's also something we should be wary of impinging upon.

There may, of course, be situations where advertisers try to target certain types of ads (e.g., job or housing offers) in harmful ways. And when they do, it may be appropriate to sanction the advertiser for what may amount to illegally discriminatory behavior. But not every such targeting choice is wrongful; sometimes choosing narrow audiences based on protected status may even be beneficial. Yet if we change the law to allow platforms to be held equally liable with advertisers for wrongful targeting choices, we will take away platforms' ability to offer audience targeting for any reason, even good ones, by making it legally unsafe in case an advertiser uses it for bad ones.

Furthermore, doing so would upend all advertising as we've known it, and in a way that offends the First Amendment. There's a reason that certain things are advertised during prime time, or during sports broadcasts, or on late-night TV, just as there's a reason that the ads appearing in the New York Times are not necessarily the same ones running in Field & Stream or Ebony. The Internet didn't suddenly make those choices possible; advertisers have always wanted the most bang for their buck, to reach the people most likely to become their customers as cost-effectively as possible. As a result, they have always made choices about where to place their ads based on the demographics those placements would likely reach. To now say it should be illegal to let advertisers ever make such choices, simply because they may sometimes make them wrongfully, would disrupt decades upon decades of past practice and likely run afoul of the First Amendment, which generally protects the choice of whom to speak to. In fact, it protects that choice regardless of the medium, and there is no principled reason why an online platform should be any less protected than a broadcaster or a printed periodical (especially not the former).

Even if it would be better if advertisers weren't so selective (and that's a fair argument to make, and a fair policy to pursue), it's not an outcome we should use the weight of legal liability to try to force. It won't work, and it impinges on important constitutional freedoms we've come to count on. Rather, if any affirmative policy response to ad tech is warranted, it likely concerns the third constituent part: audience tracking. But even so, any policy response will still need to be a careful one.

There is nothing new about marketers wanting to fully understand their audiences; they have always tried to track them as well as the technology of the day would allow. What's new is how much better they now can do it. And the reality is that some of this tracking ability is intrusive and creepy, especially to the degree it happens without the audience being aware of how much of their behavior is being silently learned by strangers. There is room for policy to at minimum encourage, and potentially even require, such systems to be more transparent about how they learn about their audiences and what they tell others they've learned, and to give those audiences a chance to say no to much of it.
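
To make concrete what this silent learning can look like, here is a minimal sketch of one classic mechanism, the third-party "tracking pixel." It is hypothetical and heavily simplified (the tracker.example domain, the trk_id cookie, and the logging are all invented for illustration; real ad tech stacks are vastly more elaborate), but the basic mechanic is real: any page embedding the pixel quietly reports the visit to a server the reader never knowingly chose to talk to.

    # A hypothetical third-party tracker (a sketch for illustration, not any
    # real ad network's code). Any page embedding
    # <img src="https://tracker.example/px.gif"> makes the visitor's browser
    # call this endpoint, revealing which page they were on (the Referer
    # header) and re-presenting a cookie that links their visits across every
    # unrelated site carrying the same pixel.
    import base64
    import uuid

    from flask import Flask, make_response, request

    app = Flask(__name__)

    # A 1x1 transparent GIF: the classic "tracking pixel."
    PIXEL = base64.b64decode(
        "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
    )

    @app.route("/px.gif")
    def pixel():
        # Recognize a returning browser, or mint a fresh identifier for it.
        visitor = request.cookies.get("trk_id") or str(uuid.uuid4())
        # A real tracker would feed a behavioral profile; this sketch just logs.
        print(f"visitor {visitor} was just on {request.headers.get('Referer')}")
        resp = make_response(PIXEL)
        resp.headers["Content-Type"] = "image/gif"
        # Persist the identifier so the next pixel-carrying site links back.
        resp.set_cookie("trk_id", visitor, max_age=60 * 60 * 24 * 365)
        return resp

This is also why transparency and opt-out mandates are coherent asks: the browser already sees every request it makes, so surfacing those requests, and letting users refuse the ones bound for trackers, is technically straightforward (it is essentially what tools like uBlock Origin and Pi-hole do today).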

But in considering the right regulatory response there are some important caveats. First, take Section 230 off the table. It has nothing to do with this regulatory problem, apart from enabling platforms that may use ad tech to exist at all. You don't fix ad tech by killing the entire Internet; any regulatory solution is only a solution when it targets the actual problem.

Which leads to the next caution: the regulatory schemes we've seen attempted so far (the GDPR, the CCPA, Prop. 24) are, even if well-intentioned, clunky and conflicting, and they carry enough overhead to compromise their effectiveness while imposing their own unintended and chilling costs, including on expression itself (and on more expression than just that of advertisers).

Still, when people complain about online ads, this is frequently the area they are complaining about, and it is worth focused attention to solve. But it is tricky; given how easy it is for all online activity to leave digital footprints, and the many reasons we might want to allow those footprints to be measured and those measurements used (even potentially for advertising), care is required to make sure we don't foreclose the good uses while aiming to suppress the bad. But with the right law, one that recognizes and reasonably reacts to the complexity of this policy challenge, there is an opportunity for a constructive regulatory response to this piece of the online ad tech puzzle. There is no quick fix – and ripping apart the Internet by doing anything to Section 230 is certainly not any kind of fix at all – but if something must be done about online advertising, this is the something that's worth the thoughtful policy attention to try to get right.


Filed Under: advertising, business models, internet, online ads, section 230


Reader Comments



  • Prezal Mallicani, 10 Feb 2021 @ 2:08pm

    Internet blackout time to save section 230

    Come on guys. If you remember, Google, Facebook, and Twitter never blacked out their sites during SOPA. It was smaller tech companies like Reddit and Wikipedia that saved the internet then. We can do it again. Section 230 is a must for the net to function. #internetblackout2021. Let's save Section 230. Do not let government know-nothings BREAK THE INTERNET.

    • Jojo (profile), 10 Feb 2021 @ 5:23pm

      Re: Internet blackout time to save section 230

      Okay mate. I'm just as enthusiastic about stopping the evisceration of Section 230, but at this point you're just resorting to spamming.

  • Christenson, 10 Feb 2021 @ 2:30pm

    You forgot advertising is content and content is advertising

    Seriously...

    Any decent targeting tool allows any advertiser to discriminate against protected audience classes for any purpose. Legal liability for illegal intent has to be on the advertiser. See roommates.com.

    There's huge value in transparency tools
    ... what's the totality of advertising offered and its statistics?

    And value in some non-transparency
    ... I'd love it if my "id/signature" stayed with the website I thought I visited and didn't go out to every advertiser that might put up an ad on the site

    Odd, that's kind of how Techdirt forums work!

  • Anonymous Coward, 11 Feb 2021 @ 4:03am

    I'm sure there will always be enough ads in the world to annoy everyone, regardless of how selective ("wrongly" or otherwise) some advertisers are.

    • Anonymous Coward, 11 Feb 2021 @ 4:57am

      Re:

      But then there are uBlock Origin and Pi-hole, to waste their effort in making the selection.

  • crade (profile), 11 Feb 2021 @ 2:48pm

    ok.. so what about the cases that aren't so sunshiny?

    As you point out, ads can be very much divorced from platforms, but that's not a rule, even if the platform isn't creating the ads.

    What happens when the ones creating the harmful ads are out of reach of your legal system?
    What happens when the platform isn't creating harmful content, but is encouraging it by selecting an ad system that is both outside the reach of your legal system and known for ads with that sort of harmful content?

    What if, instead of signing up for Google Ads, the platform signs up for malware_ads_r_us, where malware_ads_r_us is highly likely (though possibly not guaranteed) to serve content you want to protect people from, is highly profitable for a site to use as an ad provider, and is also outside your jurisdiction to target directly with repercussions?

    Without creating any of the content, a platform can, if it chooses, have a lot more influence over what ads appear than it really can over what its users post. Yes, cases like Google Ads exist where the platform isn't as involved in deciding which ads show up, but that isn't the only possibility.
