FTC's Latest Fine Of YouTube Over COPPA Violations Shows That COPPA And Section 230 Are On A Collision Course

from the this-could-be-an-issue dept

As you probably heard, earlier this week the FTC fined Google/YouTube for alleged COPPA violations regarding how it collected data on kids. You can read the details of the complaint and proposed settlement (which still needs to be approved by a judge, though that's mostly a formality). For the most part, people responded to this the same way they responded to the FTC's big Facebook fine: basically everyone hates it, though for different reasons. Most people hate it because they think it's a slap on the wrist that won't stop such practices and just isn't painful enough for YouTube to care. On the flip side, some people hate it because it will force YouTube to change its offerings for no good reason at all, in a manner that might actually lead to more privacy risks and less content for children.

They might all be right. As I wrote about the Facebook fine and other issues related to privacy, almost every attempt to regulate privacy tends to make things worse, in part because people keep misunderstanding how privacy works. Also, most of the complaints that this "isn't enough" are really directed not at the FTC but at Congress, because the FTC can only do so much under its current mandate.

Since this fine focused on COPPA violations, I'll also note that COPPA has always been a ridiculous law that makes no real sense -- beyond letting politicians and bureaucrats pretend they're "protecting the children" -- while creating massive unintended consequences that do nothing to protect children or privacy, and do quite a bit to make the internet a worse place.

But... I'm not even going to rehash all of that today. Feel free to dig into the past links yourselves. What's interesting to me is something specific to this settlement, as noted by former FCC and Senate staffer (and current Princeton professor), Jonathan Mayer: the FTC, in this decision, appears to have significantly changed its interpretation of COPPA, and done so in a manner that is going to set up something of a clash with Section 230. What happened is a little bit subtle, so it requires some background.

The key feature of COPPA -- and the one you're probably aware of whether or not you know it -- is that it has specific rules if a site is targeting children under the age of 13. This is why tons of sites say that you need to be over 13 to use them (including us) -- in an attempt to avoid dealing with many of the more insane parts of COPPA compliance. Of course, in practice, this just means that many people lie. Indeed, as danah boyd famously wrote nearly a decade ago, COPPA seems to be training parents to help their kids lie online -- which is kinda dumb.

Of course, the key point under COPPA is not actually the "under 13" users, but rather whether or not a website or online service is "directed to children under 13 years of age." Indeed, in talking about it with various lawyers, we've been told that most sites (including our own) shouldn't even worry about COPPA, because it's obvious that such sites aren't "directed to children" as a whole, and therefore even if a few kids sneak in, they still wouldn't be violating COPPA. In other words, the way the world has mostly interpreted COPPA is that it's not about whether any particular piece of content is aimed at children, but whether the larger site itself is aimed at children.

This new FTC settlement agreement changes that.

Basically, the FTC has decided that, under COPPA, it no longer needs to view the service as a whole, but can divide it up into discrete chunks, and determine if any of those chunks are targeted at kids. To be fair, this is well within the law. The text of COPPA clearly says in definitional section (10)(A)(ii) that "a website or online service directed to children" includes "that portion of a commercial website or online service that is targeted to children." It's just that, historically, most of the focus has been on the overall website -- or something that is more distinctly a "portion" rather than an individual user's channel.

Except that, under the law, it seems it should be the channel operator who is held liable for violations of COPPA on that channel, rather than the larger platform. In fact, back in 2013, the last time the FTC announced rules around COPPA, it explicitly stated that it would apply COPPA to the specific content provider whose content was directed at children, and not to the general platform they used. This text is directly from that FTC rule, which went through years of public review and comment before being agreed upon:

... the Commission never intended the language describing ‘‘on whose behalf’’ to encompass platforms, such as Google Play or the App Store, when such stores merely offer the public access to someone else’s child-directed content. In these instances, the Commission meant the language to cover only those entities that designed and controlled the content...

But that's not what the FTC is doing here. And so it appears that the FTC is changing the definition of things, but without the required comment and rulemaking process. Here, the FTC admits that channels are "operators" but then does a bit of a two-step to say that it's YouTube who is liable.

YouTube hosts numerous channels that are “directed to children” under the COPPA Rule. Pursuant to Section 312.2 of the COPPA Rule, the determination of whether a website or online service is directed to children depends on factors such as the subject matter, visual content, language, and use of animated characters or child-oriented activities and incentives. An assessment of these factors demonstrates that numerous channels on YouTube have content directed to children under the age of 13, including those described below in Paragraphs 29-40. Many of these channels self-identify as being for children as they specifically state, for example in the “About” section of their YouTube channel webpage or in communications with Defendants, that they are intended for children. In addition, many of the channels include other indicia of child-directed content, such as the use of animated characters and/or depictions of children playing with toys and engaging in other child-oriented activities. Moreover, Defendants’ automated system selected content from each of the channels described in Paragraphs 29-40 to appear in YouTube Kids, and in many cases, Defendants manually curated content from these channels to feature on the YouTube Kids home canvas.

Indeed, part of the evidence that the FTC relies on is the fact that YouTube "rates" certain channels for kids.

In addition to marketing YouTube as a top destination for kids, Defendants have a content rating system that categorizes content into age groups and includes categories for children under 13 years old. In order to align with content policies for advertising, Defendants rate all videos uploaded to YouTube, as well as the channels as a whole. Defendants assign each channel and video a rating of Y (generally intended for ages 0-7); G (intended for any age); PG (generally intended for ages 10+); Teen (generally intended for ages 13+); MA (generally intended for ages 16+); and X (generally intended for ages 18+). Defendants assign these ratings through both automated and manual review. Previously, Defendants also used a classification for certain videos shown on YouTube as “Made for Kids.”

That's a key point the FTC uses to argue that YouTube knows that its site is "directed at" children. But here's the problem with that. Section 230 of the Communications Decency Act, specifically the often forgotten (or ignored) (c)(2), is explicit that no provider shall be held liable for any moderation actions, including "any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable." One way providers do that is through content labeling policies, such as the ones YouTube used and the FTC described.

So here, YouTube is being found partially liable because of its content ratings, which is being shown as evidence that it's covered by COPPA. But, CDA 230 makes it clear that there can't be any such liability from such a rating system.

This won't get challenged in court (here) since Google/YouTube have agreed to settle, but it certainly does present a big potential future battle. And, frankly, given the way that some courts have been willing to twist and bend CDA 230 lately, combined with the general "for the children!" rhetoric, I have very little confidence that CDA 230 would win.



Filed Under: cda 230, coppa, ftc, moderation, privacy, section 230
Companies: google, youtube


Reader Comments



  • Stephen T. Stone (profile), 6 Sep 2019 @ 11:14am

    given the way that some courts have been willing to twist and bend CDA 230 lately, combined with the general "for the children!" rhetoric, I have very little confidence that CDA 230 would win

    Makes me wonder how many sites/comments sections would go dark within 24 hours after a ruling that takes down or severely limits CDA 230.


    • Anonymous Coward, 6 Sep 2019 @ 11:21am

      Re:

      Take one look at how sites responded to GDPR to get an idea.


    • anonymous, 6 Sep 2019 @ 12:05pm

      Re:

      Try to post an anonymous comment on /. and let me know how that goes.


      • Anonymous Coward, 6 Sep 2019 @ 4:59pm

        Re: Re:

        "/." on its own tells us nothing.

        Since you asked, going fine so far.


        • Anonymous Coward, 9 Sep 2019 @ 8:39am

          Re: Re: Re:

          "/." on its own tells us nothing.

          Actually it does. It's a direct reference to a specific site. Perhaps a bit unconventional and obscure to the public, but he has stated the exact site he is asking you to try posting anonymously on.

          To wit, he wrote a "/" and a ".", or, in English, a "slash" and a "dot". Pronounce them in the sentence and you get:

          Try to post an anonymous comment on slash dot

          This is obviously a reference to slashdot.org, which does not currently allow anonymous comments.


  • That One Guy (profile), 6 Sep 2019 @ 1:45pm

    Probably shouldn't open that can...

    So here, YouTube is being found partially liable because of its content ratings, which is being shown as evidence that it's covered by COPPA. But, CDA 230 makes it clear that there can't be any such liability from such a rating system.

    This won't get challenged in court (here) since Google/YouTube have agreed to settle, but it certainly does present a big potential future battle. And, frankly, given the way that some courts have been willing to twist and bend CDA 230 lately, combined with the general "for the children!" rhetoric, I have very little confidence that CDA 230 would win.

    Funny thing about gutting 230 'for the children': it would be insanely counter-productive. If providing a ratings system opens a platform up to liability, then they've basically got two options: ditch the rating system, or ditch the content. And as the content is what makes the platform worthwhile, guess which is likely to see the axe first?

    For those complaining about how sites like YT aren't 'doing enough' to be kid friendly: wait until the ratings system goes up in smoke, and either the parents themselves have to actually do the work of going through videos to see what is and is not kid friendly, or accept that letting their kids on the site could result in them seeing some very not 'kid friendly' content.


  • urza9814 (profile), 6 Sep 2019 @ 2:01pm

    ...what?

    How exactly do you get from "YouTube specifically is paying people to create content and pages on their website that are explicitly marketed towards children" to "YouTube is not intentionally marketing anything to children"? Am I missing some crucial part of the argument here?

    YouTube Kids is not moderation, it's a service. They aren't purging offensive content, they're creating a curated list of non-offensive content. Those are very different actions. They're also marketing this service towards parents, claiming that it is explicitly designed for children. That's not moderation, it's MARKETING. It's YouTube's own speech saying these things, not users or user content.


  • aerinai (profile), 6 Sep 2019 @ 2:14pm

    My head hurts

    So maybe I've missed some of this nuance... Wouldn't the account that is actively viewing the content be the account that is considered 'over 13', 'under 13' or unknown (anonymous / no account)? I literally only watch baby shows on YouTube for my kid. I don't really YouTube much, but I don't log out of Chrome, log back in as my 1 year old... that sounds insane. So I'm assuming that they are targeting me with ads in this case. I get it. Awesome.

    So, let's say my kid is 10. He creates his own Google account. He is under 13. He watches another video... COPPA doesn't say you can't advertise, it just says you can't use his 'private' data to target, correct? Generalized advertisements (like you see on Disney Channel, Cartoon Network, etc.) are still allowed.

    Now, let's say my kid jumps on my computer and stays logged into my account. I don't see how any sane person could 'blame Google' for the actions of the user (in this case my kid) for Google sending him targeted ads (albeit based primarily on my data). Hell, it probably thinks i'm totally into Power Rangers and Powder Puff Girls or whatever happens to be popular (he is using my account after all).

    So... while I agree that this does seem like an unnecessary expansion of the law, I don't know why they had to expand it in the first place. Either Google knowingly targeted children's accounts (content is irrelevant), or they didn't.

    If Google saw an account flagged as sub-13 and ignored the law, that is kind of on them.

    If they allow 'anonymous' browsing; I'd assume that their TOS would cover them (you must be 13 or older).

    If you have an adult's account and are actually a child... this seems like yet another example of intermediary liability... which is dumb. If you are mortally upset, then the government should sue the users for... using a service?

    What am I missing?!


  • bob, 6 Sep 2019 @ 3:06pm

    Right target, wrong rationale.

    I agree the channels, not the service, are responsible for the content they provide. However, the control of targeted ads is not managed by the channels; it's managed by the service. A channel (though I could be wrong) can decide to monetize and such, but unless they are marketing within the video itself, they have no control over what YT/Google shows for ads.
    So I can see why, in this case, people would go after the service and not the channels individually. However, their justification via the ratings system is bogus due to CDA 230.


  • Anonymous Coward, 6 Sep 2019 @ 4:22pm

    Section 230 of the Communications Decency Act, specifically the often forgotten (or ignored) (c)(2) is explicit that no provider shall be held liable for any moderation actions, including "any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable."

    True, but Section 230 is intended to address certain specific issues with content, such as the endless defamation claims which are used by the unscrupulous to silence critics. It's not a panacea.

    There are some sorts of content issues (such as the failure to warn of violent criminals on a site, which Internet Brands learned the hard way with their "model mayhem" site) which aren't protected by 230.


    • Stephen T. Stone (profile), 6 Sep 2019 @ 6:33pm

      Section 230 is intended to address certain specific issues with content

      47 U.S.C. § 230 doesn’t apply to content so much as it applies to who posts it. It places legal liability on those most responsible for the creation and publication of that content; that may or may not include whoever owns/operates the platform on which that content is published.

      But if you don’t believe me, read what Chris Cox said on the Congressional record about the intent and purpose of 230:

      We want to encourage people like Prodigy, like CompuServe, like America Online, like the new Microsoft network, to do everything possible for us, the customer, to help us control, at the portals of our computer, at the front door of our house, what comes in and what our children see.

      [O]ur amendment will do two basic things: First, it will protect computer Good Samaritans, online service providers, anyone who provides a front end to the Internet, let us say, who takes steps to screen indecency and offensive material for their customers. It will protect them from taking on liability such as occurred in the Prodigy case in New York that they should not face for helping us and for helping us solve this problem. Second, it will establish as the policy of the United States that we do not wish to have content regulation by the Federal Government of what is on the Internet, that we do not wish to have a Federal Computer Commission with an army of bureaucrats regulating the Internet because frankly the Internet has grown up to be what it is without that kind of help from the Government. In this fashion we can encourage what is right now the most energetic technological revolution that any of us has ever witnessed. We can make it better. We can make sure that it operates more quickly to solve our problem of keeping pornography away from our kids, keeping offensive material away from our kids, and I am very excited about it.


    • Anonymous Coward, 6 Sep 2019 @ 11:46pm

      Re:

      Hi, Herrick. How's the mailing list scam coming along?


    • Anonymous Coward, 7 Sep 2019 @ 6:59pm

      Re:

      You going to bring up the "IP addresses would prove who threatened the President" trope again, you old, impotent fuckwit?


    • Anonymous Coward, 9 Sep 2019 @ 8:43am

      Re:

      True, but Section 230 is intended to address certain specific issues with content, such as the endless defamation claims which are used by the unscrupulous to silence critics. It's not a panacea.

      You are objectively and factually wrong. That is one of its intended uses, it is far from the only one.

      There are some sorts of content issues (such as the failure to warn of violent criminals on a site, which Internet Brands learned the hard way with their "model mayhem" site) which aren't protected by 230.

      Which are all covered under separate laws. Indeed, the intro to that section explicitly states it does not supersede other federal laws. If it's not covered under a separate law, then it falls under 230. Tada! Panacea!


  • ECA (profile), 6 Sep 2019 @ 8:39pm

    Again... I love it..

    Privacy of children??
    HOW IN HELL CAN YT/GOOGLE know everything??

    Everyone demands privacy, and the only way to do that is NOT to pay attention to the data..
    Don't sort it, don't do this/that or the other..
    Treat it all the same, to Show that you are keeping it private.

    Are you REALLY going to enter your Childs info on the net??
    REALLY???
    When almost any site on the net can read it off of your machine, while you are wandering the net??

    GET A HINT.


  • Anonymous Coward, 7 Sep 2019 @ 7:40am

    'COPPA And Section 230 Are On A Collision Course'

    and you think that's unintended? yeah, right!!


