NY Senator Proposes Ridiculously Unconstitutional Social Media Law That Is The Mirror Opposite Of Equally Unconstitutional Laws In Florida & Texas
from the just-stop dept
We've joked in the past about how Republicans hate Section 230 for letting websites moderate too much content, while Democrats hate it for letting websites leave up too much. Of course, the reality is that both sides are mad about content moderation (at opposite extremes) because both want to control the internet in a manner that helps "their team." But both approaches involve unconstitutional attempts to interfere with 1st Amendment rights: for Republicans, it's often the compelled hosting of speech; for Democrats, it's often the compelled deletion of speech. Both are unconstitutional.
On the Republican side, Florida and Texas have already signed content moderation bills into law -- and both laws have been blocked in court for being wholly unconstitutional.
We've already heard that some other Republican-controlled states have shelved plans for similar bills, realizing that all they'd be doing was setting taxpayer money on fire.
Unfortunately, it looks like the message has not made its way to Democratic-controlled states. California has been toying with unconstitutional content moderation bills, and now NY has one as well. Senator Brad Hoylman -- who got his law degree from Harvard, where presumably they teach about the 1st Amendment -- has proudly introduced a hellishly unconstitutional social media bill. Hoylman announces in his press release that the bill will "hold tech companies accountable for promoting vaccine misinformation and hate speech."
Have you noticed the problem with the bill already? I knew you could. Whether we like it or not, the 1st Amendment protects both vaccine misinformation and hate speech. It is unconstitutional to punish anyone for that speech, and it's even more ridiculous to punish websites that merely host that content but had nothing to do with creating it.
Believe it or not, the actual details of the bill are even worse than Hoylman's description of it. The operative clauses are outlandishly bad.
Prohibited activities. No person, by conduct either unlawful in itself or unreasonable under all the circumstances, shall knowingly or recklessly create, maintain or contribute to a condition in New York State that endangers the safety or health of the public through the promotion of content, including through the use of algorithms or other automated systems that prioritize content by a method other than solely by time and date such content was created, the person knows or reasonably should know:
1. Advocates for the use of force, is directed to inciting or producing imminent lawless action, and is likely to incite or produce such action;
2. Advocates for self-harm, is directed to inciting or producing imminent self-harm, and is likely to incite or produce such action; or
3. Includes a false statement of fact or fraudulent medical theory that is likely to endanger the safety or health of the public.
This is so dumb that it deserves to be broken down bit by bit. First off, just about any kind of content can conceivably "endanger the safety or health of the public." That's ridiculously broad. I saw an advertisement for McDonald's today on social media. Does that endanger the safety or health of the public? It sure could. Second, the bill bars algorithms or other automated systems that prioritize content by any method "other than solely by time and date such content was created," meaning that search is right out. Want the most relevant search result for the medical issues you're having? I'm sorry, sir, that's not allowed in New York, as a result might endanger your health and safety.
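To make concrete what that restriction would do, here's a minimal sketch (hypothetical posts and a made-up relevance score, nothing from the bill itself) contrasting the only ordering the bill would allow -- strictly by time and date -- with the kind of relevance ranking that ordinary search depends on:

```python
from datetime import datetime

# Hypothetical posts with a made-up relevance score for the query
# "covid vaccine guidance" -- purely illustrative, not from the bill.
posts = [
    {"text": "CDC guidance on COVID-19 vaccines",  "created": datetime(2021, 10, 5), "relevance": 0.9},
    {"text": "My cat knocked over a plant",        "created": datetime(2021, 11, 3), "relevance": 0.0},
    {"text": "Where can I get a flu shot nearby?", "created": datetime(2021, 11, 1), "relevance": 0.4},
]

# The only ordering the bill would permit: strictly by time and date, newest first.
chronological = sorted(posts, key=lambda p: p["created"], reverse=True)

# What a search engine actually needs to do: rank results by how well they
# match the user's query (here faked with a precomputed score).
by_relevance = sorted(posts, key=lambda p: p["relevance"], reverse=True)

print([p["text"] for p in chronological])  # cat post comes first: useless for the searcher
print([p["text"] for p in by_relevance])   # CDC guidance comes first: what the searcher wanted
```

Under the bill's text, only the first ordering is safe; the second -- the one that actually answers the user's question -- is exactly the kind of prioritization the bill targets.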
But it gets worse. The line that says...
Advocates for the use of force, is directed to inciting or producing imminent lawless action, and is likely to incite or produce such action
... is a weird one because clearly someone somewhere thought that this magical incantation might make this constitutional. The "directed to inciting or producing imminent lawless action, and is likely to incite or produce such action" language is -- verbatim -- the Brandenburg test for a very, very limited exception to the 1st Amendment. But do you notice the issue? Such speech is already unprotected by the 1st Amendment. Leaving aside how astoundingly little content meets this test (especially the "imminent lawless action" part), this part of the law, at best, seems to argue that "unconstitutional speech is unconstitutional." That's... not helpful.
The second point is even weirder. It more or less tries to mirror the Brandenburg standard, but with a few not-so-subtle changes:
Advocates for self-harm, is directed to inciting or producing imminent self-harm, and is likely to incite or produce such action
Which, nice try, but just because you mimicked the "inciting or producing imminent" part doesn't let you get around the fact that discussion of "self-harm" in most cases remains constitutionally protected. Brandenburg only carves out incitement to imminent lawless action, and self-harm generally isn't unlawful, so there's a huge 1st Amendment problem here.
But the really problematic part is point 3:
Includes a false statement of fact or fraudulent medical theory that is likely to endanger the safety or health of the public.
Ooooooooooof. That's bad. First of all, most "false statements of fact" and many "fraudulent medical theories" do in fact remain protected under the 1st Amendment. And, last I checked, New York is still bound by the 1st Amendment. Also, this is dumber than dumb. Remember, we're in the middle of a pandemic and the science is changing rapidly. Lots of things we thought were clear at first turned out to be very different -- don't wear masks / wear masks, for example.
In fact, this prong most closely resembles how China first handled reports of COVID-19. Early on in the pandemic, we wrote about how China's laws against medical misinformation very likely helped COVID-19 spread much faster, because the Chinese government silenced Dr. Li Wenliang, one of the first doctors in China to call attention to the new disease. The police showed up at Dr. Li's home, told him he had violated the law by "spreading untruthful information online," and forced him to take down his warnings about COVID-19.
And rather than realize just how problematic that was, Senator Hoylman wants to make it New York's law!
It gets worse. The law, like most laws, has definitions. And the definitions are a mess. It borrows an existing NY penal law definition of "recklessly" that would require anyone enforcing the law to establish the state of mind of... algorithms? Again, the bill says that if an algorithm "recklessly" creates, maintains, or contributes to such banned information, it can violate the law. But the recklessness standard applies to a "person" who is "aware of and consciously disregards a substantial and unjustifiable risk that such result will occur." Good luck proving that with an algorithm.
Then we get to the enforcement provision. Incredibly, it makes this much, much worse.
Enforcement. Whenever there shall be a violation of this article, the attorney general, in the name of the people of the state of New York, or a city corporation counsel on behalf of the locality, may bring an action in the Supreme Court or federal district court to enjoin and restrain such violations and to obtain restitution and damages.
Private right of action. Any person, firm, corporation or association that has been damaged as a result of a person's acts or omissions in violation of this article shall be entitled to bring an action for recovery of damages or to enforce this article in the Supreme Court or federal district court.
The government enforcing a speech code is already problematic -- but then enabling this private right of action is just ridiculous. Think of how many wasteful, stupid lawsuits anti-vaxxers and anti-maskers would file, within seconds of this law going into effect, against people online advocating for vaccines, masks, and other COVID-preventative measures.
This bill is so blatantly unconstitutional and problematic that it's not even funny. And that's not even getting to the simple fact that Section 230 pre-empts any such state law, as we saw in Texas and Florida. Hoylman, laughably, suggests in his press release that he can ignore the pre-emption issue, saying:
The conscious decision to elevate certain content is a separate, affirmative act from the mere hosting of information and therefore not contemplated by the protections of Section 230 of the Communications Decency Act.
Except that's wrong. Section 230 specifically protects all moderation decisions, and that includes elevating content. That's why Section 230 protects search results. And, as Jeff Kosseff rightly notes, the 2nd Circuit (which covers NY) already addressed this exact claim in the Force v. Facebook case (the ridiculous case that attempted to hold Facebook liable for terrorism that impacted the plaintiff, because some unrelated terrorists also used Facebook). There the court said, pretty clearly:
We disagree with plaintiffs' contention that Facebook's use of algorithms renders it a non-publisher. First, we find no basis in the ordinary meaning of "publisher," the other text of Section 230, or decisions interpreting Section 230, for concluding that an interactive computer service is not the "publisher" of third-party information when it uses tools such as algorithms that are designed to match that information with a consumer's interests. Cf., e.g., Roommates.Com, 521 F.3d at 1172 (recognizing that Matchmaker.com website, which "provided neutral tools specifically designed to match romantic partners depending on their voluntary inputs," was immune under Section 230(c)(1)) (citing Carafano, Inc., 339 F.3d 1119); Carafano, 339 F.3d at 1124–25 ("Matchmaker's decision to structure the information provided by users allows the company to offer additional features, such as 'matching' profiles with similar characteristics ..., [and such features] [a]rguably promote[] the expressed Congressional policy 'to promote the continued development of the Internet and other interactive computer services.' 47 U.S.C. § 230(b)(1)."); Herrick v. Grindr, LLC, 765 F. App'x 586, 591 (2d Cir. 2019) (summary order) ("To the extent that [plaintiff's claims] are premised on Grindr's [user-profile] matching and geolocation features, they are likewise barred ....").
So... the law clearly violates the 1st Amendment, is pre-empted by Section 230, and, if it actually went into effect, would be both wildly abused and dangerous.
What's it got going for it?
Well, as Kosseff also points out, if it passed, and somehow the Texas/Florida laws were brought back from the dead, social media websites could get in trouble in one state for leaving up the very same content they'd get in trouble for taking down in another. And, at least for those of us who write about content moderation, that will be amusing to cover. But, beyond that, this bill is complete garbage. It's the mirror image of the garbage Florida and Texas passed: just as dumb, just as dangerous, and just as unconstitutional, only at the other end of the spectrum.
Filed Under: 1st amendment, brad hoylman, content moderation, democrats, florida, misinformation, new york, republicans, section 230, social media, texas