Posted on Techdirt - 25 February 2022 @ 12:13pm
from the will-you-look-at-that dept
The story of Missouri's Department of Elementary and Secondary Education (DESE) leaking the Social Security Numbers of hundreds of thousands of current and former teachers and administrators could have been a relatively small story of yet another botched government technology implementation -- there are plenty of those every year. But then Missouri Governor Mike Parson insisted that the reporter who discovered and reported the flaw was a hacker and demanded he be prosecuted. After a months-long investigation, prosecutors declined to press charges, but Parson doubled down and insisted that he would "protect state data and prevent unauthorized hacks."
You had to figure another shoe was going to drop, and here it is. As Brian Krebs notes, it has now come out that it was actually the Governor's own IT team that was in charge of the website that leaked the data. That is, even though it was the DESE website, it was controlled by the Governor's own IT team. This is from the now-released Missouri Highway Patrol investigation document. As Krebs summarizes:
The Missouri Highway Patrol report includes an interview with Mallory McGowin, the chief communications officer for the state’s Department of Elementary and Secondary Education (DESE). McGowin told police the website weakness actually exposed 576,000 teacher Social Security numbers, and the data would have been publicly exposed for a decade.
McGowin also said the DESE’s website was developed and maintained by the Office of Administration’s Information Technology Services Division (ITSD) — which the governor’s office controls directly.
“I asked Mrs. McGowin if I was correct in saying the website was for DESE but it was maintained by ITSD, and she indicated that was correct,” the Highway Patrol investigator wrote. “I asked her if the ITSD was within the Office of Administration, or if DESE had their own information technology section, and she indicated it was within the Office of Administration. She stated in 2009, policy was changed to move all information technology services to the Office of Administration.”
Now, it's important to note that the massive, mind-bogglingly bad security flaw that exposed all those SSNs in the source code of publicly available websites was coded long before Parson was the governor, but it's still his IT team that was on the hook here. And perhaps that explains his nonsensical reaction to all of this?
For what it's worth, the report also goes into greater detail about just how dumb this vulnerability was:
Ms. Keep and Mr. Durnow told me once on the screen with this specific data about any teacher listed in the DESE system, if a user of the webpage selected to view the Hyper Text Markup Language (HTML) source code, they were allowed to see additional data available to the webpage, but not necessarily displayed to the typical end-user. This HTML source code included data about the selected teacher which was Base64 encoded. There was information about other teachers, who were within the same district as the selected teacher, on this same page; however, the data about these other teachers was encrypted.
Ms. Keep said the data which was encoded should have been encrypted. Ms. Keep told me Mr. Durnow was reworking the web application to encrypt the data prior to putting the web application back online for the public. Ms. Keep told me the DESE application was about 10 years old, and the fact the data was only encoded and not encrypted had never been noticed before.
This explains why Parson kept insisting that it wasn't simply "view source" that was the issue here, and that it was hacking because it was "decoded." But Base64 decoding isn't hacking. If it was, anyone figuring out what this says would be a "hacker."
TWlrZSBQYXJzb24gaXMgYSB2ZXJ5IGJhZCBnb3Zlcm5vciB3aG8gYmVpZXZlcyB0aGF0IGhpcyBvd24gSVQgdGVhbSdzIHZlcnkgYmFkIGNvZGluZyBwcmFjdGljZXMgc2hvdWxkIG5vdCBiZSBibGFtZWQsIGFuZCBpbnN0ZWFkIHRoYXQgaGUgY2FuIGF0dGFjayBqb3VybmFsaXN0cyB3aG8gZXRoaWNhbGx5IGRpc2Nsb3NlZCB0aGUgdnVsbmVyYWJpbGl0eSBhcyAiaGFja2VycyIgcmF0aGVyIHRoYW4gdGFrZSBldmVuIHRoZSBzbGlnaHRlc3QgYml0IG9mIHJlc3BvbnNpYmlsaXR5Lg==
That's not hacking. That's just looking at what's there and knowing how to read it. Not understanding the difference between encoding and encrypting is the kind of thing that is maybe forgivable for a non-techie in a confused moment, but Parson has people around him who could surely explain it -- the same people who clearly explained it to the Highway Patrol investigators. But instead, he still insists it was hacking and is still making journalist Josh Renaud's life a living hell over all this nonsense.
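To underline just how little "decoding" is involved, here's a minimal Python sketch (the SSN below is a made-up placeholder, and the encryption half uses the third-party cryptography package purely for contrast). Base64 "encoding" is undone by a single standard-library call that anyone can run; actual encryption is unreadable without the key -- which is exactly the distinction the state's IT team missed.

```python
import base64
from cryptography.fernet import Fernet  # third-party package (pip install cryptography), used only for contrast

ssn = "123-45-6789"  # placeholder value, not real data

# "Encoding": reversible by anyone, no secret involved -- this is what the DESE site did.
encoded = base64.b64encode(ssn.encode()).decode()
print(encoded)                                  # MTIzLTQ1LTY3ODk=
print(base64.b64decode(encoded).decode())       # 123-45-6789 -- one call, no "hacking" required

# "Encryption": unreadable without the key -- this is what should have been done
# (if the data needed to be sent to the browser at all).
key = Fernet.generate_key()
token = Fernet(key).encrypt(ssn.encode())
print(token)                                    # opaque ciphertext
print(Fernet(key).decrypt(token).decode())      # readable only by someone holding the key
```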
The investigation also confirms, exactly as we had been saying all along, that Renaud and the St. Louis Post-Dispatch did everything in the most ethical way possible. They found the vulnerability, checked to make sure it was real, confirmed it with an expert, then notified DESE about it, including the details of the vulnerability, and while Renaud noted that the newspaper was going to run a story about it, he made it clear that they wanted the vulnerability locked down before the story would run.
So, once again, Mike Parson looks incredibly ignorant, and completely unwilling to take responsibility. And the more he does so, the more this story continues to receive attention.
Posted on Techdirt - 25 February 2022 @ 9:30am
from the that's-not-what-common-carriage-is-for dept
There's been an unfortunate movement in the US over the last few years to try to argue that social media should be considered "common carriers." Mostly this is coming (somewhat ironically) from the Trumpian wing of grifting victims, who are trying to force websites to carry the speech of trolls and extremists by claiming (against all actual evidence) that there's an "anti-conservative bias" in content moderation on various major websites.
This leads to things like Ohio's bizarre lawsuit that just outright declares Google a "common carrier" and seems to argue that the company cannot "discriminate" in its search results, even though the entire point of search is to rank (i.e., discriminate) between different potential search results to show you which ones it thinks best answer your query.
There is even some movement among (mostly Republican) lawmakers to pass laws that declare Facebook/Google/Twitter to be "common carriers." There's some irony here, in that these very same Republicans spent years demonizing the idea of "common carriers" when the net neutrality debate was happening, and insisting that the entire concept of "common carrier" was socialism. Amusingly (if it weren't so dumb), Republican-proposed bills declaring social media sites common carriers often explicitly carve out broadband providers from the definitions, as if to prove that this is not about any actual principles, and 100% about using the law to punish companies they think don't share their ideological beliefs.
Unfortunately, beyond grandstanding politicians, even some academics are starting to suggest that social media should be treated like common carriers. Beyond the fact that this would almost certainly come back to bite conservatives down the line, there's an even better reason why it makes no sense at all to make social media websites common carriers.
They don't fit any of the underlying characteristics that made common carrier designations necessary in the first place.
While there were other precursor laws having to do with the requirement to offer service if you were engaged in "public callings," the concept of "common carriers" is literally tied up in its name: the "carrier" part is important. Common carriers have been about transporting things from point A to point B. Going back to the first use of the direct concept of a must "carry" rule, there's the 1701 case in England of Lane v. Cotton, regarding the failure to deliver mail by the postal service. The court ruled that a postal service should be considered a common carrier, and that there was a legitimate claim "[a]gainst a carrier refusing to carry goods when he has convenience, his wagon not being full."
In the US, the concept of the common carrier comes from the railroads, and the Interstate Commerce Act of 1887, and then to communications services with the Communications Act of 1934, and the establishment of an important bifurcation between information services (not common carriers) and telecommunications services which were common carriers.
As you look over time, you'll notice a few important common traits in all historical common carriers:
1. Delivering something (people, cargo, data) from point A to point B
2. Offering a commoditized service (often involving a natural monopoly provider)
In some ways, point (2) is a function of point (1). The delivery from point A to point B is the key point here. Railroads, telegraphs, telephone systems are all in that simple business -- taking people, cargo, data (voice) from point A to point B -- and then having no further ongoing relationship with you.
That's just not the case for social media. Social media, from the very beginning, was about hosting content that you put up. It's not transient, it's perpetual. That, alone, makes a huge difference, especially with regards to the 1st Amendment's freedom of association. It's one thing to say you have to transmit someone's speech from here to there and then have no more to do with it, but it's something else entirely to say "you must host this person's speech forever."
Second, social media is, in no way, a commodified service. Facebook is a very different service from Twitter, as it is from YouTube, as it is from TikTok, as it is from Reddit. They're not interchangeable, nor are they natural monopolies, in which massive capital outlays are required upfront to build redundant architecture. New social networks can be set up without having to install massive infrastructure, and they can be extremely differentiated from every other social network. That's not true of traditional common carriers. Getting from New York to Boston by train is getting from New York to Boston by train.
Finally, even if you did twist yourself around and ignore all of that, you're still ignoring that even common carriers are able to refuse service to those who violate the rules (which is the reason any social media site bans a user -- for rule violations). Historically, common carriers could refuse carriage for someone who did not pay, but also if the goods were deemed "dangerous" or not properly packed. In other words, even a common carrier is able to deny service to someone who does not follow the terms of service.
So, social media does not meet any of the core components of a common carrier. It is hosting content perpetually, not merely transporting data from one point to another in a transient fashion. It is not a commodity service, but often highly differentiated in a world with many different competitors offering very differentiated services. It is not a natural monopoly, in which the high cost of infrastructure buildout would be inefficient for other entrants in the market. And, finally, even if, somehow, you ignored all of that, declaring a social media site a common carrier wouldn't change that they are allowed to ban or otherwise moderate users who fail to abide by the terms of service for the site.
So can we just stop talking about how social media websites should be declared common carriers? It's never made any sense at all.
Posted on Techdirt - 24 February 2022 @ 3:33pm
from the send-in-the-experts dept
In November, we wrote about a very bizarre case in which someone was using a highly questionable copyright claim to try to identify an anonymous Twitter user with the username @CallMeMoneyBags. The account had made fun of various rich people, including a hedge fund billionaire named Brian Sheth. In some of those tweets, Money Bags posted images that appeared to be standard social media type images of a woman, and the account claimed that she was Sheth's mistress. Some time later, an operation called Bayside Advisory LLC, which has very little other presence in the world, registered the copyright on those images, and sent a DMCA 512(h) subpoena to Twitter, seeking to identify the user.
The obvious suspicion was that Sheth was somehow involved and was seeking to identify his critic, though Bayside's lawyer has fairly strenuously denied Sheth having any involvement.
Either way, Twitter stood up for the user, noting that this seemed to be an abuse of copyright law to identify someone for non-copyright reasons, that the use of the images was almost certainly fair use, and that the 1st Amendment should protect Money Bags' identity from being shared. The judge -- somewhat oddly -- said that the fair use determination couldn't be made without Money Bags weighing in and ordered Twitter to alert the user. Twitter claims it did its best to do so, but the Money Bags account (which has not tweeted since last October...) did not file anything with the court, leading to a bizarre ruling in which Twitter was ordered to reveal the identity of Money Bags.
We were troubled by all of this, and it appears that so was the ACLU and the EFF, who have teamed up to tell the court it got this very, very wrong. The two organizations have filed a pretty compelling amicus brief saying that you can't use copyright as an end-run around the 1st Amendment's anonymity protections.
The First Amendment protects anonymous speakers from retaliation and other harms by allowing them to separate their identity from the content of their speech. Anonymity is a distinct constitutional right: “an author’s decision to remain anonymous, like other decisions concerning omissions or additions to the content of a publication, is an aspect of the freedom of speech protected by the First Amendment.” McIntyre v. Ohio Elections Comm’n, 514 U.S. 334, 342 (1995). It is well-settled that the First Amendment protects anonymity online, as it “facilitates the rich, diverse, and far-ranging exchange of ideas,” Doe v. 2TheMart.com, Inc., 140 F. Supp. 2d 1088, 1092 (W.D. Wash. 2001), and ensures that a speaker can use “one of the vehicles for expressing his views that is most likely to result in those views reaching the intended audience.” Highfields, 385 F. Supp. 2d at 981. It is also well-settled that litigants who do not like the content of Internet speech by anonymous speakers will often misuse “discovery procedures to ascertain the identities of unknown defendants in order to harass, intimidate or silence critics in the public forum opportunities presented by the Internet.” Dendrite Int’l v. Doe No. 3, 775 A.2d 756, 771 (N.J. App. Div. 2001).
Thus, although the right to anonymity is not absolute, courts subject discovery requests like the subpoena here to robust First Amendment scrutiny. And in the Ninth Circuit, as the Magistrate implicitly acknowledged, that scrutiny generally follows the Highfields standard when the individual targeted is engaging in free expression. Under Highfields, courts must first determine whether the party seeking the subpoena can demonstrate that its legal claims have merit. Highfields, 385 F. Supp. 2d at 975-76. If so, the court must look beyond the content of the speech at issue to ensure that identifying the speaker is necessary and, on balance, outweighs the harm unmasking may cause.
The filing notes that the magistrate judge who ordered the unmasking seems to have skipped a few steps:
The Magistrate further confused matters by suggesting that a fair use analysis could be a proxy for the robust two-step First Amendment analysis Highfields requires. Order at 7. This suggestion follows a decision, in In re DMCA Subpoena, 441 F. Supp. 3d at 882, to resolve a similar case purely on fair use grounds, on the theory that Highfields “is not well-suited for a copyright dispute” and “the First Amendment does not protect anonymous speech that infringed copyright.”...
That theory was legally incorrect. While fair use is a free-speech safety valve that helps reconcile the First Amendment and the Copyright Act with respect to restrictions on expression, anonymity is a distinct First Amendment right. Signature Mgmt., 876 F.3d at 839. Moreover, DMCA subpoenas like those at issue here and in In re DMCA Subpoena, concern attempts to unmask internet users who are engaged in commentary. In such cases, as with the blogger in Signature Mgmt., unmasking is likely to chill lawful as well as allegedly infringing speech. They thus raise precisely the same speech concerns identified in Highfields: the use of the discovery process “to impose a considerable price” on a speaker’s anonymity....
Indeed, where a use is likely or even colorably a lawful fair use, allowing a fair use analysis alone to substitute for a full Highfields review gets the question precisely backwards, given the doctrine’s “constitutional significance as a guarantor to access and use for First Amendment purposes.” Suntrust Bank v. Houghton Mifflin, 268 F.3d 1257, 1260 n.3 (11th Cir. 2001). Fair use prevents copyright holders from thwarting well-established speech protections by improperly punishing lawful expression, from critical reviews, to protest videos that happen to capture background music, to documentaries incorporating found footage, and so on. But the existence of one form of speech protection (the right to engage in fair use) should not be used as an excuse to give shorter shrift to another (the right to speak anonymously).
It also calls out the oddity of demanding that Money Bags weigh in, when it's Bayside, and whoever is behind it, that bears the burden of proving that this use was actually infringing:
Bayside incorrectly claims that Twitter (and by implication, its user) bears the burden of demonstrating that the use in question was a lawful fair use. Opposition to Motion to Quash (Dkt. No. 9) at 15. The party seeking discovery normally bears the burden of showing its legal claims have merit. Highfields, 385 F. Supp. 2d at 975-76. In this pre-litigation stage, that burden should not shift to the anonymous speaker, for at least three reasons.
First, constitutional rights, such as the right to anonymity, trump statutory rights such as copyright. Silvers v. Sony Pictures Entm’t, Inc., 402 F.3d 881, 883-84 (9th Cir. 2005). Moreover, fair use has an additional constitutional dimension because it serves as a First Amendment “safety valve” that helps reconcile the right to speak freely and the right to restrict speech. William F. Patry & Shira Perlmutter, Fair Use Misconstrued: Profit, Presumptions, and Parody, 11 Cardozo Arts & Ent. L.J. 667, 668 (1993). Shifting the Highfields burden to the speaker would create a cruel irony: an anonymous speaker would be less able to take advantage of one First Amendment safeguard—the right to anonymity—solely because their speech relies on another—the right to fair use. Notably, the Ninth Circuit has stressed that fair use is not an affirmative defense that merely excuses unlawful conduct; rather, it is an affirmative right that is raised as a defense simply as a matter of procedural posture. Lenz v. Universal, 815 F.3d 1145, 1152 (9th Cir. 2016).
Second, Bayside itself was required to assess whether the use in question was fair before it sent its DMCA takedown notices to Twitter; it cannot now complain if the Court asks it to explain that assessment before ordering unmasking. In re DMCA Subpoena, 441 F. Supp. 3d at 886 (citing Lenz, 815 F.3d at 1153: “a copyright holder must consider the existence of fair use before sending a takedown notification under § 512(c)”).
Third, placing the burden on the party seeking to unmask a Doe makes practical sense at this early stage, when many relevant facts lie with the rightsholder. Here, for example, Bayside presumably knows—though it has declined to address—the original purpose of the works. And as the copyright holder, it is best positioned to explain how the use at issue might affect a licensing market. While the copyright holder cannot see into the mind of the user, the user’s purpose is easy to surmise here, and the same is likely to be true in any 512(h) case involving expressive uses. With respect to the nature of the work, any party can adequately address that factor. Indeed, both Bayside and Twitter have done so.
The filing also notes that this is an obvious fair use situation, and the judge can recognize that:
While courts often reserve fair use determinations for summary judgment or trial, in appropriate circumstances it is possible to make the determination based on the use itself. See In re DMCA Section 512(h) Subpoena to YouTube (Google, Inc.), No. 7:18-MC-00268 (NSR), 2022 WL 160270 (S.D.N.Y. Jan. 18, 2022) (rejecting the argument that fair use cannot be determined during a motion to quash proceeding). In Burnett v. Twentieth Century Fox, for example, a federal district court dismissed a copyright claim—without leave to amend—at the pleading stage based on a finding of fair use. 491 F. Supp. 2d 962, 967, 975 (C.D. Cal. 2007); see also Leadsinger v. BMG Music Pub., 512 F.3d 522, 532–33 (9th Cir. 2008) (affirming motion to dismiss, without leave to amend, fair use allegations where three factors “unequivocally militated” against fair use). See also, e.g., Sedgwick Claims Mgmt. Servs., Inc. v. Delsman, 2009 WL 2157573 at *4 (N.D. Cal. July 17, 2009), aff’d, 422 F. App’x 651 (9th Cir. 2011); Savage v. Council on Am.-Islamic Rels., Inc., 2008 WL 2951281 at *4 (N.D. Cal. July 25, 2008); City of Inglewood v. Teixeira, 2015 WL 5025839 at *12 (C.D. Cal. Aug. 20, 2015); Marano v. Metro. Museum of Art, 472 F. Supp. 3d 76, 82–83, 88 (S.D.N.Y. 2020), aff’d, 844 F. App’x 436 (2d Cir. 2021); Lombardo v. Dr. Seuss Enters., L.P., 279 F. Supp. 3d 497, 504–05 (S.D.N.Y. 2017), aff’d, 729 F. App’x 131 (2d Cir. 2018); Hughes v. Benjamin, 437 F. Supp. 3d 382, 389, 394 (S.D.N.Y. 2020); Denison v. Larkin, 64 F. Supp. 3d 1127, 1135 (N.D. Ill. 2014).
These rulings are possible because many fair uses are obvious. A court does not need to consult a user to determine that the use of an excerpt in a book review, the use of a thumbnail photograph in an academic article commenting on the photographer’s work, or the inclusion of an image in a protest sign are lawful uses. There is no need to seek a declaration from a journalist when they quote a series of social media posts while reporting on real-time events.
And the uses by Money Bags were pretty obviously fair use:
First, the tweets appear to be noncommercial, transformative, critical commentary—classic fair uses. The tweets present photographs of a woman, identified as “the new Mrs. Brian Sheth” as part of commentary on Mr. Sheth, the clear implication being that Mr. Sheth has used his wealth to “invest” in a new, young, wife. As the holder of rights in the photographs, Bayside could have explained the original purpose of the photographs; it has chosen not to do so. In any event, it seems unlikely that Bayside’s original purpose was to illustrate criticism and commentary regarding a billionaire investor. Hence, the user “used the [works] to express ‘something new, with a further purpose or different character, altering the first with new expression, meaning, or message.’” In re DMCA Subpoena to Reddit, Inc., 441 F. Supp. 3d at 883 (quoting Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569, 579 (1994)). While undoubtedly crass, the user’s purpose is transformative and, Bayside’s speculation notwithstanding, there is nothing to suggest it was commercial.
The filing also calls out the magistrate judge's unwillingness to consider Twitter's own arguments:
Of course, there was a party in court able and willing to offer evidence and argument on fair use: Twitter. The Magistrate’s refusal to credit Twitter’s own evidence, Order at 7-8, sends a dangerous message to online speakers: either show up and fully litigate their anonymity—risking their right to remain anonymous in the process—or face summary loss of their anonymity when they do not appear. Order at 7. That outcome inevitably “impose[s] a considerable price” on internet users’ ability to exercise their rights to speak anonymously. Highfields, 385 F. Supp. 2d at 980-81. And “when word gets out that the price tag of effective sardonic speech is this high, that speech will likely disappear.”
Hopefully the court reconsiders the original ruling...
Posted on Techdirt - 24 February 2022 @ 12:01pm
from the that's-not-how-any-of-this-works dept
Andy Parker has experienced something that no one should ever have to go through: having a child murdered. Even worse, his daughter, Alison, was murdered on live TV, while she was doing a live news broadcast, as an ex-colleague shot her and the news station's cameraman dead. It got a lot of news coverage, and you probably remember the story. Maybe you even watched the video (I avoided it on purpose, as I have no desire to see such a gruesome sight). Almost none of us can even fathom what that experience must be like, and I can completely understand how that has turned Parker into something of an activist. We wrote about him a year ago, when he appeared in a very weird and misleading 60 Minutes story attacking Section 230.
While Parker considers himself an "anti-big tech, anti-Section 230" advocate, we noted that his story actually shows the benefits of Section 230, rather than the problems with it. Parker is (completely understandably!) upset that the video of his daughter's murder is available online. And he wants it gone. As we detailed in our response to the 60 Minutes story, Parker had succeeded in convincing various platforms to quickly remove that video whenever it's uploaded. Something they can do, in part, because of Section 230's protections that allow them to moderate freely, and to proactively moderate content without fear of crippling lawsuits and liability.
The 60 Minutes episode was truly bizarre, because it explains Parker's tragic situation, and then notes that YouTube went above and beyond to stop the video from being shared on its platform, and then it cuts to Parker saying he "expected them to do the right thing" and then says that Google is "the personification of evil"... for... doing exactly what he asked?
Parker is now running for Congress as well, and has been spouting a bunch of bizarre things about the internet and content moderation on Twitter. I'd link to some of them, but he blocked me (a feature, again, that is aided by Section 230's existence). But now the Washington Post has a strange article about how Parker... created an NFT of the video as part of his campaign to remove it from the internet.
Now, Andy Parker has transformed the clip of the killings into an NFT, or non-fungible token, in a complex and potentially futile bid to claim ownership over the videos — a tactic to use copyright to force Big Tech’s hand.
So... none of this makes any sense. First of all, Parker doesn't own the copyright, as the article notes (though many paragraphs later, even though it seems like kind of a key point!).
Parker does not own the copyright to the footage of his daughter’s murder that aired on CBS affiliate WDBJ in 2015.
But it says he's doing this to claim "ownership" of the video, because what appear to be very, very bad lawyers have advised him that by creating an NFT he can "claim ownership" of the video, and then use the DMCA's notice-and-takedown provisions instead. Everything about this is wrong.
First, while using copyright to take down things you don't want public is quite common, it's not (at all) what copyright is meant for. And, as much as Parker does not want the video to be available, there is a pretty strong argument that many uses of that video are covered by fair use.
But, again, he doesn't hold the copyright. So, creating an NFT of the video does not magically give him a copyright, nor does it give him any power under the DMCA to demand takedowns. That requires the actual copyright. Which Parker does not have. Even more ridiculously, the TV station that does hold the copyright has apparently offered to help Parker use the copyright to issue DMCA takedowns:
In a statement, Latek said that the company has “repeatedly offered to provide Mr. Parker with the additional copyright license” to call on social media companies to remove the WDBJ footage “if it is being used inappropriately.”
This includes the right to act as their agent with the HONR network, a nonprofit created by Pozner that helps people targeted by online harassment and hate. “By doing so, we enabled the HONR Network to flag the video for removal from platforms like YouTube and Facebook,” Latek said.
So what does the NFT do? Absolutely nothing. Indeed, the NFT is nothing more than basically a signed note, saying "this is a video." And part of the ethos of the NFT space is that people are frequently encouraged to "right click and save" the content, and to share it as well -- because the content and the NFT are separate.
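To make that concrete: a typical NFT is little more than a ledger entry tying a token ID to a small blob of metadata, which in turn points at a file hosted somewhere else entirely. Here's a rough, hypothetical Python sketch of what ERC-721-style metadata usually looks like (every value is made up); nothing in it creates or transfers a copyright, and nothing in it touches the video file itself.

```python
# A rough, hypothetical sketch of ERC-721-style NFT metadata. All values are made up.
# The token just records a pointer to content hosted elsewhere; the content itself --
# and any copyright in it -- lives entirely outside the token.
nft_metadata = {
    "name": "Example video token",
    "description": "A token that merely references a video file",
    "image": "ipfs://bafy-example-placeholder-hash",  # a pointer to the file, not the file
}

# Minting this changes nothing about who may host, copy, or take down the video;
# a DMCA takedown still requires actually holding the copyright in the work.
```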
Hell, there's an argument (though I'd argue a weak one -- though others disagree) that by creating an NFT of a work he has no copyright over, Parker has actually opened himself up to a copyright infringement claim. Indeed, the TV station is quoted in the article noting that, while it has provided licenses to Parker to help him get the video removed, "those usage licenses do not and never have allowed them to turn our content into NFTs."
I understand that Parker wants the video taken down -- even though there may be non-nefarious, legitimate reasons for those videos to remain available in some format. But creating an NFT doesn't give him any copyright interest, or any way to use the DMCA to remove the videos and whoever told Parker otherwise should be disbarred. They're taking advantage of him and his grief, and giving him very, very bad legal advice.
Meanwhile, all the way at the end of the article, it is noted -- once again -- that the big social media platforms are extremely proactive in trying to remove the video of Alison's murder:
“We remain committed to removing violent footage filmed by Alison Parker’s murderer, and we rigorously enforce our policies using a combination of machine learning technology and human review,” YouTube spokesperson Jack Malon said in a statement.
[...]
Facebook bans any videos that depict the shooting from any angle, with no exceptions, according to Jen Ridings, a spokesperson for parent company Meta.
“We’ve removed thousands of videos depicting this tragedy since 2015, and continue to proactively remove more,” Ridings said in a statement, adding that they “encourage people to continue reporting this content.”
The reporter then notes that he was still able to find the video on Facebook (though all the ones he found were quickly removed).
Which actually goes on to highlight the nature of the problem. It is impossible to find and block the video with perfect accuracy. Facebook and YouTube employ some of the most sophisticated tools out there for finding this stuff, but the sheer volume of content, combined with the tricks and modifications that uploaders try, mean that they're never going to be perfect. So even if Parker got the copyright, which he doesn't, it still wouldn't help. Because these sites are already trying to remove the videos.
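To see why, consider the simplest possible matching approach: compare a fingerprint (hash) of each upload against a block list of known bad files. Change a single byte -- a re-encode, a crop, a watermark, a few trimmed frames -- and the fingerprint no longer matches. That's why platforms lean on perceptual hashing and machine-learning classifiers instead, which tolerate some modifications but still miss plenty. A minimal Python sketch (with placeholder byte strings) shows how brittle exact matching is:

```python
import hashlib

original = b"...original video bytes..."      # placeholder for the known-bad upload
re_encoded = b"...original video bytes...!"   # same video, trivially modified by the uploader

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint: any change at all produces a completely different digest."""
    return hashlib.sha256(data).hexdigest()

# The modified copy sails right past a block list built on exact hashes.
print(fingerprint(original) == fingerprint(re_encoded))  # False
```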
Everything about this story is unfortunate. The original tragedy, of course, is heartbreakingly horrific. But Parker's misguided crusade isn't helping, and the whole NFT idea is so backwards that it might lead to him facing a copyright claim, rather than using one. I feel sorry for Parker, not only for the tragic situation with his daughter, but because it appears that some very cynical lawyers are taking advantage of Parker's grief to try to drive some sort of policy outcome out of it. He deserves better than to be preyed upon like that.
Posted on Techdirt - 24 February 2022 @ 9:31am
from the you-don't-say dept
Earlier this week we took a look at Donald Trump and Devin Nunes' Truth Social's terms of service, noting that they -- despite claiming that Section 230 should be "repealed" -- had explicitly copied Section 230 into their terms of service. In the comments, one of our more reliably silly commenters, who inevitably insists that no website should ever moderate, and that "conservatives" are regularly removed for their political views on the major social networks (while refusing to provide any evidence to support his claims, because he cannot), insisted that Truth Social wouldn't ban people for political speech, only for "obscenity."
So, about that. As Mashable has detailed, multiple people are describing how they've been banned from Truth Social within just the first few days -- and not for obscenity. The funniest is that someone -- not the person who runs the @DevinCow account on Twitter -- tried to sign up for a @DevinCow account on Truth Social. As you probably know, Devin Nunes, as a congressman, sued the satirical cow account for being mean to him (the case is still, technically, ongoing). You may recall that the headline of my article about Devin Nunes quitting Congress to run Truth Social announced that he was leaving Congress to spend more time banning satirical cows from Truth Social.
And apparently that was accurate. Matt Ortega first tried to register the same @DevinCow handle on Truth Social, only to be told that the username was not even allowed (which suggests that Nunes or someone else there had already pre-banned the Cow). Ortega then tried other variations of the name, getting through with @DevinNunesCow... briefly. Then it, too, was banned.
Note that the ban email does not identify what rules were broken by the account (another point that Trumpists often point to in complaining about other websites' content moderation practices: that they don't provide a detailed accounting).
So, it certainly appears that it's not just "obscenity" that Nunes and Trump are banning. They seem to be banning accounts that might, possibly, make fun of them and their microscopically thin skins.
The Mashable article also notes that Truth Social has banned a right-wing anti-vaxxer, who you might expect to be more welcome on the site, but no such luck.
And here's the thing: this is normal and to be expected, and I'm glad that Truth Social is doing the standard forms of content moderation that every website needs to do to be able to operate a functional service. It would just be nice if Nunes/Trump and their whiny sycophants stopped pretending that this website is somehow more about "free speech" than other social media sites. It's not. Indeed, so far, they seem more willing to quickly ban people simply because they don't like them, than for any more principled reason or policy.
Posted on Techdirt - 23 February 2022 @ 12:09pm
from the getting-played-like-a-fiddle dept
Last summer, I believe we were among the first to highlight that the various antitrust bills proposed by mainly Democratic elected officials in DC included an incredibly dangerous trojan horse that would aid Republicans in their "playing the victim" desire to force websites to host their disinformation and propaganda. The key issue is that many of the bills included a bar on self-preferencing a large company's own services against competitors. The supporters of these bills claimed it was to prevent, say, an Apple from blocking a competing mapping service while promoting Apple Maps, or Google from blocking a competing shopping service, while pushing Google's local search results.
But the language was so broad, and so poorly thought out, that it would create a massive headache for content moderation more broadly -- because the language could just as easily be used to say that, for example, Amazon couldn't kick Parler off its service, or Google couldn't refuse to allow Gab's app in its app store. You would have thought that after raising this issue, the Democratic sponsors of these bills would fix the language. They have not. Bizarrely, they've continued to issue more bills in both the House and the Senate with similarly troubling language. Recently, TechFreedom called out this problematic language in two antitrust bills in the Senate that seem to have quite a lot of traction.
Whatever you think of the underlying rationale for these bills, it seems weird that these bills, introduced by Democrats, would satisfy the Republicans' desire to force online propaganda mills onto their platforms.
Every “deplatformed” plaintiff will, of course, frame its claims in broad terms, claiming that the unfair trade practice at issue isn’t the decision to ban them specifically, but rather a more general problem — a lack of clarity in how content is moderated, a systemic bias against conservatives, or some other allegation of inconsistent or arbitrary enforcement — and that these systemic flaws harm competition on the platform overall. This kind of argument would have broad application: it could be used against platforms that sell t-shirts and books, like Amazon, or against app platforms, like the Google, Apple and Amazon app stores, or against website hosts, like Amazon Web Services.
Indeed, as we've covered in the past, Gab did sue Google for being kicked out of the app store, and Parler did sue Amazon for being kicked off that company's cloud platform. These kinds of lawsuits would become standard practice -- and even if the big web services could eventually get such frivolous lawsuits dismissed, it would still be a tremendous waste of time and money, while letting grifters play the victim.
Incredibly, Republicans like Ted Cruz have made it clear this is why they support such bills. In fact, Cruz introduced an amendment to double down on this language and make sure that the bill would prohibit "discriminating on the basis of a political belief." Of course, Cruz knows full well this doesn't actually happen anywhere. The only platform that has ever discriminated based on a political belief is... Parler, whose then CEO once bragged to a reporter how he was banning "leftist trolls" from the platform.
Even more to the point, during the hearings about the bill and his amendment, Cruz flat out said that he was hoping to "unleash the trial lawyers" to sue Google, Facebook, Amazon, Apple and the like for moderating those who violate their policies. While it may sound odd that Cruz -- who as a politician has screamed about how evil trial lawyers are -- would be suddenly in favor of trial lawyers, the truth is that Cruz has no underlying principles on this or any other subject. He's long been called "the ultimate tort reform hypocrite" who supports trial lawyers when convenient, and then rails against them when politically convenient.
So no one should be surprised by Cruz's hypocrisy.
What they should be surprised by is the unwillingness of Democrats to fix their bills. A group of organizations (including our Copia Institute) signed onto another letter by TechFreedom that laid out some simple, common-sense changes that could be made to one of the bills -- the Open App Markets Act -- to fix this potential concern. And, yet, supporters of the bill continue to either ignore this or dismiss it -- even as Ted Cruz and his friends are eagerly rubbing their hands with glee.
This has been an ongoing problem with tech policy for a while now -- where politicians so narrowly focus on one issue that they don't realize how their "solutions" mess up some other policy goal. We get "privacy laws" that kill off competition. And now we have "competition" laws that make fighting disinformation harder.
It's almost as if these politicians don't want to solve actual issues, and just want to claim they did.
Posted on Techdirt - 23 February 2022 @ 9:21am
from the say-what-now? dept
With the launch of Donald Trump's ridiculous Truth Social offering, we've already noted that he's so heavily relying on Section 230's protections to moderate that he's written Section 230 directly into his terms of service. However, at the same time, Trump is still fighting his monstrously stupid lawsuits against Twitter, Facebook, and YouTube for banning him in the wake of January 6th.
Not surprisingly (after getting the cases transferred to California), the internet companies are pointing the courts to Section 230 as to why the cases should be dismissed. And, also not surprisingly (but somewhat hilariously), Trump is making galaxy brain stupid claims in response. The filing in the case against YouTube somehow has eight different lawyers signed onto a brief so bad that all eight of them should be laughed out of court.
The argument as to why Section 230 doesn't apply is broken down into three sections, each dumber than the others. First up, it claims that "Section 230 Does Not Immunize Unfair Discrimination," which claims (falsely) that YouTube is a "common carrier" (it is not, has never been, and does not resemble one in any manner). The argument is not even particularly well argued here. It's three ridiculous paragraphs, starting with Packingham (which is not relevant to a private company choosing to moderate), then claiming (without any support, since there is none) that YouTube is a common carrier, and then saying that YouTube's terms of service mean that it "must carry content, irrespective of any desire or external compulsion to discriminate against Plaintiff."
Literally all of that is wrong. It took EIGHT lawyers to be this wrong.
The second section claims -- incorrectly -- that Section 230 "does not apply to political speech." They do this by totally misrepresenting the "findings" part of Section 230 and then ignoring basically all the case law that says, of course Section 230 applies to political speech. As for the findings, while they do say that Congress wants "interactive computer services" to create "a true diversity of political discourse," as the authors of the bill themselves have explained, this has always been about allowing every individual website to moderate as it sees fit. It was never designed so that every website must carry all speech; rather, by allowing websites to curate the community and content they want, there will be many different places for different kinds of speech.
Again. Eight lawyers to be totally and completely wrong.
Finally, they argue that "Section 230(c) Violates the First Amendment as Applied to This Matter." It does not. Indeed, should Trump win this lawsuit (he won't), the result would violate the 1st Amendment by compelling speech on someone else's private property, against an owner who does not wish to be associated with it. And this section goes off the rails completely:
The U.S. contends that Section 230(c) does not implicate the First Amendment because it “does not regulate Plaintiff’s speech,” but only “establishes a content- and viewpoint-neutral rule prohibiting liability” for certain companies that ban others’ speech. (U.S. Mot. at 2). Defendants’ egregious conduct in restraining Plaintiff’s political speech belies its claims of a neutral standard.
I mean, the mental gymnastics necessary to make this claim are pretty impressive, so I'll give them that. But this is mixing apples and orangutans in making an argument that, even if it did make sense, still doesn't make any sense. Section 230 does not regulate speech. That's why it's content neutral. The fact that the defendant, YouTube, does moderate its content -- egregiously or not -- is totally unrelated to the question of whether or not Section 230 is content neutral. Indeed, YouTube's ability to kick Trump off its platform is itself protected by the 1st Amendment.
The lawyers seem to be shifting back and forth between the government ("The U.S.") and the private entity, YouTube, here, to make an argument that might make sense if it were only talking about one entity, but doesn't make any sense at all when you switch back and forth between the two.
Honestly, this filing should become a case study in law schools about how not to law.
Posted on Techdirt - 22 February 2022 @ 12:03pm
from the this-is-correct dept
For years, throughout the entire monkey selfie lawsuit saga, we kept noting that the real reason a prestigious law firm like Irell & Manella filed such a patently bogus lawsuit was to position itself to be the go-to law firm to argue for AI-generated works deserving copyright. However, we've always argued that AI-generated works are (somewhat obviously) in the public domain, and get no copyright. Again, this goes back to the entire nature of copyright law -- which is to create a (limited time) incentive for creators, in order to get them to create a work that they might not have otherwise created. When you're talking about an AI, it doesn't need a monetary incentive (or a restrictive one). The AI just generates when it's told to generate.
This idea shouldn't even be controversial. It goes way, way back. In 1966, the Copyright Office's annual report noted that it needed to determine whether a computer-created work was authored by the computer, and how copyright should work for such works.
In 1985, prescient copyright law expert Pam Samuelson wrote a whole paper exploring the role of copyright in works created by artificial intelligence. In that paper, she noted that, while such works arguably belong in the public domain, that outcome seemed unlikely, as "the legislature, the executive branch, and the courts seem to strongly favor maximalizing intellectual property rewards" and:
For some, the very notion of output being in the public domain may seem to be an anathema, a temporary inefficient situation that will be much improved when individual property rights are recognized. Rights must be given to someone, argue those who hold this view; the question is to whom to give rights, not whether to give them at all.
Indeed, we've seen exactly that. Back in 2018, we wrote about examples of lawyers having trouble even conceptualizing a public domain for such works, as they argued that someone must hold the copyright. But that's not the way it needs to be. The public domain is a thing, and it shouldn't just be for century-old works.
Thankfully (and perhaps not surprisingly, since they started thinking about it all the way back in the 1960s), when the Copyright Office released its third edition of the giant Compendium of U.S. Copyright Office Practices, it noted that it would not grant a copyright on "works that lack human authorship," using "a photograph taken by a monkey" as one example, but also noting "the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author."
Of course, that leaves open some kinds of mischief, and the Office even admits that whether the creative work is done by a human or a computer is "the crucial question." And that has left the door open to attempts to copyright AI-generated works. Jumping in to push for copyrights for the machines was... Stephen Thaler. We've written about Thaler going all the way back to 2004 when he was creating a computer program to generate music and inventions. But, he's become a copyright and patent pest around the globe. We've had multiple stories about attempts to patent AI-generated inventions in different countries -- including the US, Australia, the EU and even China. The case in China didn't involve Thaler (as far as we know), but the US, EU, and Australia cases all did (so far, only Australia has been open to allowing a patent for AI).
But Thaler is not content to just mess up patent law; he's pushing for AI copyrights as well. And for years, he's been trying to get the Copyright Office to give his AI the right to claim copyright. As laid out in a comprehensive post over at IPKat, the Copyright Office has refused him many times over, with yet another rejection coming on Valentine's Day.
The Review Board was, once again, unimpressed. It held that “human authorship is a prerequisite to copyright protection in the United States and that the Work therefore cannot be registered.”
The phrase ‘original works of authorship’ under §102(a) of the Act sets limits to what can be protected by copyright. As early as in Sarony (a seminal case concerning copyright protection of photographs), the US Supreme Court referred to authors as human.
This approach was reiterated in other Supreme Court’s precedents like Mazer and Goldstein, and has been also consistently adopted by lower courts.
While no case has been yet decided on the specific issue of AI-creativity, guidance from the line of cases above indicates that works entirely created by machines do not access copyright protection. Such a conclusion is also consistent with the majority of responses that the USPTO received in its consultation on Artificial Intelligence and Intellectual Property Policy.
The Review also rejected Thaler’s argument that AI can be an author under copyright law because the work made for hire doctrine allows for “non-human, artificial persons such as companies” to be authors. First, held the Board, a machine cannot enter into any binding legal contract. Secondly, the doctrine is about ownership, not existence of a valid copyright.
Somehow, I doubt that Thaler is going to stop trying, but one hopes that he gets the message. Also, it would be nice for everyone to recognize that having more public domain is a good thing and not a problem...
Posted on Techdirt - 22 February 2022 @ 9:25am
from the well-how-about-that dept
When Donald Trump first announced his plans to launch his own Twitter competitor, Truth Social, we noted that the terms of service on the site indicated that the company -- contrary to all the nonsense claims of being more "free speech" supportive than existing social media sites -- was likely going to be quite aggressive in banning users who said anything that Trump disliked. Last month, Devin Nunes, who quit Congress to become CEO of the fledgling site, made it clear that the site would be heavily, heavily moderated, including using Hive, a popular tool for social media companies that want to moderate.
So with the early iOS version of the app "launching" this past weekend, most people were focused on the long list of things that went wrong with the launch, mainly security flaws and broken sign-ups. There's also been some talk about how the logo may be a copy... and the fact that Trump's own wife declared that she'll be using Parler for her social media efforts.
But, for me, I went straight to checking out the terms of service for the site. They've been updated since the last time, but the basics remain crystal clear: despite all the silly yammering from Nunes and Trump about how they're the "free speech" supporting social network, Truth Social's terms are way more restrictive regarding content than just about any I've ever seen before.
Still, the most incredible part is not only that Truth Social is embracing Section 230, but it has literally embedded parts of 230 into its terms of service. The terms require people who sign up to "represent and warrant" that their content doesn't do certain things. And the site warns that if you violate any of these terms it "may result in, among other things, termination or suspension of your rights to use the Service and removal or deletion of your Contributions." I don't know, but I recall a former President and a former cow farming Representative from California previously referring to that kind of termination as "censorship." But, one of the things that users must "represent and warrant" is the following:
your Contributions are not obscene, lewd, lascivious, filthy, violent, harassing, libelous, slanderous, or otherwise objectionable.
That might sound familiar to those of you who are knowledgeable about Section 230 -- because it's literally cribbed directly from Section 230(c)(2), which says:
No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable...
That's almost word for word the same as 230. The only changes are that it drops "excessively" from "excessively violent" and adds in "libelous" and "slanderous" -- subjects in which Devin Nunes considers himself something of an expert, though courts don't seem to agree.
Hell, they even leave in the catch-all "otherwise objectionable," even as some of their Republican friends in Congress have tried to remove that phrase in a few of their dozens of "Section 230 reform" bills.
So it's not at all surprising, but potentially a bit ironic that the man who demanded the outright repeal of Section 230 (even to the point of trying to stop funding the US military if Congress didn't repeal the law) has now not only embraced Section 230, but has literally baked a component of it (the part that he and his ignorant fans have never actually understood) directly into his own service's terms.
It's so blatant I almost wonder if it was done just for the trolling. That said, I still look forward to Truth Social using Section 230 to defend itself against inevitable lawsuits.
There are some other fun tidbits in the terms of service that suggest the site will be one of the most aggressive in moderating content. It literally claims that it may take down content that is "false, inaccurate, or misleading" (based on Truth Social's own subjective interpretation, of course). You can't advertise anything on the site without having it "authorized." You need to "have the written consent, release, and/or permission of each and every identifiable individual person in your Contributions." Does Truth Social think you actually need written permission to talk about someone?
There's also a long, long list of "prohibited" activities, including compiling a database of Truth Social data without permission, any advertising (wait, what?), bots, impersonation, "sexual content or language," or "any content that portrays or suggest explicit sexual acts." I'm not sure how Former President "Grab 'em by the p***y" will survive on his own site. Oh right, also "sugar babies" and "sexual fetishes" are banned.
Lots of fun stuff that indicates that, like 4chan, then 8chan, then Gab, then Parler, then Gettr -- all of which have at times declared themselves to be "free speech zones" -- every website knows that it needs to moderate to some level, and also that it's Section 230 that helps keep them out of court when they moderate in ways that piss off some of their users.
Posted on Techdirt - 18 February 2022 @ 12:13pm
from the don't-criminalize-free-speech dept
Over the last few years, it's been depressing to see politicians from both major political parties attacking free speech. As we noted last month, Washington state governor Jay Inslee started pushing a bill that would criminalize political speech. He kept insisting that it was okay under the 1st Amendment because he got a heavily biased constitutional lawyer, Larry Tribe, to basically shrug and say "maybe it could be constitutional?" But the bill was clearly problematic -- and would lead to nonstop nonsense lawsuits against political candidates.
Thankfully, cooler heads have prevailed in the Washington Senate and the bill has died. The bill's main sponsor is still insisting that it would survive 1st Amendment scrutiny, but also recognized that it just didn't have enough political support:
State Sen. David Frockt (D), who sponsored the bill, said, "We have to respect that the bill in its current form did not have enough support to advance despite the care we took in its drafting through our consultation with leading First Amendment scholars."
Inslee, for his part, still insists something must be done:
After the bill was defeated on Tuesday, Inslee said in a statement, "We all still have a responsibility to act against this Big Lie ... we must continue to explore ways to fight the dangerous deceptions politicians are still promoting about our elections."
And, look, I don't disagree that the Big Lie about the 2020 election is a problem. But you don't solve problems by censoring 1st Amendment protected speech. That never ends well. At all.
Posted on Techdirt - 17 February 2022 @ 4:06pm
from the obnoxious dept
In the past, whenever Senator Richard Blumenthal has been called out for his many terrible legislative ideas regarding regulating technology and the internet, he has a habit of dismissing all of the concerns by claiming the complaints are only coming from "big tech lobbyists." He did this a few years ago with FOSTA, which has since proven to be exactly the disaster many of us warned Senator Blumenthal about at the time. This time around, he's going straight to the same playbook again, and it's good to see that he's getting some pushback. Nathalie Maréchal, from Ranking Digital Rights has published a great piece over at Tech Policy Press: No, Senator Blumenthal, I am not a Big Tech Lobbyist.
Ranking Digital Rights is about as far from a "big tech lobbyist" as you can find. The organization has been advocating for the FTC to ban targeted advertising, which is basically the key way in which both Google and Facebook make the majority of their money. And yet, it also recognizes the dangers of EARN IT.
The article notes that over 60 human rights groups signed a detailed letter highlighting the many problems of the bill. For Blumenthal to simply dismiss all of those concerns -- put together by respected groups who are in no way "big tech lobbyists" -- shows his pure disdain for facts and unwillingness to put in the effort to understand the very real damage his bill will do should it become law.
It's shameful behavior for a US senator, even if not surprising.
Posted on Techdirt - 17 February 2022 @ 10:48am
from the this-is-a-bad,-bad-idea dept
Senator Richard Blumenthal is apparently a bottomless well of terrible internet regulation ideas. His latest is yet another "for the children" bill that will put children in serious jeopardy. This time he's teamed up with the even worse Senator Marsha Blackburn to introduce the Kids Online Safety Act, which as the name suggests is full of a bunch of overbearing, dangerous nonsense that will not protect children at all, but will make them significantly less safe while giving clueless, authoritarian parents much more power to spy on their kids.
About the only "good" part of the bill is that it doesn't attack Section 230. But the rest of it is nonsense, based on a terrible misunderstanding of how, well, anything works. The bill doesn't just take its name from the UK's Online Safety Bill; it also borrows a similar "duty of care" concept, which is a nonsense way of saying "if you make a mistake, and let undefined 'bad stuff' through, you'll be in trouble." Here's the duty of care, which is self-contradictory nonsense:
BEST INTERESTS.—A covered platform has a duty to act in the best interests of a minor that uses the platform’s products or services
How the hell is a website going to know "the best interests of a minor" using its platform? That's going to vary -- sometimes drastically -- from kid to kid. Some kids may actually benefit from learning about controversial topics, while others may get dragged down into nonsense. There is no one way to have "best interests" for kids, and it's a very context-sensitive question.
PREVENTION OF HARM TO MINORS.—In acting in the best interests of minors, a covered platform has a duty to prevent and mitigate the heightened risks of physical, emotional, developmental, or material harms to minors posed by materials on, or engagement with, the platform, including—
(1) promotion of self-harm, suicide, eating disorders, substance abuse, and other matters that pose a risk to physical and mental health of a minor;
(2) patterns of use that indicate or encourage addiction-like behaviors;
(3) physical harm, online bullying, and harassment of a minor;
(4) sexual exploitation, including enticement, grooming, sex trafficking, and sexual abuse of minors and trafficking of online child sexual abuse material;
(5) promotion and marketing of products or services that are unlawful for minors, such as illegal drugs, tobacco, gambling, or alcohol; and
(6) predatory, unfair, or deceptive marketing practices.
So, so much of this is nonsense, disconnected from the reality of how anything works, but let's just focus in on the requirement that a covered platform "prevent and mitigate" risks associated with "eating disorders." Last year we had a content moderation case study all about the very, very difficult and nuanced questions that websites face in dealing with content around eating disorders. Many of them found that trying to ban all such conversations backfired and made the problem worse, while allowing those conversations often helped steer people away from eating disorders. In fact, much of the evidence showed that (1) people didn't start getting eating disorders from reading about others with eating disorders, and (2) people writing about their eating disorders made it easier for others to reach out and help them find the resources they needed to get healthy again.
In other words, it's not a matter of telling websites to block information about eating disorders -- as this Blumenthal and Blackburn bill would do. That will often just sweep the issue under the rug, and kids will still have eating disorders, but not get the help that they might have otherwise.
Once again, a Blumenthal bill is likely to make the very problem it ostensibly tries to solve worse. There is similar evidence that suicide prevention is an equally fraught area: it's not nearly as simple as saying "no discussions about suicide," because often the forums where suicide is discussed are where people get help. But under this bill, that will be prevented.
This bill takes extremely complex, nuanced issues, which often need thoughtful, context-based interventions, and reduces them to "block it all." Which is just dangerous. Because kids who are interested in suicide or eating disorders... are still going to be interested in those things. And if the major websites, with big trust and safety teams and more thoughtful approaches to all of this, are forced to take down all that content, the kids are still going to go looking for it, and they're going to end up on sketchier and sketchier websites, with fewer controls, fewer thoughtful staff, and a setup much more prone to worse outcomes.
Honestly, this approach to regulating the internet seems much more likely to cause serious, serious problems for children.
Then, there's the terrible, terrible parental surveillance section. The bill would mandate websites provide "parental tools" that would be "readily-accessible and easy-to use" so parents can spy on their kids' activities online. Now, to avoid the problems of surreptitious surveillance, which would be even worse, the bill does note that "A covered platform shall provide clear and conspicuous notice to a minor when parental tools are in effect." That's certainly better than the opposite, but all this is doing is teaching kids that constant surveillance is the norm.
This is not what we should be teaching our kids.
I know how tempting it is for parents to want to know everything their kids are doing online. I know how tempting it is to be afraid about what kids are getting up to online, because we've all heard various horror stories. But surveilling kids of all ages, all the time, is a stupid, dangerous idea. First of all, the kinds of tools that the parent of a six-year-old might need are drastically different from those the parent of a 16-year-old might need. But the bill treats everyone 16 and younger the same.
And there are already lots of tools parents can use -- voluntarily -- to restrict the behavior of their kids online. We don't need to make it the expected norm that every website gives parents tools to snoop on their kids. Because that alone can do serious damage to kids. Just a few months ago there was an amazing article in Wired about how dangerous parental surveillance of kids can be.
Constant vigilance, research suggests, does the opposite of increasing teen safety. A University of Central Florida study of 200 teen/parent pairs found that parents who used monitoring apps were more likely to be authoritarian, and that teens who were monitored were not just equally but more likely to be exposed to unwanted explicit content and to bullying. Another study, from the Netherlands, found that monitored teens were more secretive and less likely to ask for help. It’s no surprise that most teens, when you bother to ask them, feel that monitoring poisons a relationship. And there are very real situations, especially for queer and trans teens, where their safety may depend on being able to explore without exposing all the details to their family.
And yet, this bill requires the kind of situation that makes teenagers less safe, and pushes them into more risky and dangerous activity.
Why is it that every Blumenthal bill "for the children" would make children less safe?
And just think about how this plays out for an LGBTQ child, brought up in a strictly religious family, who wants to use the internet to find like-minded individuals. Under this bill, that information gets reported back to the parents, which seems way more likely to lead to distress, harm, and possibly even suicidal ideation.
In other words, this bill tries to prevent suicide by forcing websites to take down information that might help prevent suicides, and then forces vulnerable kids in dangerous home environments to share data with their parents, which seems more likely to drive them towards suicide.
It's like the worst possible way of dealing with vulnerable children.
There are, of course, other problems with the bill, but the whole thing is based on a fundamental misunderstanding of how to raise resilient kids. You don't do it by spying on their every move. You do it by giving kids the freedom to explore and learn, equipped with the knowledge that not everything is safe and not every idea is a good one. You teach them to recognize that the world can be dangerous, and that they need to be equipped to deal with that. Obviously, the best strategies for that will differ at different ages and with the individual child. But assuming that all children up to age 16 must be surveilled by their parents, and that websites should be forced to block information that many kids will want to explore, seems like it would create a horrifically bad result for many, many children -- including the most vulnerable.
It's truly incredible how many horrible, horrible laws about the internet one man can sponsor, but Senator Blumenthal really has become a one-man "terrible bill idea" shop. People of Connecticut: do better. As for Blackburn, well, she's always been terrible, but I find it amusing to remind people she put out this video a decade ago, screaming about how the internet should never be regulated. And now look at her.
Posted on Techdirt - 17 February 2022 @ 9:37am
from the that-seems-like-a-problem dept
I've already talked about the potential 1st Amendment problems with the EARN IT Act and the potential 4th Amendment problems with it as well. But a recent post by Riana Pfefferkorn at Stanford raises an even bigger issue in all of this: what actual problem is EARN IT trying to solve?
This sounds like a simple question with a potentially simple answer, but the reality, once you start to dig in, suggests that either (1) the backers of EARN IT don't actually know, or (2) if they do know, they know what they actually want is unconstitutional.
Supporters of EARN IT will say, simply, that the problem they're trying to solve is the prevalence of child sexual abuse material (CSAM) online. And that is a real problem (unlike some other moral panics, CSAM is a legitimate, large, and extraordinarily serious problem). But... CSAM is already very, very illegal. So, if you dig in a little further, supporters of EARN IT will say that the problem they're really trying to solve is that... internet companies don't take CSAM seriously enough. But the law (18 USC 2258A) already has pretty strict requirements for websites to report any CSAM they find to NCMEC (the National Center for Missing & Exploited Children) -- and they do. NCMEC reported that it received almost 21.4 million reports of CSAM from websites. Ironically, many supporters of EARN IT point to these numbers as proof that the websites aren't doing enough, while also saying it proves they don't have any incentive to report -- which makes no sense at all.
So... is the problem that those 21.4 million reports didn't result in the DOJ prosecuting enough abusers? If so... isn't the problem somewhere between NCMEC and the DOJ? Because the DOJ can already prosecute for CSAM, and Section 230 doesn't get in the way of that (it does not immunize against federal criminal law). And, as Riana noted in her article, this very same Senate Committee just recently heard about how the FBI knew about an actual serial child sex abuser, Larry Nassar, and turned a blind eye.
And, if NCMEC is the problem (namely in that it can't process the reports fast enough), then this bill doesn't help at all there either, because the bill doesn't give NCMEC any more funding. And, if the senators are correct that this bill would increase the reports to NCMEC (though it's not clear why that would work), wouldn't that just make it even more difficult for NCMEC to sort through the reports and alert law enforcement?
So... is the problem that companies aren't reporting enough CSAM? If you read the sponsors' myths and facts document, they make this claim -- but, again, the law (with really serious penalties) already requires them to report any CSAM. Taking away Section 230 protections won't change that. Reading between the lines of the "myths and facts" document, they seem to really be saying that the problem is that not every internet service proactively scans every bit of content, but as we've discussed that can't be the problem, because if that is the problem, EARN IT has a massive 4th Amendment problem that will enable actual child sex abusers to suppress evidence!
Basically, if you look step by step through the potential problems that supporters of the bill claim it tries to solve, you immediately realize it doesn't actually solve any of them. And, for nearly all of the potential problems, it seems like there's a much more efficient and effective solution which EARN IT does not do. Riana's post has a handy dandy table walking down each of these paths, but I wanted to make it even clearer, and felt that a table isn't the best way to walk through this. So here is her chart, rewritten (all credit to her brilliant work):
If online services don't report CSAM, in violation of 2258A, and the real problem is large-scale, widespread, pervasive noncompliance by numerous providers that knowingly host CSAM without removing or reporting it (NOT just occasional isolated incidents), then there's a very long list of potential remedies:
- Conduct a congressional investigation to determine the extent of the problem
- Hold a hearing to ask DOJ why it has never once brought a 2258A prosecution
- DOJ prosecutes all those providers for illegally hosting CSAM under 2252A as well as violating 2258A’s reporting requirements
- Amend 2258A(e) to increase penalties for noncompliance
- Amend Dodd-Frank to include 2258A compliance in corporate disclosure requirements (akin to Form SD)
- Encourage FTC investigation of noncompliant companies for unfair or deceptive business practices
- Encourage private plaintiffs to file securities-fraud class actions against publicly-traded providers for misleading investors by secretly violating federal reporting duties
If that's the actual problem (which supporters imply, but when you try to get them to say it outright they hem and haw and won't admit it), then it seems like any of the above list would actually be helpful here. And the real question we should be asking is: why hasn't the DOJ done anything here?
But what does EARN IT actually do?
- Amend Section 230 instead of enforcing existing law
- Don’t demand that DOJ explain why they aren’t doing their job
Okay, so maybe the supporters will say (as they sometimes admit) that most web sites out there actually do report CSAM under 2258A, but there are still some providers who don't report it, and these are occasional, isolated instances of failure to report by multiple providers, OR repeated failure to report by a particular rogue provider (NOT a large-scale problem across the whole tech industry). If anything, that seems more probable than the first version, which doesn't seem to be supported by any facts. However, here again, there are a bunch of tools in the regulator's toolbox to deal with this problem:
- Conduct a congressional investigation to determine the extent of the problem
- Hold a hearing to ask DOJ why it has never once brought a 2258A prosecution
- DOJ prosecutes those isolated violations or the particular rogue provider
Again, what it comes down to in this scenario is that the DOJ is not doing its job. The law is on the books, and the penalties can be pretty stiff (the first failure to report carries a fine of up to $150,000, and each subsequent failure up to another $300,000). If it's true that providers are not doing enough here, such penalties would add up to quite a lot, and the question again should be: why isn't the DOJ enforcing the law?
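To make the "adds up to quite a lot" point concrete, here's a quick back-of-the-envelope sketch using the fine schedule described above (the incident counts are made up, purely for illustration):

```python
# Back-of-the-envelope math on 2258A penalties, using the schedule described
# above: up to $150,000 for a provider's first knowing failure to report,
# and up to $300,000 for each subsequent failure. Incident counts below are
# hypothetical, purely for illustration.

FIRST_FAILURE = 150_000
SUBSEQUENT_FAILURE = 300_000

def max_exposure(unreported_incidents: int) -> int:
    """Maximum cumulative fines for a given number of unreported incidents."""
    if unreported_incidents <= 0:
        return 0
    return FIRST_FAILURE + SUBSEQUENT_FAILURE * (unreported_incidents - 1)

for n in (1, 10, 100, 1_000):
    print(f"{n:>5} unreported incidents -> up to ${max_exposure(n):,}")
# 1 -> $150,000; 10 -> $2,850,000; 100 -> $29,850,000; 1,000 -> $299,850,000
```

In other words, if providers really were ignoring the reporting requirement at any meaningful scale, the existing penalties would already run into the tens or hundreds of millions of dollars for a single provider, which makes the DOJ's total silence all the stranger.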
But instead of exploring that, here's what EARN IT actually does:
- Amend Section 230 instead of enforcing existing law
- Don’t demand that DOJ explain why they aren’t doing their job
Okay, so next up, Riana points out that maybe it's possible that the DOJ does regular investigations of websites failing to report CSAM in violation of 2258A, but those investigations are consistently resolved without charges or fines and do not become public. Then, there's a pretty simple option for Congress:
- Hold hearings to have DOJ explain why their investigations never result in charges
But, instead, here's what Congress is doing with EARN IT (stop me if you've heard this one before):
- Amend Section 230 instead of enforcing existing law
- Don’t demand that DOJ explain why they aren’t doing their job
Okay, okay, so maybe the reality is that the DOJ does in fact criminally prosecute websites for 2258A violations, but the reason there is no public record of any such prosecution ever is that all such court records are under seal. This would be... odd, first of all, given that the DOJ loves to publicize prosecutions, especially over CSAM. But, again, here's what Congress could do:
- Tell DOJ to move for courts to unseal all sealed records in 2258A cases
- Require DOJ to report data on all 2258A prosecutions since 2258A’s enactment
- Amend 2258A to require regular reporting to Congress by DOJ of enforcement statistics
- Investigate whether providers (especially publicly-traded ones) kept 2258A fines a secret
But, instead, here's what EARN IT does:
- Amend Section 230 instead of enforcing existing law
- Don’t demand that DOJ reveal to Congress its 2258A enforcement details
So, maybe the real problem is simply that the DOJ seems to be ignoring any effort to enforce violations of 2258A. If that's the case, Congress has tools in its toolbox:
- Hold a hearing to ask DOJ why it has never once brought a 2258A prosecution
- Amend 2258A by adding a private right of action so that victims can do the work that DOJ isn’t doing
Instead, EARN IT...
- Amend Section 230 instead of enforcing existing law
- Don’t demand that DOJ explain why they aren’t doing their job
So... that's basically all the possible permutations if the problem is -- as some supporters claim repeatedly -- that companies are regularly violating 2258A and not reporting CSAM that they find. And, in almost every case, the real question then should be: why isn't the DOJ enforcing the law? And there are lots of ways that Congress could deal with that. But EARN IT does literally none of them.
About the only thing that supporters of EARN IT have claimed in response to this point is that, because EARN IT allows for state AGs and civil suits, it is "adding more cops to the beat" to take on failures to report under 2258A. But... that's kinda weird. Because wouldn't it make a hell of a lot more sense to first find out why the existing cops don't bother? Because no one has done that. And, worse, when it comes to the civil suits, this response basically means "the DOJ doesn't care to help victims of CSAM, so we're leaving it up to them to take matters into their own hands." And that doesn't seem like a reasonable solution no matter how you look at it.
If anything, it looks like Congress putting the burden for the DOJ's perpetual failings... on the victims of CSAM. Yikes!
Of course, there are other possible problems here as well, and Riana details them in the chart. In these cases, the problems aren't with failure to report CSAM, but elsewhere in the process. So... if websites do properly report CSAM to NCMEC's CyberTipline, perhaps the problem is that CSAM isn’t being taken down promptly enough or reported to NCMEC “as soon as reasonably possible” as required by 2258A(a)(1)(A)(i).
Well, then, as Riana notes, there are a few things Congress could do:
- Debate whether to insert a firm timeframe into 2258A(a)(1)(A)(i)
- Hold a hearing to ask ICS providers of various sizes why delays happen and whether a specific timeframe is feasible
Instead, what EARN IT actually does is...
Okay, so if companies are reporting to NCMEC in compliance with 2258A, perhaps the problem is that the volume of reports is so high that NCMEC is overwhelmed.
Well, then, the possible solutions from Congress would seem to be:
- Hold a hearing to ask NCMEC what it would take to process all the reports they already get
- Appropriate those additional resources to NCMEC
But, what EARN IT does is...
- Amend Section 230 to induce providers to make even more reports NCMEC can’t keep up with
- Give zero additional resources to NCMEC
Okay, so maybe the websites do properly report CSAM to NCMEC, and NCMEC is able to properly alert the DOJ to the CSAM such that the DOJ should be able to go prosecute the actual abusers, but the DOJ doesn't act on the reports providers make, and doesn't make its own mandatory reports to Congress about internet crimes against children. That would be horrifying, but again, it would seem like there's a pretty clear course of action for Congress:
- Order GAO to conduct a study on what happens to CyberTips passed by NCMEC to DOJ
- Hold a hearing to ask DOJ why it isn’t acting on tips or filing its required reports
- Appropriate additional resources to DOJ
All of those would help, if this is the problem, but instead, here's what EARN IT actually does:
- Earmark $1 million for IT improvements
- Don’t demand that DOJ explain why they aren’t doing their job
You might sense a pattern here.
And finally, perhaps websites do report CSAM in compliance with 2258A to NCMEC's CyberTipline, and maybe NCMEC does relay important information to the DOJ... and horrifyingly, perhaps federal law enforcement is failing child sex abuse victims just as the FBI turned a blind eye to Larry Nassar’s abuse of dozens of child gymnasts for years.
Well, then it seems fairly obvious what Congress should do:
But here's what EARN IT does in that situation:
- Amend Section 230, effectively delegating enforcement for child sexual abuse to states and victims themselves
As Riana summarizes:
No matter what the problem with online CSAM is, EARN IT isn’t going to fix it. It’s only going to make things worse, both for child victims and for everyone who uses the internet. The truth about EARN IT is that either there isn’t a serious noncompliance problem among providers that’s pervasive enough to merit a new law, but Congress just can’t resist using Section 230 as a political punching bag to harm all internet users in the name of sticking it to Big Tech… or there is a problem, but the DOJ is asleep at the wheel – and EARN IT is a concession that Congress no longer expects them to do their jobs.
Either option should be shameful and embarrassing for the bill’s supporters to admit. Instead, this horrible legislation, if it passes, will be hailed as a bipartisan victory that shows Congress can still come together across the aisle to get things done. Apparently, harming Americans’ rights online while making CSAM prosecutions harder is something both parties can agree on, even in an election year.
So, whatever problem the backers of EARN IT think they're solving for, EARN IT doesn't do it. That seems like it should be a big fucking deal. But, instead of responding to these points, the sponsors claim that people highlighting this "don't care about CSAM."
Posted on Free Speech - 16 February 2022 @ 10:45am
from the please-stop dept
Is there a contest in the Senate to see who can propose the highest number of unconstitutional bills? You might think that the leader in any such contest would have to be a crazed populist like a Josh Hawley or a Ted Cruz, but it seems like Senator Amy Klobuchar is giving them a run for their money. Last summer, she released a bill to try to remove Section 230 protections for "medical misinformation," as declared by the Ministry of Speech (sorry, the Director of Health and Human Services). We already explained the very, very serious constitutional problems with such a bill.
And now she's back with a new bill, the NUDGE Act (Nudging Users to Drive Good Experiences on Social Media) which she announced by claiming it would "hold platforms accountable" for the amplification of "harmful content." You might already sense the 1st Amendment problems with that statement, but the actual text of the bill is worse.
In some ways, it's an improvement on the health misinformation bill, in that she's finally realized that for any bill to pass 1st Amendment scrutiny it needs to be "content neutral." But... it's not. It claims that it's taking a "nudge" approach -- popularized by Cass Sunstein and Richard Thaler's 2008 book of that name. But the whole point of "nudges" in that book is small tweaks to programs that get people to make better decisions, not threats of government enforcement and regulation (which is what Klobuchar's bill delivers).
The bill starts out fine... ordering a study on "content-agnostic interventions" to be done by the National Science Foundation (NSF) and the National Academies of Sciences, Engineering, and Medicine (NASEM), looking for interventions that would "reduce the harms of algorithmic amplification and social media addiction." And, sure, more research from independent and trusted parties sounds good -- and the NSF and NASEM generally are pretty credible and trustworthy. Perhaps they can turn up something useful, though historically we've seen that academics and government bureaucrats with no experience of how content moderation actually works tend to come up with some ridiculously silly ideas for how to "fix" it.
But, unfortunately, the bill goes beyond just the studies. Once the "initial study report" has been delivered, the bill then tries to force social media companies to adopt its recommendations, whether or not they'll work, or whether or not they're realistic. And... that is the unconstitutional part. You can call it "content-agnostic" all you want, but as soon as you're telling companies how they have to handle some aspect of the editorial discretion/content moderation on their sites, that's a 1st Amendment issue. A big one.
The bill requires the Commission it creates to start a rulemaking process which would release regulations for social media websites. The Commission would determine "how covered platforms should be grouped together" (?!?), then "determine which content-agnostic interventions identified in such report shall be applicable to each group of covered platforms..." and then (play the ominous music) "require each covered platform to implement and measure the impact of such content-agnostic interventions..."
And here's where anyone with even a tiny bit of trust and safety/content moderation experiences throws back their heads and laughs a hearty laugh.
Content moderation is an ever-evolving, constantly adapting and changing monster, and no matter what "interventions" you put in place, you know that you're immediately going to run into false positives and false negatives, and more edge cases than you can possibly imagine. You can't ask a bunch of bureaucrats to magically come up with the interventions that work. The people who are working on this stuff all day, every day are already trying out all sorts of ideas to improve their sites, and through constant experimentation, and adaptation, they keep gradually improving -- but it's a never-ending impossible task, and the idea that (1) government bureaucrats will magically get it right where companies have failed, and (2) a single mandate will work is beyond laughable (even excluding the constitutional concerns).
Also, the setup here seems totally disconnected from the realities of running a website. "Covered platforms" will be given 60 days to submit a plan to the Commission explaining how they'll implement the mandated interventions, and the Commission will approve or disapprove of the plan. Any changes to the plan also need to be approved by the Commission. Some trust and safety teams adjust their rules constantly. Imagine having to submit every such adjustment to a government Commission. This is the worst of the worst kind of government nonsense.
If companies fail to implement the plans to the Commission's liking, the bill says the websites will be considered to have committed "unfair or deceptive acts or practices," enabling the FTC to go after them with potential fines.
The bill has other problems as well, and seems to be based on a bunch of tropes and myths. It would only apply to sites that have 20 million active users (why that many? who the hell knows?), despite the fact that over and over again we've seen that laws targeting companies by size create very weird and problematic side effects. The bill is nonsense, written by people who don't seem to understand how social media, content moderation, or the 1st Amendment work.
And, bizarrely, the bill might actually have some support because (astoundingly?!?) it has bipartisan backing. While it's a Klobuchar bill, it was introduced with Senator Cynthia Lummis from across the aisle. Lummis has, in the past, whined about social media companies "censoring" content she wanted to see (about Bitcoin?!?), but also was a co-sponsor of a bill that would require social media companies to disclose when the government pressures them to remove content, which is kinda funny because that's what this bill she's sponsoring would do.
I'm all for doing more credible research, so that's great. But the rest of this bill is just unconstitutional, unrealistic nonsense. Do better, Senator.
Posted on Techdirt - 16 February 2022 @ 5:45am
from the because-she-didn't dept
The last time we wrote about Sarah Palin's defamation lawsuit against the NY Times was in 2017, when Judge Jed Rakoff dismissed the case, noting that Palin had failed to show "actual malice" by the NY Times, which is the necessary standard under the seminal defamation case (also involving the NY Times), NY Times v. Sullivan. However, two years later, the appeals court ruled that Rakoff violated procedural rules in doing so, and reinstated the case. It's been three years since then, and over the past few weeks an actual trial was held -- which is extraordinarily rare in defamation cases.
The "actual malice" standard is both extremely important and widely misunderstood. It does not mean that the speaker/publisher "really disliked" the subject or wanted to get them. It has a distinct meaning under the law, which is that that the publisher/speaker either knew it was false at the time of publication, or that they posted it with "reckless disregard" for whether it was true or false. And, again, people often misunderstand the "reckless disregard" part as well. It does not mean that they were simply careless about it. For there to be reckless disregard, it means that they had to have substantial doubts about the truth of the statement, but still published it.
In other words, for defamation of a public figure, you have to show that the publisher/speaker either knew what they were writing was false, or at least had strong reasons to believe it was false, and still went ahead with it. This is extremely important, because without it, public figures could (and frequently would) file nonsense lawsuits any time some small mistake was made in reporting on them -- and small mistakes happen all the time just by accident.
But, still, the Palin case went to trial, and before the jury even came back, Judge Rakoff announced that, as a matter of law (which the judge gets to rule on), Palin had failed to show actual malice. The oddity here was that he did so while the jury was still deliberating, and allowed the jury to continue doing so. The next day, the jury came to the same conclusion, finding the NY Times not liable for defamation as a matter of fact (juries decide matters of fact, judges decide matters of law -- and it's nice when the two agree).
It seems likely that Palin will appeal, in part because there is a contingent of folks in the extreme Trumpist camp -- including Supreme Court Justice Clarence Thomas and some of his close friends -- who have been campaigning over the past few years to overturn the "actual malice" standard found in the Sullivan case.
As many observers have noted, this case is probably not a very good test case for that question, but that doesn't mean Palin won't try to make it one -- and even if it's a weak case, we should be watching closely as any such case moves through the courts, because such efforts are, inherently, attacks on free speech. Weakening the actual malice standard would be a way for the powerful to more easily silence the powerless who speak up against them. The "actual malice" standard is a key element of strong free speech protections, and attempts to weaken it are attacks on that freedom.
Posted on Free Speech - 15 February 2022 @ 9:30am
from the why-doesn't-anyone-understand-this dept
We've talked about so many problems with the EARN IT Act, but there are more! I touched on this a bit in my post about how EARN IT is worse than FOSTA, but it came up again in the markup last week, and it showed that the Senators pushing for this do not understand the issues around the knowledge standard required here, or how various state laws complicate things. Is it somewhat pathetic that the very senators pushing for a law that would make major changes impacting a wide variety of things don't seem to understand the underlying mechanisms at play? Sure is! But rest assured that you can be smarter than a senator.
First, let's start here: the senators supporting EARN IT seem to think that if you remove Section 230 for a type of law-violating content (in this case, child sexual abuse material, or CSAM), that magically means websites will be liable for that content -- and because of that they'll magically make it disappear. The problem is that this is not how any of this actually works. Section 230 expert and law professor Jeff Kosseff broke the details down in a great thread, but I want to make it even clearer.
As a reminder, Section 230 has never been a "get out of jail free" card, as some of its critics suggest. It's a procedural benefit that gets cases that would otherwise lose on 1st Amendment grounds tossed out at an earlier stage (when it's much less costly, and thus, much less likely to destroy a smaller company).
So, here, the senators supporting EARN IT seem to think, falsely, that if they remove Section 230 for CSAM, (1) it will make websites automatically liable for CSAM, and (2) the legal risk will somehow spur them to take down all CSAM, making it go away. Both of these assumptions are wrong, and wrong in such stupid ways that, again, EARN IT would likely make the problem worse, not better. The real issue underlying both of them is the question of "knowledge." The legal folks like Jeff Kosseff dress this up as "mens rea," but the key thing is whether or not a website knows about the illegal content.
This impacts everything in multiple ways. As Kosseff points out in his thread, Supreme Court precedent (which you would know if you read just the first chapter of his Section 230 book) says that for a distributor to be held liable for content that is not protected by the 1st Amendment, it needs to have knowledge of the illegal content. Supporters of EARN IT counter with the correct, but meaningless, line that "CSAM is not protected by the 1st Amendment." And, it's not. But that's not the question when it comes to distributor liability. In Smith v. California, the Supreme Court overturned the conviction of Eleazar Smith (whose bookstore sold a book the police believed was obscene), holding that even if the book's content was not protected by the 1st Amendment, the state cannot impose liability on a distributor who does not have knowledge of the unprotected nature of the content. Any other result, Justice Brennan correctly noted, would lead distributors to be much more censorial, including of protected speech:
There is no specific constitutional inhibition against making the distributors of food the strictest censors of their merchandise, but the constitutional guarantees of the freedom of speech and of the press stand in the way of imposing a similar requirement on the bookseller. By dispensing with any requirement of knowledge of the contents of the book on the part of the seller, the ordinance tends to impose a severe limitation on the public's access to constitutionally protected matter. For if the bookseller is criminally liable without knowledge of the contents, and the ordinance fulfills its purpose, he will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected as well as obscene literature. It has been well observed of a statute construed as dispensing with any requirement of scienter that: 'Every bookseller would be placed under an obligation to make himself aware of the contents of every book in his shop. It would be altogether unreasonable to demand so near an approach to omniscience.' The King v. Ewart, 25 N.Z.L.R. 709, 729 (C.A.). And the bookseller's burden would become the public's burden, for by restricting him the public's access to reading matter would be restricted. If the contents of bookshops and periodical stands were restricted to material of which their proprietors had made an inspection, they might be depleted indeed. The bookseller's limitation in the amount of reading material with which he could familiarize himself, and his timidity in the face of his absolute criminal liability, thus would tend to restrict the public's access to forms of the printed word which the State could not constitutionally suppress directly. The bookseller's self-censorship, compelled by the State, would be a censorship affecting the whole public, hardly less virulent for being privately administered. Through it, the distribution of all books, both obscene and not obscene, would be impeded.
While there are some other cases, this remains precedent and it's difficult to see how the courts would (or could) say that a website is strictly liable for content that it does not know about.
This creates a bunch of problems. First and foremost, removing 230 in this context then gives websites not an incentive to do more to find CSAM, but actually to do less to find CSAM, because the lack of knowledge would most likely protect them from liability. That is the opposite of what everyone should want.
Second, it creates various problems in how EARN IT interacts with various state laws. As we've pointed out in the past, EARN IT isn't just about the federal standards for CSAM; it opens up websites to legal claims under state laws as well. And the knowledge standards regarding CSAM in state laws are, literally, all over the map. Many do require actual knowledge (which, again, reverses the incentives here). Others, however, have much more troubling standards like "should have known" or "good reason to know," or, in some cases, a standard of "recklessness" for not knowing.
Some of those, if challenged, might not stand up to the 1st Amendment scrutiny found in Smith v. California, which should require actual knowledge, but either way the law would create a huge mess -- one that mostly incentivizes companies not to look for this content. And considering that the sponsors of the bill keep saying that the whole point of the bill is to get companies to do more looking for CSAM, they've literally got the entire law backwards.
What's most troubling is that when Senator Blumenthal was pushed on this point during the markup, and it was mentioned that different states have different standards, rather than recognizing one of the many (many) problems with the bill, he literally suggested that he hoped more states would change their standards to a potentially unconstitutional level, in which actual knowledge is not required for liability. That's just setting up a really dangerous confrontation with the 1st Amendment.
If Senator Blumenthal and his legislative staffers actually cared about stopping CSAM, they would be willing to engage and talk about this. Instead, they refuse to engage, and mock anyone who brings up these points. Perhaps it's fun for them to generate false headlines while fundamentally causing massive problems for the internet and speech and making the CSAM problem worse while pretending the reverse is happening. But some of us find it immensely problematic.
Posted on Techdirt - 14 February 2022 @ 1:38pm
from the so,-so-much dept
Make sure you read the update at the end
This is a story that seems like it was created just to get Techdirt coverage, given how many issues we cover that it touches on. Here's how it starts: Tulane law professor Ann Lipton, an expert on corporate governance and corporate law, wrote an academic paper about "Capital Discrimination." It's really interesting, and you should read it -- and a lot more people have been reading it over the last few days because of the situation I'm about to describe. The gist of the paper is that sex and gender discrimination happens in disputes over corporate structures/ownership, but that we don't generally have language in typical discussions of corporate ownership that recognizes this very real dynamic. The article highlights multiple examples where courts try to apply the more traditional language of corporate ownership disputes in cases where there is clearly an element of sex discrimination.
One of the examples cited is In re: Shawe & Elting LLC, et al., which involves a somewhat incredible dispute between two people, Philip Shawe and Elizabeth Elting, who founded a company together, Transperfect Global. Without getting into all of the sordid details, Shawe and Elting had been in a relationship very early on, around the time of the formation of the business. At some point they were engaged to be married, though, according to the documents, Elting called off the engagement in 1997. From all of the details discussed in the opinion in the legal dispute between them, one could surmise that Shawe and Elting -- despite working together as co-CEOs, being the only two members of the board, and building up the company into a massive success, employing thousands of employees, and making hundreds of millions of dollars in revenue a year -- spent an awful lot of time fighting with each other in incredibly immature ways. It seems like they had been able to work together semi-amicably for over a decade after their personal relationship broke off, but things went off the rails sometime around 2012. The opinion linked above has detail after detail of incredibly petty and ridiculous behavior, sometimes on both of their parts, but quite frequently driven by Shawe. Here's just one example from the ruling:
On February 6, 2013, Elting was asked to approve a bonus for an employee working in one of the divisions (TDC) Shawe managed. Elting was willing to approve the bonus if Shawe approved other “raises that [were] being held up.” Intent on eliminating dual approvals, Shawe would not sign off on the raises Elting wanted to implement unless she would agree that “other small TPT/TDC decisions go through with either partner’s approval...to avoid hostaging and eventual nuclear war.” Elting would not agree: “No, Phil. Not how it works here . . . the arrangement is to share it all with both of us. If there is good justification and transparency I will never hold things up.” Shawe would not relent. He instructed Boodram not to release any of the raises: “They will remain hostaged... until we figure out how to make decisions in general without hostaging.” The episode was played out in an email string on which many of the Company’s senior managers were copied.
In an email exchange on February 14, 2013, Shawe put a new hire for one of Elting’s divisions (TPT) “[o]n hold” to pressure Elting to abandon dual approvals. Kevin Obarski, Senior Vice President of Sales, who was copied on the email string, chimed in with a private email to Shawe telling him that he was acting like a child:
You told me in New Orleans that I should tell Liz when she is being crazy- This is me telling you that you are being crazy. I know you are going through a tough time- but you are acting like a child, ruining the rep that you have spent two decade[s] to build and all for what. Because you need to run things by people. It is wasting your own and everyone’s time- just so you can be right. Who cares about being right. We are about to change the world and you are wasting your energy and time on something that does not matter.
In his private response to Obarski, Shawe revealed his plan to “create constant pain” for Elting until she acquiesced to his demands. He wrote, in relevant part:
I will not run small things by anyone for my divisions. I will make decisions for my division...and I will hold up Liz’s TPT stuff till they are pushed through. I cannot fight on every small decision. I cannot and will not live that way. I will not change my position. I will simply create constant pain until we go back to the old way of doing things...
There are multiple stories along these lines -- many of which appeared to be petty disputes between two co-CEOs posturing over who had power (there's a side issue in which technically Elting owned 50% of the business and Shawe 49%, while the other 1% was ostensibly held by Shawe's mother in order to take advantage of being a "majority woman-owned business"; in practice, Shawe controlled his mother's share, so it was effectively a 50/50 company). Many of the business disputes seem incredibly counter-productive, and seem to involve trying to make life difficult for the other one by delaying/hindering business decision making. As they argued, some of the behavior went into really, really questionable territory:
On the evening of December 31, 2013, when he knew “[w]ith virtual certainty” that Elting would not be in her office, Shawe secretly accessed her locked office on four different occasions using a master key card with the intent to obtain the hard drive from her computer. Having gained this access, Shawe dismantled Elting’s computer, removed the hard drive, made a mirror image of it, and reinstalled the hard drive later that night. A log of the key card access reflects that Shawe entered Elting’s office on New Year’s Eve at 4:29 p.m., 5:34 p.m., 7:22 p.m., and 7:47 p.m. Shawe began reviewing the contents of the hard drive image the next day.
In addition to breaking in to Elting’s computer, Shawe arranged to access the hard drive on her office computer remotely. Using the personal identification number he had previously obtained from the back of Elting’s computer, he mapped to her hard drive from his computer through the Company’s computer network. Shawe accessed Elting’s computer in this manner on at least twenty separate occasions from April 3, 2014, to July 23, 2014. At some point, either through reviewing the hard drive image or his remote access snooping (he could not remember precisely when or which method he used), Shawe discovered that there was a .pst file of Elting’s Gmails on her hard drive. Thereafter, when Shawe remotely accessed Elting’s hard drive, he downloaded a replica of the .pst file of Elting’s Gmails (each later .pst file having accumulated more of Elting’s Gmails) to thumb drives so he could view Elting’s Gmails privately on his laptop, which allowed him to conceal what he was doing. Through these stealthy actions, Shawe gained access to approximately 19,000 of Elting’s Gmails, including approximately 12,000 privileged communications with her counsel at Kramer Levin and her Delaware counsel in this litigation. Presumably concerned about the nature of Shawe’s actions, Sullivan & Cromwell LLP, Shawe’s lead litigation counsel in this Court, told him at the outset of its retention in March 2014 not to send information about the substance of Elting’s Gmails to anyone at the firm.
But some of the issues go way beyond arguments over how the business should be run or how its finances should work -- including some pointers that suggest odd behavior in response to the failure of the personal relationship. From a footnote:
Elting’s testimony on these events gives color to her and Shawe’s relationship. After the break-up, Shawe became very angry and “got under the bed and he stayed there for at least a half hour.” Shawe repeated the same bizarre behavior years later when Elting was in Buenos Aires, Argentina, on business. Shawe showed up unannounced at Elting’s hotel room, refused to leave and again “got under the bed” for about a half hour. Shawe also oddly invited himself and his mother (Ms. Shawe) to Elting’s wedding in Montego Bay, Jamaica. Id. 13-17 (Elting). Shawe did not deny taking any of these actions.
You can see how this dispute was of interest to Lipton's paper. It's one of multiple examples that fits right in and she quotes from the opinion directly. A draft of her paper was uploaded (like many pre-publication papers) to the Elsevier-owned SSRN website, and it was scheduled to be published in the Houston Law Review. However, if you go to the SSRN link now it shows the following:
This paper has been removed from SSRN at the request of the author, SSRN, or the rights holder.
It was not removed at the request of the author or of "the rights holder." It was removed by SSRN because Shawe had a lawyer send a ridiculous SLAPPy cease-and-desist letter, claiming that the law review article was defamatory. The cease and desist, from lawyer Martin Russo, demands that the article be removed.
The defamatory article defines "capital discrimination" as "when women principals experience sex discrimination" and then incorrectly identifies four alleged instances of litigated cases, including one involving Mr. Shawe, that demonstrate "The Many Faces of Capital Discrimination." The article admits that "sex discrimination was neither alleged nor proved," but nonetheless falsely asserts that the lack of allegations and proof was "because there is no clear avenue of recourse" and that "these stories exemplify instances where firm ... partners acted against women principals for reasons that at least appear to have stemmed from the principals' status as women, and the managers' relationship to the principals specifically as women... What these scenarios have in common is that the managers may have acted because of the woman's sex."
The first of several false examples of alleged discrimination is called "Clash of the Founders," and details certain findings of the Delaware Chancery Court regarding Mr. Shawe's alleged conduct. After one paragraph about a failed romantic relationship between Elizabeth Elting and Mr. Shawe in 1999, the article factually ignores 12 years of profitable joint business operations to arrive at the 2012 disagreements between the co-CEOs over the direction of the company. What follows are anecdotes plucked from the record which have no obvious connection to sex or gender except for the fact that the co-CEO/founders were a man and a woman. Without any factual basis, the article falsely states "[r]eading the Delaware court's findings and the parties' submissions, the gendered aspects of the conflict are difficult to miss." In fact, the gendered aspects of the conflict are difficult to find, because they do not exist. The article then goes on to more specifically falsely accuse Mr. Shawe of so-called capital discrimination by "refusing to pay dividends" and "making a low-ball buyout offer" to his former partner. Finally, the article falsely states, in the absence of any claim or proof of sex or gender-based conduct, that if "Shawe's stalking and undermining of Elting's authority had been identified as gender-based harassment, his breach of fiduciary duty to the TransPerfect corporation may have persuaded the court to impose a non-competition order, allowing for a sales process that would have been more favorable to Elting."
The crux of Shawe's complaint is that their legal dispute had nothing to do with their previous relationship, and was entirely a more traditional business dispute. But... that's an opinion. As is Lipton's opinion regarding how the dispute relates to the thesis of her paper. And opinions are not defamatory. Other elements in the paper, including the references to Shawe's terrible behavior, seem obviously protected under the fair reporting privilege. Honestly, the crux of Shawe/Russo's complaint is that they don't like how Lipton characterizes the nature of the dispute, but that's protected opinion and not defamatory.
Also, if Shawe wants to contend that the behavior at issue in the lawsuit was solely because of differences in how the business should be run, and not having anything to do with his failed personal relationship with Elting, he maybe should not have done the following, as detailed in the court's opinion:
Shawe sought to have Elting criminally prosecuted by referring to her as his ex-fiancée seventeen years after the fact when filing a “Domestic Incident Report” as a result of a seemingly minor altercation in her office.
So, maybe it wasn't Lipton who was connecting the failed relationship with the business dispute -- perhaps it was Shawe himself who sought to make use of the failed relationship claim to give him leverage in the business dispute, including seeking to have Elting criminally prosecuted by filing a "domestic incident report."
Given all of this, it's hard to see the cease and desist letter as anything more than blustery nonsense. But, ridiculously, SSRN pulled the paper, as did the Houston Law Review. To their credit, Lipton's employer, Tulane University, is standing behind her:
The article is a thorough and meticulously-sourced scholarly work. The factual assertions regarding Mr. Shawe are sourced from publicly-available court opinions and filings in the litigation between Mr. Shawe and his former business partner. The source of each statement is set forth in the Article's footnotes. The “cease and desist” letter of December 23, 2021, does not contend that the facts attributed to Mr. Shawe are false. Rather, the letter takes issue with the Article's conclusions and commentary on the facts presented (i.e., that Mr. Shawe’s conduct is an example of sex discrimination).
The Article’s conclusions constitute opinions protected by the First Amendment. As the United States Supreme Court has observed, “[u]nder the First Amendment there is no such thing as a false idea. However pernicious an opinion may seem, we depend for its correction not on the conscience of judges and juries but on the competition of other ideas.”
Furthermore, it is well-settled that a statement of opinion based on fully disclosed facts is not actionable unless the stated facts are themselves false and defamatory. The rationale behind this rule is clear: When the facts underlying a statement of opinion are disclosed, readers understand they are getting the author's interpretation of the facts presented. “Because the reader understands that such supported opinions represent the writer's interpretation of the facts presented, and because the reader is free to draw his or her own conclusions based upon those facts, this type of statement is not actionable in defamation."
The letter also points out to SSRN that no terms of service have been violated, and they believe SSRN should repost the article.
So combine this all together and we have a situation in which Shawe is angry about how he is portrayed in the paper, but that doesn't make it defamatory. The cease and desist letter has all the hallmarks of a frivolous SLAPPy legal threat. It highlights no false statements of fact, but merely calls out the statements of opinion made by Lipton in her paper, which are based on the facts that -- again -- Shawe's letter does not dispute. So this seems like a pretty blatant SLAPP threat.
Then, let's get to SSRN, which should not be pulling down the article. First, even a semi-competent review of the cease and desist would find that the defamation claims appear baseless. One would hope that SSRN would do such an analysis and not fall prey to a heckler's veto. Second, even if there were defamatory content (and again, that seems like a huge stretch), SSRN would be easily protected under Section 230. SSRN is an interactive computer service under the law, and cannot be held liable for the speech of third party content providers, such as Lipton.
In fact, this situation highlights the importance of Section 230, in that without Section 230, bumptious threats like this one would enable anyone to get just about anything pulled off of an online host. The nature of Section 230's immunity is that it allows all sorts of different kinds of websites to host content without having to freak out at the first sign of a legal threat over the content uploaded by a user. SSRN is within its rights to pull down any content, of course, but the decision to do so here strongly suggests that (1) it did not carefully review the letter and the paper, or (2) it doesn't understand how Section 230 protects it here.
Finally, there's the Streisand Effect. I'd never heard about this paper, or the dispute between Shawe and Elting. And now I and many, many, many more people have read the article (and I went and read the opinion in the Delaware Chancery Court with many, many, many more details on Shawe's behavior). So, once again, in filing a highly questionable legal threat intended to suppress this information, Shawe and Russo have only served to make people much, much, much more aware of the court record regarding Shawe's behavior.
Update... and just as I was putting the finishing touches on this post, SSRN put the paper back up. On Twitter, it explained itself as follows:
To add some detail, SSRN has always had the policy of taking down any paper related to a defamation or other legal claim while the claim was being investigated. To date, we have not had problems with this approach and I am sorry how this situation has played out. We have now had lengthy discussions with the legal department and will be amending the approach going forward. Your paper has been reposted, all counts are updated, and I apologize for the confusion.
And one can argue that taking it down while you investigate is a reasonable policy -- though a key part of the way Section 230 works is that you don't need to. And, frankly, that's the appropriate setup, because it recognizes that the potential harm from suppressing legal speech is a huge problem. In the end, though, it's good that SSRN appears to be revising its policy.
Posted on Techdirt - 14 February 2022 @ 10:44am
from the disgusting dept
Last autumn, you may recall, the St. Louis Post-Dispatch published an article revealing that the Missouri Department of Elementary and Secondary Education (DESE) was leaking the Social Security numbers of teachers and administrators, past and present, by putting that information directly in the HTML. The reporters at the paper ethically disclosed this to the state, and waited until this very, very bad security mistake had been patched before publishing the story. In response, rather than admitting that an agency under his watch had messed up, Missouri Governor Mike Parson made himself into a complete laughingstock, by insisting that the act of viewing the source code on the web page was nefarious hacking. Every chance he had to admit he fucked up, he doubled down instead.
The following month, the agency, DESE, flat out admitted it screwed up and apologized to teachers and administrators, and offered them credit monitoring... but still did not apologize to the journalists. FOIA requests eventually revealed that before Governor Parson had called the reporters hackers, the FBI had already told the state that no network intrusion had taken place and it was also revealed that the state had initially planned to thank the journalists. Instead, Parson blundered in and insisted that it was hacking and that people should be prosecuted.
Hell, three weeks after it was revealed that the FBI had told the state that no hacking had happened, Parson was still saying that he expected the journalists to be prosecuted.
Finally, late on Friday, the prosecutors said that they were not pressing charges and considered the matter closed. The main journalist at the center of this, Josh Renaud, broke his silence with a lengthy statement that is worth reading. Here's a snippet:
This decision is a relief. But it does not repair the harm done to me and my family.

My actions were entirely legal and consistent with established journalistic principles. Yet Gov. Mike Parson falsely accused me of being a “hacker” in a televised press conference, in press releases sent to every teacher across the state, and in attack ads aired by his political action committee. He ordered the Highway Patrol to begin a criminal investigation, forcing me to keep silent for four anxious months.

This was a political persecution of a journalist, plain and simple.

Despite this, I am proud that my reporting exposed a critical issue, and that it caused the state to take steps to better safeguard teachers’ private data.

At the same time, I am concerned that the governor’s actions have left the state more vulnerable to future bad actors. His high-profile threats of legal retribution against me and the Post-Dispatch likely will have a chilling effect, deterring people from reporting security or privacy flaws in Missouri, and decreasing the chance those flaws get fixed.

This has been one of the most difficult seasons of my nearly 20-year career in journalism.
Later in the letter, Renaud notes that just a week earlier Parson himself had decried the treatment of his rejected nominee to lead the state's Department of Health and Senior Services, complaining that "more care was given to political gain than the harm caused to a man and his family." The same, Renaud observed, could be said of Parson's treatment of him:
Every word Gov. Parson wrote applies equally to the way he treated me.
He concludes by hoping that "Parson's eyes will be opened, that he will see the harm he did to me and my family, that he will apologize, and that he will show Missourians a better way."
And Parson showed himself to be a bigger man and did exactly that... ha ha, just kidding. Parson just kept digging, and put out a truly obnoxious statement, with no apology and continuing to insist that Renaud hacked the government's computers even though -- again, this is important, lest you just think the governor is simply technically ignorant -- the FBI had already told him that there was no hacking:
"The hacking of Missouri teachers' personally identifiable information is a clear violation of Section 56.095, RSMo, which the state takes seriously. The state did its part by investigating and presenting its findings to the Cole County Prosecutor, who has elected not to press charges, as is his prerogative.
The Prosecutor believes the matter has been properly addressed and resolved through non-legal means.
The state will continue to work to ensure safeguards are in place to protect state data and prevent unauthorized hacks.
This whole statement is utter hogwash and embarrassing nonsense. Again, there was no hacking whatsoever. The state messed up by putting information that should never, ever be in HTML code into HTML code, making it accessible to anyone who viewed the source on their own computer. The state messed up. The state failed to secure the data. The state sent that data to the browsers of everyone who visited certain pages on its public websites. Renaud did exactly the right thing. He discovered the terrible security flaw the state had built into its own website, ethically reported it, waited until the state fixed its own error, and then reported on it.
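To make concrete just how thin that "protection" was, here is a minimal sketch, in Python, of why Base64-encoded data sitting in a page's HTML is effectively plaintext to anyone who clicks "view source." The field name and record below are invented for illustration; this is not DESE's actual markup or code:

```python
import base64
import json

# Hypothetical record of the sort embedded in a vulnerable page
# (field name and values are invented for illustration).
record = {"name": "Jane Doe", "ssn": "123-45-6789"}
blob = base64.b64encode(json.dumps(record).encode()).decode()

# What the server would ship to every visitor's browser:
html_source = f'<div data-teacher="{blob}"></div>'

# Any visitor can pull the blob out of the source and reverse it locally.
# Base64 is an encoding, not encryption: no key, no password, no "hacking."
encoded = html_source.split('data-teacher="')[1].split('"')[0]
print(base64.b64decode(encoded).decode())
# -> {"name": "Jane Doe", "ssn": "123-45-6789"}
```

Actual encryption would have required a secret key the browser never receives; Base64 requires nothing at all.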
Parson knew from the beginning that no hacking occurred. The FBI told the state that no hacking occurred. The state had prepared to thank Renaud and his colleagues at the St. Louis Post-Dispatch. It was only after Parson decided to deny, deny, deny and blame, blame, blame reporters for pointing out Parson's own government's failings, that this whole thing got out of hand.
The prosecutors have their own reasons for declining to prosecute, but the most likely reason is they knew they'd get laughed out of court and it would make them and Parson look even more ridiculous. Renaud chose to give a heartfelt write-up of what Parson's nonsense put him through, and asked in the politest way possible for Parson to look deep inside at the harm he had caused and to apologize. Instead, Parson quadrupled down, continued to insist that his own government's failings could be blamed on a "hack," and insisted that he's trying to "protect" the state when all he's done is show why no serious tech company should do business in such a state.
Missouri: elect better politicians. Parson is an embarrassment.
Posted on Techdirt - 14 February 2022 @ 9:38am
from the bad-ideas dept
In admitting that his EARN IT Act is really about attacking encryption, Senator Richard Blumenthal said he wouldn't agree to keep encryption out of the bill because he worried that it would give companies a "get-out-of-jail-free card." That's nonsense for multiple reasons, which we explained in that post, but the fact is Blumenthal's bill actually does contain a "get-out-of-jail-free card" that is incredibly damaging. It's one that child sexual abusers may be able to use to suppress any evidence collected against them, and it would not just undermine the very point of the EARN IT Act, but would make it that much harder to do the thing that needs to be done: stopping such abusers.
We touched on this a little bit in our earlier post about the mistakes senators made during the markup, but it's a little wonky, so it deserves a deeper exploration. Here's a good short description from Kir Nuthi in Slate:
As it stands, most companies that host online content voluntarily turn over huge amounts of potential evidence of child abuse to the National Center for Missing and Exploited Children. Because private companies search for this evidence voluntarily, courts have held that the searches are not subject to the Fourth Amendment. But the EARN IT Act threatens to disrupt this relationship by using the threat of endless litigation and criminal prosecution to strongly pressure private companies to proactively search for illegal material. Thanks to how the EARN IT Act amends Section 230, companies are more exposed to civil and criminal liability if they don’t follow the government’s “or else” threat and search for child sexual abuse material.
Currently, tech platforms have an obligation to report but not search for suspected instances of child sexual abuse material. That’s why searches today are constitutional—they’re conducted voluntarily. By encouraging and pressuring private sector searches, the EARN IT Act casts doubt on every search—they’d no longer be voluntary. Thus, the Fourth Amendment would apply, and evidence collected without a warrant—all child sexual abuse material in this case, since private parties can’t get a warrant—would be at risk of exclusion from trial.
The Supreme Court has long held that when the government “encourages” private parties to search for evidence, those private parties become “government agents” subject to the Fourth Amendment and its warrant requirement. That means any evidence these companies collect could be ruled inadmissible in criminal trials against child predators because the evidence was procured unconstitutionally.
Put simply, thanks to the EARN IT Act, under the Exclusionary Rule, defense attorneys could argue that evidence was collected in violation of the Fourth Amendment and should be excluded from trial. As a result, the bill could lead to fewer convictions of child predators, not more.
In short: under the current setup, companies can search for child sexual abuse material (CSAM), and if they find it they must report it to NCMEC (and remove it). This is good and useful and helps prevent further spread. But under the 4th Amendment, if the government mandates a search, a warrant is required before that search can happen. So, if the government mandates the search -- and as various senators made clear in both their "myths and facts" document and in the markup hearing, that's exactly what they intend this bill to do -- then anyone charged based on evidence found via such a search would have an unfortunately strong argument that the evidence was collected under state action and, to survive 4th Amendment review, would have required a warrant.
In other words, it hands terrible criminals -- those involved in the abuse of children -- a way to suppress the evidence used against them on 4th Amendment grounds. Such a regime would make it more difficult to prosecute actual criminals. But, even worse, it would create a perverse and dangerous precedent in which companies would be strongly encouraged not to use basic scanning tools to find, remove, and report CSAM content, because anything they found would no longer be usable in prosecutions.
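For context on what those "basic scanning tools" do, here is a purely illustrative sketch of voluntary hash-matching. Real systems rely on perceptual hashes (e.g., PhotoDNA) and vendor-supplied APIs rather than a plain SHA-256 lookup, and the hash set and function below are invented:

```python
import hashlib

# Hypothetical set of hashes of already-known abusive images (placeholder value).
KNOWN_HASHES = {"0" * 64}

def scan_upload(data: bytes) -> bool:
    """Return True if an uploaded file matches a known hash and should be reported."""
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES

# Today a platform runs checks like this voluntarily and reports matches to NCMEC.
# EARN IT's "or else" pressure is what risks recasting the same check as a
# warrantless government search, putting the resulting evidence at risk.
print(scan_upload(b"example upload bytes"))  # -> False for this placeholder data
```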
So the senators' failure to understand how the 4th Amendment works means that EARN IT (beyond all its other problems) creates a constitutional mess that is, effectively (and almost literally), a "get-out-of-jail-free card" for criminals.
Posted on Techdirt - 11 February 2022 @ 9:46am
from the wow dept
We've said it over and over again, if libraries did not exist today, there is no way publishers would allow them to come into existence. We know this, in part, because of their attempts to stop libraries from lending ebooks, and to price ebooks at ridiculous markups to discourage libraries, and their outright claims that libraries are unfair competition. And we won't even touch on their lawsuit over digital libraries.
Anyway, in other book news, you may have heard recently about how a Tennessee school board banned Art Spiegelman's classic graphic novel about the Holocaust, Maus, from being taught in an eighth-grade English class. Some people called this a ban, while others said the book is still available, so it's not a "ban." To my mind, school boards are not the teachers; the teachers should be able to come up with their own curriculum, as they know best what will educate their students. Also, Maus is a fantastic book, and the claim that it was banned because of "rough, objectionable language" and nudity is utter nonsense.
Either way, Maus is now back atop various best seller lists, as the controversy has driven sales. Spiegelman is giving fun interviews again where he says things like "well, who's the snowflake now?" And we see op-eds about how the best way to get kids not to read a book... is to assign it in English class.
But, also, we have publishers getting into the banning business themselves... by trying to capitalize on the sudden new interest in Maus.
Penguin Random House doesn't want this new interest in Maus to lead to... people taking it out of the library rather than buying a copy. They're now abusing copyright law to demand the book be removed from the Internet Archive's lending library, and they flat out admit that they're doing so for their own bottom line:
A few days ago, Penguin Random House, the publisher of Maus, Art Spiegelman's Pulitzer Prize-winning graphic novel about the Holocaust, demanded that the Internet Archive remove the book from our lending library. Why? Because, in their words, "consumer interest in 'Maus' has soared" as the result of a Tennessee school board's decision to ban teaching the book. By its own admission, to maximize profits, a Goliath of the publishing industry is forbidding our non-profit library from lending a banned book to our patrons: a real live digital book-burning.
This is just blatant greed laid bare. As the article notes, whatever problems US copyright law has, it has enshrined the concept of libraries, and the right to lend out books as a key element of the public interest. And the publishers -- such as giants like Penguin Random House -- would do anything possible to stamp that right out.
More posts from Mike Masnick >>
Re:
The way the courts have justified this is by effectively saying the copyright is on the framing of the shot -- the "creative" choice of where to point the camera to frame the image or video...
Re: Larry Tribe
During the Trump era, he fell in with a weird bunch, and seemed to have lost the plot: https://www.buzzfeednews.com/article/josephbernstein/larry-tribe-why
Re:
Just to be clear, nothing in this laughable word salad has anything to do with how anything in the law actually works. And, I have never worked for GoFundMe, and I don't advise companies on legal issues, and I personally think GoFundMe's original plan was clear wire fraud and have no idea why they would do that.
But, really, you're incredibly ignorant of basically everything. And you should maybe stop.
Re: Re: Re: Re: Uh
Neither of those claims are true. Both are blatantly false. Google has sponsored projects we've done over the years, but so have dozens of companies.
And we were never sued over anything related to any of this. Nor would we (because we have never been "on Google's payroll" nor even particularly supportive of the company). You're very, very confused.
Re: Uh
You have no first amendment right to make posts on social media websites.
That is correct. But this bill is not about that.
A website, however, DOES have a 1st Amendment right to determine how to moderate its own content. And that's where this bill creates a problem, by forcing websites to moderate how the government sees fit, rather than how they see fit.
Re: Re: Uh
I must be the worst Google shill ever.
Here's where I noted that their advertising scheme almost certainly violates antitrust laws:
https://www.techdirt.com/articles/20220114/22313448287/states-3rd-amended-antitrust-complaint-against-google-looks-lot-more-damning.shtml
Here's where I talk about ditching all Google tracking from our website:
https://www.techdirt.com/articles/20210726/09441047251/techdirt-is-now-entirely-without-any-google-ads-tracking-code.shtml
Here we are calling out Google's ridiculous net neutrality position:
https://www.techdirt.com/articles/20150820/10454632018/google-lobbied-against-real-net-neutrality-india-just-like-it-did-states.shtml
Here we are calling out Google's obnoxious trade position:
https://www.techdirt.com/articles/20160610/15124434685/google-comes-down-wrong-side-tpp.shtml
I could go on and on and on. But, at some point you have to think that, if I'm a "Google shill," then I'm clearly not a particularly good one.
But, of course, you weren't serious. You can't respond to the actual points so you need to spread some misinformation since that's the best you can do.
Re: Re: Re:
If you remember reading the first decision Mike the only thing required to obtain a protective order is a reasonable belief that a party to the case acquired the evidence through means outside of the normal means.
The issue is not the rules for a protective order, but rather the rules for prior restraint, which are controlled by the 1st Amendment. You don't get to avoid the 1st Amendment here.
The difference is I dont get paid for legal advice.
Nor do I
I think that with the coming criminal investigations of GoFundMe and all the "expert advisors" who are now facing criminal investigation for conspiracy to commit fraud, yourself and those like you AKA Generation X talking heads who got their foot in the tech door early really need to take a step back and think.
Lol, wut?
Its going to catch up to all the "Mike Masniks" who advised GoFundMe that it was perfectly legal "under their ToS" to keep the money.
Whatever drugs you've been taking, you should stop.
Its going to catch up to all the "Mike Masniks" who advised GoFundMe that it was perfectly legal "under their ToS" to keep the money.
You are really, really, high.
Re: Clarifying and Offering insight
1) None of that has anything to do with his attempt to silence the paper with a SLAPP threat, even if he disagrees with the characterization.
2) The court records literally note that -- contrary to your claim -- Shawe did not deny the bed incident.
3) As noted above, even if this was mostly a business dispute, from the court ruling, it was Shawe who filed domestic incident reports that brought the former relationship into the business dispute...
Re: Re: Re: Maus
It is utter nonsense. Nothing in the book is problematic. Anyone who thinks it has problems is too dumb to respond to.
So, yes, my views are exceptionally clear: Maus is a valuable contribution to literature and it is not at all problematic for middle school kids to read. Teachers should decide curriculum -- not school boards, not parents. And copyright should not be used to stop libraries from lending books. I don't think what the school board did here was a "ban," but dictating books teachers can't use is still a horrible act of censorship by a governing board.
No weasel words.
Re:
That may be true of certain private equity firms, but not VC.
Re: Maus
I literally said in the article that not everyone considers it a ban. Did you even read the article before commenting?
Re:
Step 2: Have the FBI raid their offices so you can get all of their attorney-client documents outside of normal discovery
If this happened, then there are all sorts of remedies for it. The problem is that, so far, no evidence of this has been provided. If, at some future point, it is, then there are plenty of serious remedies that can be brought forth.
Re: Re: Re: Disallowed
Masnick, your entire post was this very thing.
This post claims that being exposed to an opposing viewpoint is an affront to my very existence? That's a weird thing to say, since nothing in this post takes any view one way or the other on being exposed to viewpoints. It's literally just about the question of whether or not Section 230 matters here.
I honestly don't mind that people listen to Rogan. I think everyone here has free speech rights, but I think the people making a big deal out of Rogan aren't doing themselves any favors either, and are playing into a silly martyrdom.
Honestly, my only complaint with Spotify is their nonsense desire to lock up open podcasts into their proprietary audio format. If Rogan had stayed as a regular podcast none of this would matter.
You are a mid-wit.
I mean, fuck, you're the guy who can't read the fucking post. So if I'm a mid-wit, what the fuck does that make you?
Re:
Well the good news was that Earn it didn’t get the time to be marked up yesterday,
Nah, that's just standard practice. The bill is announced one week and "held over" for markup the next week. Happens with nearly every bill that gets a markup, so there was no delay... just standard practice.
But, yes, people need to speak up LOUDLY to get this stopped. This has serious traction.
Re: Disallowed
There are a growing number of people, both in the U.S. and the western world, who are increasingly intolerant of any speech with which they disagree.
You are confusing "more speech" with intolerance. Neil Young engaged in speech to protest decisions he disagreed with. Spotify engaged in its speech determining who it wished to align with.
It's the marketplace of ideas, Koby.
They view themselves ever being exposed to an opposing viewpoint as an affront to their very existence.
I have seen no one doing this.
And they even view it as unconscionable that anyone else would be permitted to listen to these opposing viewpoints, even if those others actively sought out the material.
I see no one doing this.
These Individuals are facists.
Expressing their views makes them fascists? No. But people -- such as yourself -- who claim to have the right to force private companies to host speech they disagree with, sure seem to toe the line.
Re:
No, the paid part doesn't change the calculus. It's still 3rd party content. See the Drudge case I pointed to...
Re:
Much, much, much more likely.
Re: The both sides fallacy
The "both sides fallacy" is when you attempt to minimize one group's actions by saying others do it too. I'm not doing that. I'm pointing out, accurately, that there are both Democrats and Republicans shitting on the 1st Amendment. Because there are.
Re: Huh?
As another commenter noted above, it appears that there are different kinds of WeChat accounts, and they chose one that required registration by a Chinese citizen because it appeared to enable them to do more push notifications. Silly politicians...
Re: Re: Re: Non Interference
has never been enforced against left wing advocates from what I've seen
Lol. Dude. Just because you live in your own chamber of stupidity, don't think that things don't happen outside of that world.
Fact is that it happens way more to marginalized individuals and groups -- it's just that they don't have a large enough megaphone to play victim like the poseurs you follow do.
/div>More comments from Mike Masnick >>