Back in 2014 when Facebook bought Oculus, there were the usual pre-merger promises that nothing would really change, and that Facebook wouldn't erode everything folks liked about the independent, Kickstarted product. Oculus founder Palmer Luckey, who has since moved on to selling border surveillance tech to the Trump administration, made oodles of promises to that effect before taking his money and running toward the sunset. Among them was a pledge that users would never be forced to use a Facebook login just to use their VR headsets and games, and that the company wouldn't track their behavior for advertising.
As with every major merger, those promises didn't mean much. This week, Facebook and Oculus announced that users will soon be forced to use a Facebook account if they want to keep using Oculus hardware, so the company can track its users for advertising purposes. The official Oculus announcement tries to pretend that this is some sort of gift to the end user, instead of just an obvious way for Facebook to expand its behavioral advertising empire:
"Giving people a single way to log into Oculus—using their Facebook account and password— will make it easier to find, connect, and play with friends in VR. We know that social VR has so much more to offer, and this change will make it possible to integrate many of the features people know and love on Facebook."
And while users won't be forced to log in with a Facebook account until 2023, those who don't want Facebook tracking their every waking online movement will be out of luck. Meaning that you may not be able to use your pricey hardware -- or the software you've been accumulating -- unless you agree to join the Facebook universe:
"After January 1, 2023, we will end support for Oculus accounts. If you choose not to merge your accounts at that time, you can continue using your device, but full functionality will require a Facebook account. We will take steps to allow you to keep using content you have purchased, though we expect some games and apps may no longer work. This could be because they include features that require a Facebook account or because a developer has chosen to no longer support the app or game you purchased. All future unreleased Oculus devices will require a Facebook account, even if you already have an Oculus account."
The changes will also impact Oculus Quest's "Link" feature, which lets users connect the standalone VR headset to a PC to expand its functionality. It also raises the question: what happens if you get banned by Facebook thanks to its incoherent and inconsistent moderation strategies? You suddenly can't use your VR headset because Facebook's algorithms stupidly banned you for posting photos of yourself breastfeeding?
This being Facebook, there's no mention of any way to prevent Facebook from tracking the entirety of your behavior while using a VR headset. Given all the justified criticism of Facebook, the consumer response (especially among those who liked Oculus but have tried to avoid Facebook) is about what you'd expect over at forums like the Oculus subreddit.
The whole thing is very tone deaf, and very... Facebook. Facebook's need to track and monetize Oculus user behavior bulldozed over any concerns the company may have had about interoperability, or any valid concerns that this could simply drive customers to more open competitors. It's yet another example of users buying a product and features they believe they own, only to have functionality eroded or the terms of use dramatically modified down the road. It's also yet another example of how, more often than not, the promises made ahead of major U.S. mergers mean absolutely nothing.
In today's insanity, Facebook's top lobbyist in India, Ankhi Das, has filed a criminal complaint against journalist Awesh Tiwari. Tiwari put up a post on Facebook over the weekend criticizing Das, citing a giant Wall Street Journal article that is focused on how Facebook's rules against hate speech have run into challenges regarding India's ruling BJP party. Basically, the article said that Facebook was not enforcing its hate speech rules when BJP leaders violated the rules (not unlike similar stories regarding Facebook relaxing the rules for Trump supporters in the US).
The original article names Das, claiming that she pushed for Facebook not to enforce its rules against BJP leaders because doing so could hurt Facebook's overall interests in India. Tiwari called out Das' role in his Facebook post, and it appears Das took offense to that:
In her complaint to the police, Das asked for an investigation to be opened against Tiwari for sexual harassment, defamation, and criminal intimidation. If charged and convicted, Tiwari could face fines as well as up to two years in prison for sexual harassment, up to two years for defamation, and up to seven years in prison for criminal intimidation, according to the Indian penal code.
In her complaint, Das said: “Since August 14, I have been receiving violent threats to my life and body, and I am extremely disturbed by the relentless harassment meted out to me by the accused persons. The content, which even includes my photograph, is evidently threatening to my life and body and I fear for my safety as well as that of my family members. The content also maligns my reputation based on a news article and I am subjected to name-calling, cyber bullying and eve-teasing online.”
Even if this were true, teasing and name-calling should not be criminal offenses. But, even more to the point, why is Tiwari being blamed for the actions of others? He simply put up a post citing the WSJ article and criticizing Das. Das has all the power in the world to simply... respond on Facebook (the company she works for).
As the Committee to Protect Journalists notes, this is absurd. Das should drop these claims and apologize. I certainly recognize the impossible position Facebook is put in with regards to content moderation, and completely understand that there are multiple tradeoffs at play in how Facebook chooses to handle moderation of politicians around the globe. But none of that justifies taking out a criminal complaint. And Facebook's response here is utter nonsense:
A Facebook representative told CPJ via email that the social media outlet takes the safety and security of its employees seriously, but said it does not comment on individual employee matters.
Sure, the complaint was taken out by Das, not Facebook, but Das is a representative of Facebook and this action reflects directly on the company.
Tiwari has now filed a counter-complaint against Das, which is not a great look either. His argument is essentially the mirror image of Das's: since news of her criminal complaint came out, he has faced threatening comments as well.
It seems like this is just a typical internet-style flame war, except the participants all think the police should be involved and their critics should go to jail. And that's crazy. Take a breath everyone, drop the criminal complaints, and move on.
Every person in Myanmar above the age of 10 has lived part, if not most, of their life under a military dictatorship characterized by an obsession with achieving autonomy from international influences. Before the economic and political reforms of the past decade, Myanmar was one of the most isolated nations in the world. The digital revolution that has reshaped nearly every aspect of human life over the past half-century was something the average Myanmar person had no personal experience with.
Recent reforms brought an explosion of high hopes and technological access, and Myanmar underwent a digital leapfrog, with internet access jumping from nearly zero percent in 2015 to over 40 percent in 2020. At 27 years old, I remember living in a Yangon where having a refrigerator was considered high tech, and now there are 10-year-olds making videos on TikTok.
Everyone was excited for Myanmar's digital revolution to spur the economic and social changes needed to transform the country from a pariah state into the next economic frontier. Tourists, development aid, and economic investment poured into the country. The cost of SIM cards dropped from around 1,000 US dollars in 2013 to a little over 1 dollar today.
This dramatic price drop was paired with a glut of relatively affordable smartphones and phone carriers offering data packages that made social media platforms like Facebook free, or nearly free, to use. This led to the current situation where about 21 million of the 22 million people using the internet in Myanmar are on Facebook. Facebook became the main conduit through which people accessed the internet, and it is now used for nearly every online activity, from selling livestock and watching porn to reading the news and discussing politics.
Then, following the exodus of over 700,000 Rohingya people from Myanmar’s war-torn Rakhine State, Facebook was accused of enabling a genocide.
The ongoing civil wars in the country and the state violence against the Rohingya, characterized by the UN as ethnic cleansing with genocidal intent, put a spotlight on the potential for harm brought on by digital connectivity. Given its market dominance, Facebook has faced great scrutiny in Myanmar for the role social media has played in normalizing, promoting, and facilitating violence against minority groups.
Facebook was, and continues to be, the favored tool for disseminating hate speech and misinformation against the Rohingya people, Muslims in general, and other marginalized communities. Despite repeated warnings from civil society organizations in the country, Facebook failed to address the new challenges with the urgency and level of resources needed during the Rohingya crisis, and failed to even enforce its own community standards in many cases.
To be sure, there have been improvements in recent years, with the social media giant appointing a Myanmar focused team, expanding their number of Myanmar language content reviewers, adding minority language content reviewers, establishing more regular contact with civil society, and devoting resources and tools focused on limiting disinformation during Myanmar’s upcoming election. The company also removed the accounts of Myanmar military officials and dozens of pages on Facebook and Instagram linked to the military for engaging in "coordinated inauthentic behavior." The company defines "inauthentic behavior" as "engag[ing] in behaviors designed to enable other violations under our Community Standards," through tactics such as the use of fake accounts and bots.
Recognizing the seriousness of this issue, everyone from the EU to telecommunications companies to civil society organizations have poured resources into digital literacy programs, anti-hate-speech campaigns, social media monitoring, and advocacy to try and address this issue. Overall, the focus of much of this programming is on what Myanmar and the people of Myanmar lack—rule of law, laws protecting free speech, digital literacy, knowledge of what constitutes hate speech, and resources to fund and execute the programming that is needed.
In the frenzy of the desperate firefighting by organizations on the ground, less attention has been given to larger systemic issues that are contributing to the fire.
There is a need to pay greater attention to those coordinated groups that are working to spread conspiracy theories, false information, and hatred to understand who they are, who is funding them, and how their work can be disrupted—and, if necessary, penalized.
There is a need to reevaluate how social media platforms are designed in a way that incentivizes and rewards bad behavior.
There is also a need to question how much blame we want to assign to social media companies, and whether it is to the overall good to give them the responsibility, and therefore power, to determine what is and isn't acceptable speech.
Finally, there is a need to ask ourselves about alternatives we can build, when many governments have proven themselves more than willing to surveil and prosecute netizens under the guise of health, security, and penalizing hate speech.
It is dangerous to give private, profit-driven multinational corporations the power to draw the line between hate speech and free speech, just as it is dangerous to give that same power to governments, especially in this time of rising ethno-nationalist sentiment around the globe and the increasing willingness of governments to overtly and covertly gather as much data as possible to use against those they govern. We can see from the ongoing legal proceedings against Myanmar in international courts regarding the Rohingya and other ethnic minorities, and from statements by UN investigative bodies on Myanmar that Facebook has failed to release to them evidence of serious international crimes, that neither company policies nor national laws are enough to ensure safety, justice, and dignity for vulnerable populations.
The solution to all this, as unsexy as it sounds, is a multifaceted, multi-stakeholder, long-term effort to build strong legal and cultural institutions, one that disperses the power and responsibility to create and maintain safe and inclusive online spaces among governments, individuals, the private sector, and civil society.
Aye Min Thant is the Tech for Peace Manager at Phandeeyar, an innovation lab which promotes safer and more inclusive digital spaces in Myanmar. Formerly, she was a Pulitzer Prize winning journalist who covered business, politics, and ethno-religious conflicts in Myanmar for Reuters. You can follow her on Twitter @ma_ayeminthant.
This article was developed as part of a series of papers by the Wikimedia/Yale Law School Initiative on Intermediaries and Information to capture perspectives on the global impacts of online platforms’ content moderation decisions. You can read all of the articles in the series here, or on their Twitter feed @YaleISP_WIII.
I know that it's become accepted wisdom among some that the various social media platforms have an "anti-conservative bias" in how they moderate content. However, we've yet to see any evidence to actually support such a claim. Indeed, one study that has been pointed to frequently seemed to show that Twitter, at least, had an anti-Nazi and anti-troll policy -- and unless you think "conservatives" are synonymous with Nazis and trolls, that doesn't really prove very much. Of course, there was another report that came out around that time noting that some Republican politicians' accounts were indistinguishable from Nazi accounts -- so... who knows?
Either way, the narrative has continued that somehow all the social media companies are somehow "unfair" to "conservatives" (which does not seem to have anything to do with actual conservative values, but mainly whether or not they support the current President). Indeed, a big part of the "antitrust" hearing a few weeks back was Republican Congressmen ranting and raving about the unfair treatment they and their friends receive on Facebook (though, at least one Congressional Rep confused Facebook and Twitter).
But, again, if anything, all of the evidence has shown the opposite to be true on Facebook. Pages and individuals who support the President (whether or not you consider that to be "conservative" is up to you) seem to do much better than others on the platform. And now, new reports suggest that Facebook has bent over backwards to appease those pages, even when they break the rules. Indeed, according to an NBC report looking at internal documents, Facebook treated pages that support the President differently, giving them much more leeway than other users:
According to internal discussions from the last six months, Facebook has relaxed its rules so that conservative pages, including those run by Breitbart, former Fox News personalities Diamond and Silk, the nonprofit media outlet PragerU and the pundit Charlie Kirk, were not penalized for violations of the company’s misinformation policies.
Facebook's fact-checking rules dictate that pages can have their reach and advertising limited on the platform if they repeatedly spread information deemed inaccurate by its fact-checking partners. The company operates on a "strike" basis, meaning a page can post inaccurate information and receive a one-strike warning before the platform takes action. Two strikes in 90 days places an account into “repeat offender” status, which can lead to a reduction in distribution of the account’s content and a temporary block on advertising on the platform.
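To make the mechanics a bit more concrete, here's a minimal sketch in Python of how a "two strikes in 90 days" repeat-offender check could work. The names and logic are entirely hypothetical -- this is a toy model of the rule as described above, not Facebook's actual code:

    from datetime import datetime, timedelta

    # Toy model of the policy described above -- not Facebook's implementation.
    REPEAT_OFFENDER_WINDOW = timedelta(days=90)
    REPEAT_OFFENDER_THRESHOLD = 2

    def is_repeat_offender(strike_dates, now):
        """Return True if enough strikes fall within the trailing 90-day window."""
        recent = [d for d in strike_dates if now - d <= REPEAT_OFFENDER_WINDOW]
        return len(recent) >= REPEAT_OFFENDER_THRESHOLD

    # A page that picked up strikes on June 1 and July 15 would be a "repeat
    # offender" in early August -- unless, as the leaked documents describe,
    # someone deletes a strike during an escalation review.
    strikes = [datetime(2020, 6, 1), datetime(2020, 7, 15)]
    print(is_repeat_offender(strikes, now=datetime(2020, 8, 7)))       # True
    print(is_repeat_offender(strikes[1:], now=datetime(2020, 8, 7)))   # False once a strike is removed

The second call is effectively what the leaked documents describe happening: strikes were quietly deleted during review, so the threshold was never crossed.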
Of course, it's noteworthy to see PragerU, especially, getting special treatment, since that company has basically built its reputation by playing the victim with regards to social media content moderation. Indeed, it lost its ridiculous lawsuit against YouTube, but that hasn't quieted down the site's founder, Dennis Prager, who continues to whine about social media censorship for his "conservative" views.
Of course, the truth now appears to be that he's the beneficiary of... a kind of affirmative action.
In another case in late May, a Facebook employee filed a misinformation escalation for PragerU, after a series of fact-checking labels were applied to several similar posts suggesting polar bear populations had not been decimated by climate change and that a photo of a starving animal was used as a “deliberate lie to advance the climate change agenda.” This claim was fact-checked by one of Facebook’s independent fact-checking partners, Climate Feedback, as false and meant that the PragerU page had “repeat offender” status and would potentially be banned from advertising.
A Facebook employee escalated the issue because of “partner sensitivity” and mentioned within that the repeat offender status was “especially worrisome due to PragerU having 500 active ads on our platform,” according to the discussion contained within the task management system and leaked to NBC News.
After some back and forth between employees, the fact check label was left on the posts, but the strikes that could have jeopardized the advertising campaign were removed from PragerU’s pages.
Facebook seems to apply affirmative action to help aggrieved grifters who support the President. And, really, part of the argument many have made is that this was the point of these sites and users whining all this time. They knew they were spewing bullshit, and ran the risk of getting penalized, but if they pre-whined about it, perhaps they'd get special treatment. And now that's exactly what's happened.
The list and descriptions of the escalations, leaked to NBC News, showed that Facebook employees in the misinformation escalations team, with direct oversight from company leadership, deleted strikes during the review process that were issued to some conservative partners for posting misinformation over the last six months. The discussions of the reviews showed that Facebook employees were worried that complaints about Facebook's fact-checking could go public and fuel allegations that the social network was biased against conservatives.
The removal of the strikes has furthered concerns from some current and former employees that the company routinely relaxes its rules for conservative pages over fears about accusations of bias.
Two current Facebook employees and two former employees, who spoke anonymously out of fear of professional repercussions, said they believed the company had become hypersensitive to conservative complaints, in some cases making special allowances for conservative pages to avoid negative publicity.
Indeed, Facebook seems so sensitive to this issue that it has since fired an employee who collected the information showing that these Trump-supporting pages got special treatment.
Facebook, of course, is free to manage its platform the way it wishes to, but it is somewhat amusing, at least, to think that if folks like Senator Josh Hawley got their way on some of their anti-Section 230 bills, Facebook would actually open itself up to lawsuits from aggrieved parties (such as those who oppose the President's agenda) who didn't get that same "beneficial" treatment...
Summary: Though social media networks take a wide variety of evolving approaches to their content policies, most have long maintained relatively broad bans on nudity and sexual content, and have heavily employed automated takedown systems to enforce these bans. Many controversies have arisen from this, leading some networks to adopt exceptions in recent years: Facebook now allows images of breastfeeding, childbirth, post-mastectomy scars, and post-gender-reassignment surgery, while Facebook-owned Instagram is still developing its exception for nudity in artistic works. However, even with exceptions in place, the heavy reliance on imperfect automated filters can obstruct political and social conversations and block the sharing of relevant news reports.
One such instance occurred on June 11, 2020, following controversial comments by Australian Prime Minister Scott Morrison, who stated in a radio interview that “there was no slavery in Australia”. This sparked widespread condemnation and rebuttals from both the public and the press, pointing to the long history of enslavement of Australian Aboriginals and Pacific Islanders in the country. One Australian Facebook user posted a late 19th century photo from the State Library of Western Australia, depicting Aboriginal men chained together by their necks, along with a statement:
Kidnapped, ripped from the arms of their loved ones and forced into back-breaking labour: The brutal reality of life as a Kanaka worker - but Scott Morrison claims ‘there was no slavery in Australia’
Facebook removed the post and image for violating its policy against nudity, although no genitals were visible, and restricted the user’s account. The Guardian Australia contacted Facebook to determine if this decision was made in error and, the following day, Facebook restored the post and apologized to the user, explaining that it was an erroneous takedown caused by a false positive in the automated nudity filter. However, at the same time, Facebook continued to block posts that included The Guardian’s news story about the incident, which featured the same photo, and placed 30-day suspensions on some users who attempted to share it. Facebook’s community standards report shows that in the first three months of 2020, 39.5 million pieces of content were removed for nudity or sexual activity, over 99% of those takedowns were automated, 2.5 million appeals were filed, and 613,000 of the takedowns were reversed.
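For a sense of scale, here is some quick back-of-the-envelope arithmetic on those reported figures (this is just the numbers above, under the rough simplifying assumption that reversals came via appeals):

    takedowns = 39_500_000   # pieces removed for nudity/sexual activity, Q1 2020
    appeals = 2_500_000      # appeals filed
    reversals = 613_000      # takedowns reversed

    print(f"{appeals / takedowns:.1%} of takedowns were appealed")        # ~6.3%
    print(f"{reversals / appeals:.1%} of appeals led to a reversal")      # ~24.5%
    print(f"{reversals / takedowns:.2%} of all takedowns were reversed")  # ~1.55%

In other words, roughly one in four appeals succeeded, but only a small fraction of the tens of millions of automated takedowns was ever revisited at all.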
Decisions to be made by Facebook:
Can nudity filters be improved to result in fewer false-positives, and/or is more human review required?
For appeals of automated takedowns, what is an adequate review and response time?
Should automated nudity filters be applied to the sharing of content from major journalistic sources such as The Guardian?
Should questions about content takedowns from major news organizations be prioritized over those from regular users?
Should 30-day suspensions and similar account restrictions be manually reviewed only if the user files an appeal?
Questions and policy implications to consider:
Should automated filter systems be able to trigger account suspensions and restrictions without human review?
Should content that has been restored in one instance be exempted from takedown, or flagged for automatic review, when it is shared again in future in different contexts?
How quickly can erroneous takedowns be reviewed and reversed, and is this sufficient when dealing with current, rapidly-developing political conversations?
Should nudity policies include exemptions for historical material, even when such material does include visible genitals, such as occurred in a related 2016 controversy over a Vietnam War photo?
Should these policies take into account the source of the content?
Should these policies take into account the associated messaging?
Resolution: Facebook’s restoration of the original post was undermined by its simultaneous blocking of The Guardian’s news reporting on the issue. After receiving dozens of reports from its readers that they were blocked from sharing the article and in some cases suspended for trying, The Guardian reached out to Facebook again and, by Monday, June 15, 2020, users were able to share the article without restriction. The difference in response times between the original incident and the blocking of posts is possibly attributable to the fact that the latter came to the fore on a weekend, but this meant that critical reporting on an unfolding political issue was blocked for several days while the subject was being widely discussed online.
Photo Credit: State Library of Western Australia
In New Hampshire, Facebook has been dealing with a pro se lawsuit from the operator of a cafe whose Instagram account was deleted for some sort of terms of service violation (it is never made clear what the violation was, and that seems to be part of the complaint). The Teatotaller cafe in Somersworth, New Hampshire, apparently had and lost an Instagram account. The cafe's owner, Emmett Soldati, first went to a small claims court, arguing that this violated his "contract" with Instagram and cost his cafe revenue. There are all sorts of problems with that, starting with the fact that Instagram's terms of service, like every such site's, say it can remove you for basically any reason, and specifically state:
You agree that we won’t be responsible . . . for any lost profits, revenues, information, or data, or consequential, special, indirect, exemplary, punitive, or incidental damages arising out of or related to [the Terms of Use], even if we know they are possible. This includes when we delete your content, information, or account.
And then there's the Section 230 issue. Section 230 should have wiped the case out nice and quick, as it has in every other case involving a social media account owner getting annoyed at being moderated. And, indeed, it appears that the local court in Dover tossed the case on 230 grounds. Soldati appealed, and somewhat bizarrely, the New Hampshire Supreme Court has overturned that ruling and sent it back to the lower court. That doesn't mean that Facebook will definitely lose, but the ruling is quite remarkable, and an extreme outlier compared to basically every other Section 230 case. It almost reads as if the judges wanted this particular outcome, and then twisted everything they could think of to get there.
To be clear, the judges who heard the case are clearly well informed on Section 230, as they cite many of the key cases in the ruling. The ruling says that to be protected by Section 230(c)(1) (the famed "26 words" which say a website can't be held liable for the actions of its users), there's a three-pronged test. The website has to be an interactive computer service -- which Facebook clearly is. The content at issue has to come from another information content provider -- which it does, since Teatotaller provided it. That leaves the last bit: does the lawsuit seek to hold Facebook liable as a publisher or speaker?
Let's take a little journey first. One of the things that often confuses people about Section 230 is the interplay between (c)(1) and (c)(2) of the law. (c)(1) is the websites not liable for their users' content part, and (c)(2) is the no liability for any good faith moderation decisions part. But here's the weird thing: in over two decades of litigating Section 230, nearly every time moderation decisions are litigated, the website is considered protected under (c)(1) for those moderation decisions. This used to strike me as weird, because you have (c)(2) sitting right there saying no liability for moderation. But, as many lawyers have explained it, it kinda makes sense. (c)(1)'s language is just cleaner, and courts have reasonably interpreted things to say that holding a company liable for its moderation choices is the same thing as holding it liable as the "publisher."
So, in this case (as in many such cases), Facebook didn't even raise the (c)(2) issue, and stuck with (c)(1), assuming that, like in every other case, that would suffice. Except... this time it didn't. Or at least not yet. But the reason it didn't... is... weird. It basically misinterprets one old Section 230 case in the 9th Circuit, the somewhat infamous Barnes v. Yahoo case. That was the case where the court said that Yahoo lost its Section 230 protections because Barnes had called up Yahoo and the employee she spoke to promised her that she would "take care of" the issue Barnes was complaining about. The court there said that, thanks to "promissory estoppel," this promise overrode Section 230's protections. In short: when the company employee promised to do something, she was forming a new contract.
Barnes is one of the cases most frequently cited by people trying to get around Section 230, and it almost never works, because companies know better than to make promises like the one that happened in the Barnes case. Except here, the judges say that the terms of service themselves may be that promise, and thus can be read to mean that the terms of service overrule Section 230:
However, to the extent that Teatotaller’s claim is based upon specific promises that Facebook made in its Terms of Use, Teatotaller’s claim may not require the court to treat Facebook as a publisher. See Barnes, 570 F.3d at 1107, 1109 (concluding that the defendant website was not entitled to immunity under the CDA for the plaintiff’s breach of contract claim under a theory of promissory estoppel because “the duty the defendant allegedly violated springs from a contract—an enforceable promise—not from any non-contractual conduct or capacity of the defendant”); Hiam v. Homeaway.com, Inc., 267 F. Supp. 3d 338, 346 (D. Mass. 2017) (determining that “the Plaintiffs are able to circumvent the CDA” as to certain claims by asserting that “through [the defendant’s] policies, [the defendant] promises (1) a reasonable investigatory process into complaints of fraud and (2) that the website undertakes some measure of verification for each posting”), aff’d on other grounds, 887 F.3d 542 (1st Cir. 2018).
This is not a total win for Teatotaller, as the court basically says there isn't enough information to know whether the claims are based on promises within the terms of service, or if it's based on Facebook's decision to remove the account (in which case, Facebook would be protected by 230). And thus, it remands the case to try to sort that out:
Thus, because it is not clear on the face of Teatotaller’s complaint and objection whether prong two of the CDA immunity test is met, we conclude that the trial court erred by dismissing Teatotaller’s breach of contract claim on such grounds. See Pirozzi, 913 F. Supp. 2d at 849. We simply cannot determine based upon the pleadings at this stage in the proceeding whether Facebook is immune from liability under section 230(c)(1) of the CDA on Teatotaller’s breach of contract claim. See id. For all of the above reasons, therefore, although Teatotaller’s breach of contract claim may ultimately fail, either on the merits or under the CDA, we hold that dismissal of the claim is not warranted at this time.
So, there are still big reasons why this case against Facebook is likely to fail. On remand, the court may recognize that the issue is just straight up moderation and dismiss again on 230 grounds. Or, it may say that it's based on the terms of service and yet still decide that nothing Facebook did violated those terms. Facebook is thus likely to prevail in the long run.
But... this ruling opens up a huge potential hole in Section 230 (in New Hampshire at least), saying that what you put into your terms of service could, in some cases, overrule Section 230, leading you to have to defend whether or not your moderation decision somehow violated your terms.
That sound you hear is very, very expensive lawyers now combing through terms of service on every dang platform out there to figure out (1) how to shore them up to avoid this problem as much as possible, or (2) how to start filing a bunch of sketchy lawsuits in New Hampshire to exploit this new loophole.
“I think it’s kind of incredible,” said Soldati, who represented himself as a pro se litigant. “I think this is a very powerful message that if you feel a tech company has trampled or abused your rights and you don’t feel anyone is listening ... you can seek justice and it will matter.”
That's... not quite the issue at hand. Your rights weren't trampled. Your account was shut down. That's all. But in fighting this case, there may be a very dangerous hole now punched into Section 230, at least in New Hampshire, and it could create a ton of nuisance litigation. And, that even puts business owners like Soldati at risk. 230 protects him and the comments people make on his (new) Instagram account. But if he promises something... he may wipe out those protections.
How was your Wednesday? I spent five and a half hours of mine watching the most inane and stupid hearing put on by Rep. David Cicilline and the House Judiciary Committee's Subcommittee on Antitrust, Commercial & Administrative Law. The hearing was billed as a big antitrust showdown, in which the CEOs of Google, Facebook, Apple, and Amazon would all answer questions regarding an antitrust investigation into those four companies. If you are also a glutton for punishment, you can now watch the whole thing yourself too (though at least you can watch it at 2x speed). I'll save you a bit of time though: there was very little discussion of actual antitrust. There was plenty of airing of grievances, however, frequently with little to no basis in reality.
If you want to read my realtime reactions to the nonsense, there's a fairly long Twitter thread. If you want a short summary, it's this: everyone who spoke is angry about some aspect of these companies, but (and this is kind of important) there is no consensus about why, and the reasons for their anger are often contradictory. The most obvious example of this played out in the discussion of the decision earlier this week by YouTube and Facebook (and Twitter) to take down an incredibly ridiculous Breitbart video showing a group of "doctors" spewing dangerous nonsense regarding COVID-19 and how to treat it (and how not to treat it). The video went viral, and a whole bunch of people were sharing it, even though one of the main stars apparently believes in Alien DNA and Demon Sperm. Also, when Facebook took down the video, she suggested that God would punish Facebook by crashing its servers.
However, during the hearing, there were multiple Republican lawmakers who were furious at Facebook and YouTube for removing such content, and tried to extract promises that the platforms would no longer "interfere." Amusingly (or, not really), at one point, Jim Sensenbrenner even demanded that Mark Zuckerberg answer why Donald Trump Jr.'s account had been suspended for sharing such a video -- which is kind of embarrassing since it was Twitter, not Facebook, that temporarily suspended Junior's account (and it was for spreading disinfo about COVID, which that video absolutely was). Meanwhile, on the other side of the aisle, Rep. Cicilline was positively livid that 20 million people still saw that video, and couldn't believe that it took Facebook five full hours to decide to delete the video.
So, you had Republicans demanding these companies keep those videos up, and Democrats demanding they take the videos down faster. What exactly are these companies supposed to do?
Similarly, Rep. Jim Jordan made some conspiracy theory claims saying that Google tried to help Hillary Clinton win in 2016 (the fact that she did not might raise questions about how Jordan could then argue they have too much power, but...) and demanded that they promise not to "help Biden." On the other side of the aisle, Rep. Jamie Raskin complained about how Facebook allowed Russians and others to swing the election to Trump, and demanded to know how Facebook would prevent that in the future.
So... basically both sides were saying that if their tools are used to influence elections, bad things might happen. It just depends on which side wins to see which side will want to do the punishing.
Nearly all of the Representatives spent most of their time grandstanding -- rarely about issues related to antitrust -- and frequently demonstrating their own technological incompetence. Rep. Greg Steube whined that his campaign emails were being filtered to spam, and argued that it was Gmail unfairly handicapping conservatives. His "evidence" for this was that it didn't happen before he joined Congress last year, and that he'd never heard of it happening to Democrats (a few Democrats noted later that it does happen to them). Also, he said his own father found his campaign ads in spam, and so clearly it wasn't because his father marked them as spam. Sundar Pichai had to explain to Rep. Steube that (1) Google doesn't spy on emails, so it has no way of knowing that the emails were between a father and son, and (2) emails go to spam based on a variety of factors, including how other users rate them. In other words, Steube's own campaign is bad at email, and his constituents are probably trashing the emails. It's not anti-conservative bias.
Rep. Ken Buck went on an unhinged rant, claiming that Google was in cahoots with communist China and against the US government.
On that front, Rep. Jim Jordan put on quite a show, repeatedly pointing to various content moderation decisions as "proof" of anti-conservative bias. Nearly every one of the examples he raised was misrepresented. And when a few other Reps pointed out that he was resorting to fringe conspiracy theories, he started shouting and had to be told repeatedly to stop interrupting (and to put on his mask). Later, at the end of the hearing, he went on a bizarre rant about "cancel culture" and demanded that each of the four CEOs state whether or not they thought cancel culture was good or bad. What that has to do with their companies, I do not know. What that has to do with antitrust, I have even less of an idea.
A general pattern on both sides of the aisle was that a Representative would describe a news story or scenario regarding one of the platforms in a way that misrepresented what actually happened and painted the companies in the worst possible light, and then would ask a "have you stopped beating your wife?" type of question. Each of the four CEOs, when put on the spot like that, would say something along the lines of "I must respectfully disagree with the premise..." or "I don't think that's an accurate representation..." at which point (like clockwork) they were cut off by the Representative, with a stern look, and something along the lines of "so you won't answer the question?!?" or "I don't want to hear about that -- I just want a yes or no!"
It was... ridiculous -- in a totally bipartisan manner. Cicilline was just as bad as Jordan in completely misrepresenting things and pretending he'd "caught" these companies in some bad behavior, based on characterizations that were not even remotely accurate. This is not to say the companies haven't done questionable things, but neither Cicilline nor Jordan demonstrated any knowledge of what those things were, preferring to push out fringe conspiracy theories. Others pushing fringe wacko theories included Rep. Matt Gaetz on the Republican side (who was all over the map with just wrong things, including demanding that the platforms support law enforcement) and Rep. Lucy McBath on the Democratic side, who seemed very, very confused about the nature of cookies on the internet. She also completely misrepresented a situation regarding how Apple handled a privacy issue, suggesting that protecting users' privacy by blocking certain apps that had privacy problems was anti-competitive.
There were a few Representatives who weren't totally crazy. On the Republican side, Rep. Kelly Armstrong asked some thoughtful questions about reverse warrants (not an antitrust issue, but an important 4th Amendment one) and about Amazon's use of competitive data (but... he also used the debunked claim that Google tried to "defund" The Federalist, and used the story about bunches of DMCA notices going to Twitch to say that Twitch should be forced to pre-license all music, a la the EU Copyright Directive -- which, of course, would harm competition, since only a few companies could actually afford to do that). On the Democratic side, Rep. Raskin rightly pointed out the hypocrisy of Republicans who support Citizens United, but were mad that companies might politically support candidates they don't like (what that has to do with antitrust is beyond me, but it was a worthwhile point). Rep. Joe Neguse asked some good questions that were actually about competition, but for which there weren't very clear answers.
All in all, some will say it was just another typical Congressional hearing in which Congress displays its technological ignorance. And that may be true. But it is disappointing. What could have been a useful and productive discussion with these four important CEOs was anything but. What could have been an actual exploration of questions around market power and consumer welfare... was not. It was all just a big performance. And that's disappointing on multiple levels. It was a waste of time, and will be used to reinforce various narratives.
But, from this end, the only narrative it reinforced was that Congress is woefully ignorant about technology and how these companies operate. And they showed few signs of actually being curious in understanding the truth.
Summary: Social media platforms are constantly seeking to remove racist, bigoted, or hateful content. Unfortunately, these efforts can cause unintended collateral damage to users who share surface similarities to hate groups, even though many of these users take a firmly anti-racist stance.
A recent attempt by Facebook to remove hundreds of pages associated with bigoted groups resulted in the unintended deactivation of accounts belonging to historically anti-racist groups and public figures.
The unintentional removal of non-racist pages occurred shortly after Facebook engaged in a large-scale deletion of accounts linked to white supremacists, as reported by OneZero:
Hundreds of anti-racist skinheads are reporting that Facebook has purged their accounts for allegedly violating its community standards. This week, members of ska, reggae, and SHARP (Skinheads Against Racial Prejudice) communities that oppose white supremacy are accusing the platform of wrongfully targeting them. Many believe that Facebook has mistakenly conflated their subculture with neo-Nazi groups because of the term “skinhead.”
The suspensions occurred days after Facebook removed 200 accounts connected to white supremacist groups and as Mark Zuckerberg continues to be scrutinized for his selective moderation of hate speech.
Dozens of Facebook users from around the world reported having their accounts locked or their pages disabled due to their association with the "skinhead" subculture. This subculture dates back to the 1960s and predates the racist/fascist tendencies now commonly associated with that term.
Facebook’s policies have long forbidden the posting of racist or hateful content. Its ban on "hate speech" encompasses the white supremacist groups it targeted during its purge of these accounts. The removals of accounts not linked to racism -- but linked to the term "skinhead" -- were accidental, presumably triggered by a term now commonly associated with hate groups.
Questions to consider:
How should a site handle the removal of racist groups and content?
Should a site use terms commonly associated with hate groups to search for content/accounts to remove?
If certain terms are used to target accounts, should moderators be made aware of alternate uses that may not relate to hateful activity?
Should moderators be asked to consider the context surrounding targeted terms when seeking to remove pages or content?
Should Facebook provide users whose accounts are disabled with more information as to why this has happened? (Multiple users reported receiving nothing more than a blanket statement about pages/accounts "not following Community Standards.")
If context or more information is provided, should Facebook allow users to remove the content (or challenge the moderation decision) prior to disabling their accounts or pages?
Resolution: Facebook's response was nearly immediate. The company apologized to users shortly after OneZero reported the apparently erroneous deletion of non-racist pages. Guy Rosen, Facebook's VP of Integrity, also apologized on Twitter to the author of the OneZero post, saying the company had removed these pages in error during its mass deletion of white supremacist pages/accounts and that it was looking into the error.
One of the most frustrating claims that critics of Section 230 make is that, because of Section 230, the big internet companies have no incentive to deal with awful content (abuse, harassment, bigotry, lies, etc.). Yet, over and over again, we see why that's not at all true. First of all, there's a strong incentive to deal with crap content on your platform because if you don't, your users will go elsewhere. So the userbase itself is an incentive. Then, as we've discussed, there are incentives from advertisers, who don't want their ads showing up next to such junk and can pressure companies to change.
Finally, there are the employees of these companies. While so much of the narrative around internet companies focuses (somewhat ridiculously) on the larger-than-life profiles of their founders/CEOs, the reality is that there are thousands of employees at these companies, many of whom don't want to be doing evil shit or enabling evil shit. And they have influence. Over the past few years, there have been multiple examples of employees revolting and pushing back against company decisions on things like government contracts and surveillance.
And now they're pushing back on the wider impact of these companies, as a Buzzfeed article details: a bunch of employees inside Facebook are getting fed up with the company's well-documented problems, its failure to change, and its failure to take into account its broader impact.
“This time, our response feels different,” wrote Facebook engineer Dan Abramov in a June 26 post on Workplace, the company’s internal communications platform. “I’ve taken some [paid time off] to refocus, but I can’t shake the feeling that the company leadership has betrayed the trust my colleagues and I have placed in them.”
Messages like those from Wang and Abramov illustrate how Facebook’s handling of the president’s often divisive posts has caused a sea change in its ranks and led to a crisis of confidence in leadership, according to interviews with current and former employees and dozens of documents obtained by BuzzFeed News. The documents — which include company discussion threads, employee survey results, and recordings of Zuckerberg — reveal that the company was slow to take down ads with white nationalist and Nazi content reported by its own employees. They demonstrate how the company’s public declarations about supporting racial justice causes are at odds with policies forbidding Facebookers from using company resources to support political matters. They show Zuckerberg being publicly accused of misleading his employees. Above all, they portray a fracturing company culture.
The examples in the Buzzfeed article may not be representative of how all employees feel, nor are they necessarily an indication that Facebook will change its policies one way or the other. They just highlight that pressure to be better, to be responsible, and to build better products comes from all over -- and in Silicon Valley, many employees came up believing (cynically or not) that they're there to change the world for the better. And when they realize they may not be doing that, many will speak out and push back.
And that is likely to have an impact over time: especially when the big tech companies are fighting over top talent, and desperately trying to hire the best engineers possible. If those engineers speak up and speak out, it can create very strong incentives for companies to change and to improve -- all without needing to take an axe to Section 230, which has little to nothing to do with all of this.
Having just criticized the Second Circuit for getting Section 230 (among other things) very wrong, it's worth pointing out an occasion where it got it very right. The decision in Force v. Facebook came out last year, but the Supreme Court recently denied any further review, so it's still ripe to talk about how this case could, and should, bear on future Section 230 litigation.
It is a notable decision, not just in terms of its result upholding Section 230, but in how it cut through much of the confusion that tends to plague discussion regarding Section 230. It brought the focus back to the essential question at the heart of the statute: who imbued the content at issue with its allegedly wrongful quality? That question really is the only thing that matters when it comes to figuring out whether Section 230 applies.
This case was one of the many seeking to hold social media platforms liable for terrorists using them. None of them have succeeded, although for varying reasons. For instance, in Fields v. Twitter, in which we wrote an amicus brief, the claims failed but not for Section 230 reasons. In this case, however, the dismissal of the complaint was upheld on Section 230 grounds.
The plaintiffs put forth several theories about why Facebook should not have been protected by Section 230. Most of them tried to construe Facebook as the information content provider of the terrorists' content, and thus not entitled to the immunity. But the Second Circuit rejected them all.
Ultimately the statute is simple: whoever created the wrongful content is responsible for it, not the party who simply enabled its expression. The only question is who created the wrongful content, and per the court, "[A] defendant will not be considered to have developed third-party content unless the defendant directly and 'materially' contributed to what made the content itself 'unlawful.'" [p. 68].
Section 230 really isn't any more complicated than that. And the Second Circuit clearly rejected some of the ways people often try to make it more complicated.
For one thing, it does not matter that the platform exercised editorial judgment over which user content it displayed. After all, even the very decision to host third-party content at all is an editorial one, and Section 230 has obviously always applied in the shadow of that sort of decision.
The services have always decided, for example, where on their sites (or other digital property) particular third-party content should reside and to whom it should be shown. Placing certain third-party content on a homepage, for example, tends to recommend that content to users more than if it were located elsewhere on a website. Internet services have also long been able to target the third-party content displayed to users based on, among other things, users' geolocation, language of choice, and registration information. And, of course, the services must also decide what type and format of third-party content they will display, whether that be a chat forum for classic car lovers, a platform for blogging, a feed of recent articles from news sources frequently visited by the user, a map or directory of local businesses, or a dating service to find romantic partners. All of these decisions, like the decision to host third-party content in the first place, result in "connections" or "matches" of information and individuals, which would have not occurred but for the internet services' particular editorial choices regarding the display of third-party content. We, again, are unaware of case law denying Section 230(c)(1) immunity because of the "matchmaking" results of such editorial decisions. [p. 66-67]
Nor does it matter that the platforms use algorithms to help automate editorial decisions.
[P]laintiffs argue, in effect, that Facebook's use of algorithms is outside the scope of publishing because the algorithms automate Facebook's editorial decision-making. That argument, too, fails because "so long as a third party willingly provides the essential published content, the interactive service provider receives full immunity regardless of the specific edit[orial] or selection process." [p. 67]
Even if the platform uses algorithms to decide whether to make certain content more "visible," "available," and "usable," that does not count as developing the content. [p. 70]. Nor does simply letting terrorists use its platform to make it a partner in the creation of their content. [p. 65]. The court notes that in cases where courts have found platforms liable as co-creators of problematic content, they had played a much more active role in the development of specific instances of problematic expression than simply enabling it.
Employing this "material contribution" test, we held in FTC v. LeadClick that the defendant LeadClick had "developed" third parties' content by giving specific instructions to those parties on how to edit "fake news" that they were using in their ads to encourage consumers to purchase their weight-loss products. LeadClick's suggestions included adjusting weight-loss claims and providing legitimate-appearing news endorsements, thus "materially contributing to [the content's] alleged unlawfulness." [We] also concluded that a defendant may, in some circumstances, be a developer of its users' content if it encourages or advises users to provide the specific actionable content that forms the basis for the claim. Similarly, in Fair Housing Council v. Roommates.Com, the Ninth Circuit determined that—in the context of the Fair Housing Act, which prohibits discrimination on the basis of sex, family status, sexual orientation, and other protected classes in activities related to housing—the defendant website's practice of requiring users to use pre-populated responses to answer inherently discriminatory questions about membership in those protected classes amounted to developing the actionable information for purposes of the plaintiffs' discrimination claim. [p. 69]
Of course, as the court noted, even in Roommates.com, the platform was not liable for any and all potentially discriminatory content supplied by its users.
[I]t concluded only that the site's conduct in requiring users to select from "a limited set of pre-populated answers" to respond to particular "discriminatory questions" had a content-development effect that was actionable in the context of the Fair Housing Act. [p. 70]
Woven throughout the decision, the court also included an extensive discussion [see, e.g., p. 65-68] about that perpetual red herring: the term "publisher," which keeps creating confusion about the scope of the law. One of the most common misconceptions about Section 230 is that it hinges on some sort of "platform v. publisher" distinction, immunizing only "neutral platforms" and not anyone who would qualify as a "publisher." People often mistakenly believe that a "publisher" is the developer of the content, and thus not protected by Section 230. In reality, however, for purposes of Section 230, platforms and publishers are one and the same, and therefore all protected by it. As the court explains, the term "publisher" just stems from the understanding of the word as "one that makes public" [p. 65], which is the essential function of a platform distributing others' speech, and that distribution is not the same thing as the creation of the offending content. Not even if the platform has made editorial decisions with respect to that distribution. Being a publisher has always entailed exercising editorial judgment over what content to distribute and how, and, as the court makes clear, it is not suddenly a basis for denying platforms Section 230 protection.