New Hampshire Supreme Court Issues Very Weird Ruling Regarding Section 230
from the but-that-makes-no-sense dept
In New Hampshire, Facebook has been dealing with a pro se lawsuit from the operator of a cafe, whose Instagram account was deleted for some sort of terms of service violation (it is never made clear what the violation was, and that seems to be part of the complaint). The Teatotaller cafe in Somersworth, New Hampshire, apparently had and lost an Instagram account. The cafe's owner, Emmett Soldati, first went to a small claims court, arguing that this violated his "contract" with Instagram and cost his cafe revenue. There are all sorts of problems with that, starting with the fact that Instagram's terms of service, like those of every such site, say it can remove you for basically any reason, and specifically say:
You agree that we won’t be responsible . . . for any lost profits, revenues, information, or data, or consequential, special, indirect, exemplary, punitive, or incidental damages arising out of or related to [the Terms of Use], even if we know they are possible. This includes when we delete your content, information, or account.
And then there's the Section 230 issue. Section 230 should have wiped the case out nice and quick, as it has in every other case involving a social media account owner getting annoyed at being moderated. And, indeed, it appears that the local court in Dover tossed the case on 230 grounds. Soldati appealed, and somewhat bizarrely, the New Hampshire Supreme Court has overturned that ruling and sent it back to the lower court. That doesn't mean that Facebook will definitely lose, but the ruling is quite remarkable, and an extreme outlier compared to basically every other Section 230 case. It almost reads as if the judges wanted this particular outcome, and then twisted everything they could think of to get there.
To be clear, the judges who heard the case are clearly well informed on Section 230, as they cite many of the key cases in the ruling. It says that to be protected by Section 230(c)(1) (the famed "26 words" which say a website can't be held liable for the actions of its users), there's a "three-pronged" test. The website has to be an interactive computer service -- which Facebook clearly is. The plaintiff has to be an information content provider, which Teatotaller clearly is. That leaves the last bit: does the lawsuit seek to hold Facebook liable as a publisher or speaker?
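To make the shape of that test a bit more concrete, here is a minimal, purely illustrative sketch in Python. Nothing in it comes from the ruling itself; the class, field names, and function are hypothetical stand-ins for the three prongs, and the point is simply that (c)(1) immunity only attaches when all of them are met.

# Illustrative sketch only: the three-prong Section 230(c)(1) framework
# modeled as an all-or-nothing check. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Claim:
    defendant_is_interactive_computer_service: bool          # e.g., Facebook/Instagram
    content_from_another_information_content_provider: bool  # e.g., Teatotaller's own account
    treats_defendant_as_publisher_or_speaker: bool           # the prong in dispute here

def section_230_c1_bars_claim(claim: Claim) -> bool:
    """(c)(1) immunity attaches only if every prong is satisfied."""
    return (
        claim.defendant_is_interactive_computer_service
        and claim.content_from_another_information_content_provider
        and claim.treats_defendant_as_publisher_or_speaker
    )

# The NH Supreme Court accepted that Facebook is an interactive computer
# service and that the content came from Teatotaller, but could not tell
# from the pleadings whether the breach-of-contract claim really treats
# Facebook "as a publisher" -- hence the remand discussed below.
print(section_230_c1_bars_claim(Claim(True, True, True)))   # True: dismissed on 230 grounds
print(section_230_c1_bars_claim(Claim(True, True, False)))  # False: the claim gets past 230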
Let's take a little journey first. One of the things that often confuses people about Section 230 is the interplay between (c)(1) and (c)(2) of the law. (c)(1) is the part saying websites aren't liable for their users' content, and (c)(2) is the part saying there's no liability for any good faith moderation decisions. But here's the weird thing: in over two decades of litigating Section 230, nearly every time moderation decisions are litigated, the website is considered protected under (c)(1) for those moderation decisions. This used to strike me as weird, because you have (c)(2) sitting right there saying no liability for moderation. But, as many lawyers have explained it, it kinda makes sense. (c)(1)'s language is just cleaner, and courts have reasonably interpreted things to say that holding a company liable for its moderation choices is the same thing as holding it liable as the "publisher."
So, in this case (as in many such cases), Facebook didn't even raise the (c)(2) issue, and stuck with (c)(1), assuming that like in every other case, that would suffice. Except... this time it didn't. Or at least not yet. But the reason it didn't... is... weird. It basically misinterprets one old Section 230 case in the 9th Circuit, the somewhat infamous Barnes v. Yahoo case. That was the case where the court said that Yahoo lost its Section 230 protections because Barnes had called up Yahoo and the employee she spoke to promised her that she would "take care of" the issue that Barnes was complaining about. The court there said that thanks to "promissory estoppel," this promise overrode the Section 230 protections. In short: when the company employee promised to do something, they were forming a new contract.
Barnes is one of the cases most frequently cited by people trying to get around Section 230, and it almost never works, because companies know better than to make promises like the one that happened in the Barnes case. Except here, the judges say that the terms of service themselves may be that promise, and thus the terms of service can be read as overruling Section 230:
However, to the extent that Teatotaller’s claim is based upon specific promises that Facebook made in its Terms of Use, Teatotaller’s claim may not require the court to treat Facebook as a publisher. See Barnes, 570 F.3d at 1107, 1109 (concluding that the defendant website was not entitled to immunity under the CDA for the plaintiff’s breach of contract claim under a theory of promissory estoppel because “the duty the defendant allegedly violated springs from a contract—an enforceable promise—not from any non-contractual conduct or capacity of the defendant”); Hiam v. Homeaway.com, Inc., 267 F. Supp. 3d 338, 346 (D. Mass. 2017) (determining that “the Plaintiffs are able to circumvent the CDA” as to certain claims by asserting that “through [the defendant’s] policies, [the defendant] promises (1) a reasonable investigatory process into complaints of fraud and (2) that the website undertakes some measure of verification for each posting”), aff’d on other grounds, 887 F.3d 542 (1st Cir. 2018).
This is not a total win for Teatotaller, as the court basically says there isn't enough information to know whether the claims are based on promises within the terms of service, or whether they're based on Facebook's decision to remove the account (in which case, Facebook would be protected by 230). And thus, it remands the case to try to sort that out:
Thus, because it is not clear on the face of Teatotaller’s complaint and objection whether prong two of the CDA immunity test is met, we conclude that the trial court erred by dismissing Teatotaller’s breach of contract claim on such grounds. See Pirozzi, 913 F. Supp. 2d at 849. We simply cannot determine based upon the pleadings at this stage in the proceeding whether Facebook is immune from liability under section 230(c)(1) of the CDA on Teatotaller’s breach of contract claim. See id. For all of the above reasons, therefore, although Teatotaller’s breach of contract claim may ultimately fail, either on the merits or under the CDA, we hold that dismissal of the claim is not warranted at this time.
So, there are still big reasons why this case against Facebook is likely to fail. On remand, the court may recognize that the issue is just straight up moderation and dismiss again on 230 grounds. Or, it may say that it's based on the terms of service and yet still decide that nothing Facebook did violated those terms. Facebook is thus likely to prevail in the long run.
But... this ruling opens up a huge potential hole in Section 230 (in New Hampshire at least), saying that what you put into your terms of service could, in some cases, overrule Section 230, leading you to have to defend whether or not your moderation decision somehow violated your terms.
That sound you hear is very, very expensive lawyers now combing through terms of service on every dang platform out there to figure out (1) how to shore them up to avoid this problem as much as possible, or (2) how to start filing a bunch of sketchy lawsuits in New Hampshire to exploit this new loophole.
Meanwhile, Soldati seems to be celebrating a bit prematurely:
“I think it’s kind of incredible,” said Soldati, who represented himself as a pro se litigant. “I think this is a very powerful message that if you feel a tech company has trampled or abused your rights and you don’t feel anyone is listening ... you can seek justice and it will matter.”
That's... not quite the issue at hand. Your rights weren't trampled. Your account was shut down. That's all. But in fighting this case, there may be a very dangerous hole now punched into Section 230, at least in New Hampshire, and it could create a ton of nuisance litigation. And, that even puts business owners like Soldati at risk. 230 protects him and the comments people make on his (new) Instagram account. But if he promises something... he may wipe out those protections.
Filed Under: emmett soldati, new hampshire, section 230, terms of service
Companies: facebook, instagram, teatotaller
Reader Comments
Sounds to me like the judges might have a grudge against “Big Tech” despite their knowledge of 230, and this was their way of “striking back”.
Re:
Maybe, but I can see a point where the TOS could waive 230 protection. Suppose the TOS ended with a phrase like:
"Failure to abide by the terms of service may result in your being banned from this service and your account and data being deleted. However, rest assured that this is the only circumstance under which this will happen and we commit to preservice your account and your data if you do obey the TOS."
Now suppose someone is arbitrarily kicked off the service and has a record of the transactions that a reasonable person would not consider to have violated the TOS (or a proprietor posted something like "I don't care if you didn't violate the TOS, I don't like your face and you're gone!"). Should 230 protect the service company in a case like this?
I’mma stop your premise right there because no lawyer worth a good god’s damn would let that last sentence through. They wouldn’t let a client either leave themselves open to that much liability or cut themselves off from making decisions that fall outside the TOS. I mean, how can a service that says “we won’t ban you if you follow the TOS” ban a racist if the racist uses dogwhistle terms that don’t trip any moderation filters and thus appear to fall squarely within the TOS?
Re:
It makes a lot more sense for something like a straight-up hosting provider, rather than someone like Facebook who wraps your content up in their own framing. The normal interpretation of S230 would mean that any internet service that makes content available to the public can terminate a hosting agreement unilaterally, without warning or penalty, especially since the "good faith" part has been broadened to irrelevance. That's the problem with applying the same rules to dumb conduits and to businesses which do select and promote content and generally edge close to being legal publishers.
As for how to ban racists using dog whistles, that could be solved by referring to the definition of hate speech from some country with a suitably strict standard to satisfy your marketing department.
Re: Re:
"...that could be solved by referring to the definition of hate speech from some country with a suitably strict standard to satisfy your marketing department."
So in other words we're right back to 100% subjective moderation, again?
Coming from a country with "suitably strict standards" of what constitutes hate speech, I can only inform you that it tends to become a court case every damn time anyway.
Re: Re:
Why not? The problems with this hypothetical situation really have nothing to do with section 230, which explicitly protects moderation decisions—even bad ones. That they promised not to make bad decisions doesn't change anything. People bothered by that can sue for breach of contract.
Re: Re: Re:
He is suing for breach of contract, but the normal interpretation of S230(c) is that it overrules any TOS promise not to moderate something, whereas the NH Supreme Court has said that it doesn't.
Re:
These types of terms of service have been found to be unconscionable in rental agreements, where courts have decided a landlord cannot just toss someone out for no reason.
Good Faith
Perhaps this might be a new front of attack against 230: good faith. If the person files suit not because of someone else's speech (which is protected by c(1)), then the defendant may need to provide a good faith c(2) defense. But if the defendant can't provide any reason, and can't explain what happened, now there may be civil liability. How can you claim good faith if you don't know?
C(2) might not be interpreted to say any moderation is allowable, only that certain good faith moderation is allowable.
Please define “good faith” in clear, objective terms that any court can use to chip away at 230.
I’ll wait.
[ link to this | view in chronology ]
Re:
No need to wait. Subjective as it sounds, many courts do make decisions about the term "good faith".
(While this doesn't mean section 230 does or should depend on it, the term itself would not pose as much of a problem as your question implies.)
Re: Re:
"Good faith" is the engine by which the copyright trolling moneymaker machine chugs along. You'll have to forgive the regulars for not being particularly enthusiastic when "good faith" is brought up.
Action: Suspended account. Reason: Because we can.
In which case platforms will always argue both to cover their bases, and as a safety measure will write their TOS terms as wide as possible, making it crystal clear (if they haven't already) that they can give you the boot for whatever reason they feel like, and if you don't like it, don't use the service, no 'reason' needed.
Re: Action: Suspended account. Reason: Because we can.
Exactly, Facebook totally needed to include a c(2) defense in this case.
A smart idea. In this case, however, the company claims that it was paying fees to Instagram. Perhaps it was advertising? I wouldn't know how these things work for companies attempting to deal with social media. In any case, once money is being exchanged, we may be entering into the area of contract law. So now the defendant can't simply provide no reason at all.
In reading the decision, I now think that this court case will be of limited use to those who want to tear down section 230. I'm guessing no money changes hands for most accounts, and so this nuance may prevent any further problems. But this could limit any social media company's ability to charge money for regular users, if they ever wanted to start some kind of subscription.
Except they didn’t because a c(1) defense should’ve handled things.
Then don’t speak on the issue until you do.
Ain’t that a shame~.
Re: Good Faith
Why? You're an asshole and I kicked you out of my restaurant, no specific reason nor explanation needed. Sue me!
Any online platform has the ability to kick people off of their virtual property for any reason or no reason and face zero civil or criminal complaints.
Re: Re: Good Faith
Since the cafe alleges that they paid Instagram, the analogy that I think fits better is "If you pay for a meal at a restaurant up-front, can they kick you out before you get your meal without a reason?"
Re: Re: Re: Good Faith
Yes.
Re: Re: Re: Good Faith
Not only yes, it is not necessary to explain it to the police officers who show up to enforce the action. I have asked them to leave, and they have not; therefore they are trespassers, take them away. I have done this many times, though the paid-in-advance part is unusual.
It does not matter whether the customer makes the disturbance before or after paying; it matters that the disturbance is noticed and acted upon. In my experience, the acting upon coincided with a call to the police. They, again in my experience, appreciated a call that resulted in an arrest. They were always cooperative with our operations, because we always let things go to the point where they had a reason to arrest. I am not pro-police, and I am pro-Constitution, but there is a time and a place for acting out. Ask any parent of a two-year-old, but expect to get a "not anytime nor anyplace" response. Tell that to the two-year-old.
From my perspective as a business operator, constitutional rights were not important with regard to your behavior, at least when the behavior exhibited was detrimental to other guests. We had a business to run. Your 'exhibition' of your 'constitutional rights' has nothing to do with the operation of the business.
Go find a public square where someone will actually listen to you. Good luck!
Re: Re: Re: Re: Good Faith
In my analogy, there was no reason provided. In your counter-example, there is a reason. I think this lines up with both the court decision, and your experience. If you can provide a reason, then the business has a defense. But if you can't provide a reason, then the business will need to suffer the consequences. Better have that c(2) defense ready.
Re: Re: Re: Re: Re: Good Faith
"In my analogy, there was no reason provided"
Yes, your analogies do normally ignore most context in order to make your point.
Do you have any examples of this happening IRL? Or, is it just the usual guy playing the victim to like-minded bigots because he doesn't understand why his own behaviour led to the ban?
As usual, actual verifiable examples of the things you're thinking of would be welcome, because every time they're examined they usually don't say what you pretend they say.
"But if you can't provide a reason, then the business will need to suffer the consequences"
Yes, but this rarely happens. In reality, what usually happens is an abusive bigot gets his Twitter account blocked for his abuse, or a plague rat gets banned from Costco or Wal Mart for refusing to abide by store policies, then goes online to whine that there was no reason for the ban. To non-idiots, the reasons are usually crystal clear.
Re: Re: Re: Re: Re: Re: Good Faith
That was one of the key takeaways from this New Hampshire court case:
"...whose Instagram account was deleted for some sort of terms of service violation (it is never made clear what the violation was, and that seems to be part of the complaint)."
"So, in this case (as in many such cases), Facebook didn't even raise the (c)(2) issue, and stuck with (c)(1), assuming that like in every other case, that would suffice. Except... this time it didn't."
Re: Re: Re: Good Faith
Purchasing a meal does not give you a license to be an asshole! Sue me!
Re: Re: Re: Good Faith
"If you pay for a meal at a restaurant up-front, can they kick you out before you get your meal without a reason?"
Yes, they can. You might have an argument if they didn't refund you before kicking your ass out on the street, but as long as they didn't break any other law in the process they have every right to kick you out.
Now, this doesn't often happen because restaurants like to keep paying customers happy unless they're being really disruptive to staff or other customers. So yet again, Koby, instead of whining that you were kicked out, you need to start asking what kind of asshole you were being that got you kicked out.
Re: Re: Re: Good Faith
"the analogy that I think fits better is "If you pay for a meal at a restaurant up-front, can they kick you out before you get your meal without a reason?""
Well yes. They can kick you out. Most likely they'll have to refund whatever you paid, but they can certainly kick you out.
You appear to be right back to the point where, in order to back your argument, you need to claim that private property should be an invalid concept in the US.
Re: Good Faith
The really strange thing about attacks on 230 is that it didn't add any new provisions to law; it simply codified and made explicit principles which had been a part of common law for a long time. As such, "defeating" 230 won't change anyone's liabilities, just make it more tedious and expensive to establish the lack of liability.
So those who feel genuinely abused and want redress will gain nothing. The only real winners will be those who want to abuse the court system to deter perfectly legal behaviour. (Although, put in those terms, maybe the attacks aren't so strange; it's just that the public reasoning behind them is dishonest.)
Re: Re: Good Faith
S230 does add new provisions when it comes to companies like Facebook, which select particular content (based on what it says), promote it, and so on, which under common law would make them dangerously close to being publishers.
Re: Re: Re: Good Faith
https://www.techdirt.com/articles/20200531/23325444617/hello-youve-been-referred-here-because-youre-wrong-about-section-230-communications-decency-act.shtml
Re: Good Faith
"How can you claim good faith if you dont know?"
Ah, another bad faith argument from Koby.
Twitter know why they banned you and your Klan buddies. You might not agree with their decision but they can sure as hell support it in good faith.
A lose-lose situation
I can't fathom how this legal premise doesn't involve some sort of circular reasoning. If section 230 applies, you're not liable. If section 230 does not apply, then your terms of service will determine if your moderation decision makes you liable. Almost every service's terms of service states they have the right to delete your content/account for any reason they see fit. As long as that statement is there in some way, shape, or form, you're still not liable. In other words, if section 230 applies, we're not liable, but if not, we decide if we're liable, and we decide we're not.
The only way I can see this NOT involving circular reasoning is if it's the judge that determines whether the terms of service, as written and applied to the case, favor or disfavor the outcome alleged in the complaint. When the judge is deciding, it becomes a house of cards. They can interpret those terms in a way that favors the outcome that the judge wants. Once that happens, Section 230 becomes toothless in defending moderation decisions. Combine that with Masnick's Impossibility Theorem ("content moderation at scale is impossible to do well"), and the floodgates of previously frivolous lawsuits will open.
a very dangerous hole now punched
Damn those hanging chads. They keep coming back to haunt us.
https://www.usnews.com/news/articles/2008/01/17/the-legacy-of-hanging-chads
It occurs to me that I may have been drastically underestimating the stupidity of many of these attacks on Section 230.
Section 230 provides immunity against what someone else (say, me) publishes on your website, no?
But most of the complaining has been about what WASN'T published on your website. Who knew you needed protection from liability against what DIDN'T HAPPEN?
Now, people have sued most big papers--say, the New York Tunas or the Washington Possets--about outside editorials they printed. People might sue them over letters to the editor that they printed. But who knew you could sue them over letters to the editor that they DIDN'T print? Failures to adequately commit libel? Misinformation they didn't give out? Or, for that matter, documented information they did not publish? "My granny's cat died, the obituary was in The Backwoods Barely-Time News Journal; why wasn't it in the New York papers? CALL MY LAWYER!" Because cousin Jetta lives in Queens, where TBBTNJ doesn't deliver!
These dogs ain't barking up the wrong tree, they're barking where ain't no tree ever been!
So what am I missing?
Now, THIS case may be different. The Advertise-Free-or-Die dude may have paid for some kind of service, and felt he didn't get it, or didn't like it when he got it. But most of us aren't up that tree. We paid nothing, and nobody promised us nothing. And when that's what we get, that's who we can sue.
I think it would be a violation of free speech, or of the rights of a publisher, to force them to publish every article or letter they receive.
Also, what if the letters contain racist or sexist content, spam ads for illegal services, or obscene content? Or fake news, e.g. "drink beer, it cures any disease," etc.
It's important that any website can ban users who do not follow its terms of service.
Apple, for example, has a long list of rules devs must follow to publish apps on its devices. They ban links in an app like "go to my website" or "pay for a sub here," instead of paying through Apple Pay or using your credit card on the iPhone or iPad.
I think I see the reasoning here
Based on my readings of CDA 230, the legislation is focused on content: sites aren't liable for content posted by users, and can moderate that content as they see fit.
In this case, it's somewhat unclear why the business was kicked off Instagram. I think that's why the appeals court sent this case back. If the ban wasn't due to a content moderation decision, it's probably not appropriate to dismiss this case on CDA 230 grounds.
What should happen is that this case gets dismissed based on the ToS wording. That's a much clearer victory, regardless of why the account was banned.
CDA Section 230 is a very bad law which should be repealed
Why does Tech Dirt ALWAYS seem so giddy whenever Big Tech's conduct is excused by CDA Section 230? Why do you think it is healthy that they have immunity from any wrongdoing? It is such an unusual gift to bestow upon the grandest public communication mechanism in the history of mankind. Anyone can call anyone else a murderer and publish it (as many times as they like) on Big Tech, and there it remains (and duplicates) for all eternity. Big Tech has no duty to vet it for truthfulness; in fact, they will gleefully refuse to do so. Lives and careers are ruined with the click of a few keys, with no proof necessary, and TechDirt thinks all is right with the world. It is the opposite: CDA Section 230 immunity is the dumbest law ever and should never have seen the light of day.
Re: CDA Section 230 is a very bad law which should be repealed
https://www.techdirt.com/articles/20200531/23325444617/hello-youve-been-referred-here-because-youre-wrong-about-section-230-communications-decency-act.shtml
Re: CDA Section 230 is a very bad law which should be repealed
You are acting like a lying murderer
Re: CDA Section 230 is a very bad law which should be repealed
The phone companies allow anyone to spread rumours that someone is a paedophile, which has resulted in innocent people losing their lives. Should they be made responsible, and forced to monitor all communication over their network so that they can shut down rumours?
Re:
And the police have no duty to vet the roads for criminals until something actually happens, because the alternative is traffic stops under ridiculous pretenses of racial profiling.
Because fuck knows why, nobody actually thinks verifying information exists anymore. Apparently the narrative of "fake news" only exists when it's convenient but not when sacking people on shitty excuses and hearsay.