The lawyer-plaintiff is Lenore Albert. She claims a former employee orchestrated a social media attack on her business, including posting fake disparaging reviews on her Yelp page, plus this image (which she claims isn't clearly demarcated as user content rather than Yelp-sourced content)...
Albert also claims that Yelp further screwed up her page when she refused to advertise with it. She sued Yelp for defamation, tortious interference and intentional infliction of emotional distress. The lower court granted Yelp’s anti-SLAPP motion. The appeals court affirmed.
After deciding that the posted reviews were not commercial speech (which would not be covered by the state's anti-SLAPP statute) and were a matter of public interest (the plaintiff being a lawyer involved in foreclosure proceedings), the court moves on [PDF] to firmly stake out the broad coverage of Section 230 protections for service providers.
Since Yelp is an internet service provider, it is immunized, under section 230 of the Telecommunications Act of 1996, for defamation contained in any third party reviews on a Yelp page pertaining to a given business. The case law on this point is conclusive…
All doubt is removed when we examine two of the most extreme cases illustrating the immunizing effect of section 230, Barnes v. Yahoo!, Inc. (9th Cir. 2009) 570 F.3d 1096 (Barnes) and Carafano v. Metrosplash.com, Inc. (9th Cir. 2003) 339 F.3d 1119. These cases involved more than simple defamatory third party comments. Rather, in both cases third parties were able to use a website to cast the plaintiff in a decidedly negative false light. In Barnes, the ex-boyfriend of the plaintiff posted revenge porn on the website. The court held the website itself was still immune under section 230. (Barnes, supra, 570 F.3d at p. 1103 [to hold the website responsible would be to treat it like a publisher in contravention of section 230].) And in Carafano, the court held a dating website could not be held responsible for a third party’s virtual impersonation of an actress on the site. Of course, section 230 certainly does not immunize third parties who actually write defamatory posts to a website. (E.g., Bentley Reserve LP v. Papaliolios (2013) 218 Cal.App.4th 418 [former tenant could be liable for postings on Yelp about landlord]), but the website itself is unreachable.
The court also dismisses several other accusations by Albert, noting that Yelp has never solicited defamatory/misleading reviews and acts in good faith to remove defamatory or misleading postings when notified. It also points out that Albert's claim that Yelp itself creates misleading/defamatory reviews is not supported by any available evidence.
The plaintiff has asked for the opportunity to amend her complaint (not a bad idea, considering every allegation was rebuffed), but the court points out that the anti-SLAPP statute would be completely useless if complainants were allowed to rewrite their pleadings in light of a court's decision.
As this court recently pointed out, when a complaint is attacked by an anti-SLAPP motion, it cannot be amended so as to add or omit facts that would take the claim out of the protection of the anti-SLAPP statute. In the instant case, the plaintiff sued the ubiquitous business review internet service Yelp, alleging three causes of action which are unmeritorious. On appeal she posits she might be able to amend to allege other causes of action, at least two of which, unfair competition and false advertising, might arguably have merit given the Second District’s recent decision in Demetriades v. Yelp, Inc. (2014) 228 Cal.App.4th 294 (Demetriades) [suit based on Yelp’s statements about itself].) But whether they have merit cannot be reached in this case. Given the rule against amendments to add or omit facts in anti-SLAPP cases, we must affirm the judgment based on the three causes of action actually alleged.
While the decision only confirms what's already assumed about Section 230 protections, it's good to see those protections reaffirmed -- especially given the recent, highly questionable decisions emanating from that area of the country. Yelp will recover the costs of its appeal, and if Albert still has money to blow, she's welcome to sue the people who posted the negative material, rather than the website hosting it.
We've noted in the last month or so a series of court rulings in California that all seem to be chipping away at Section 230. And now we've got another one. As we noted last month, revenge porn extortion creep Kevin Bollaert had appealed his 18-year sentence, and that appeal raised some key issues about Section 230. At the time, it seemed clear that the State of California was misrepresenting a bunch of things in dangerous ways.
Unfortunately, the appeals court has now sided with the state, and that means we've got more chipping away at Section 230. No one disagrees that Bollaert was a creep. He ran a site where people posted naked pictures of others, along with those people's personal info, and then he set up a separate site (which pretended to be independent) where victims could pay to take those pages down. But there are questions about whether or not Bollaert could be held liable for the actions of his users in posting content. Section 230 of the Communications Decency Act (CDA 230) is pretty damn clear that he should not be held liable -- but the court has twisted itself in a knot to find otherwise, basically arguing that Bollaert is, in part, responsible for the creation of the content. This is going to set a bad precedent for internet platforms in California and elsewhere.
The court, not surprisingly, relies heavily on the infamous Roommates.com ruling, which found that site didn't qualify for Section 230 immunity because it asked "illegal" questions (about housing preferences) -- and since the site itself had asked those questions, it was liable for creating that "illegal" content. That's different from what happened with Bollaert's UGotPosted site, but the court works hard to insist the two are close enough:
Here, the evidence shows that like the Web site in Roommates, Bollaert created UGotPosted.com so that it forced users to answer a series of questions with the damaging content in order to create an account and post photographs. That content—full names, locations, and Facebook links, as well as the nude photographs themselves—exposed the victims' personal identifying information and violated their privacy rights. As in Roommates, but unlike Carafano or Zeran, Bollaert's Web site was "designed to solicit" (Roommates, supra, 521 F.3d at p. 1170, italics added) content that was unlawful, demonstrating that Bollaert's actions were not neutral, but rather materially contributed to the illegality of the content and the privacy invasions suffered by the victims. In that way, he developed in part the content, taking him outside the scope of CDA immunity.
I predict that this paragraph will show up in a bunch of other cases. People are going to insist that lots of other platforms that impose any form of structure on user submissions will now be liable if any of the content posted through that structure violates the law. That, again, goes directly against the clearly stated purpose of CDA 230. And it's likely to create something of a mess for internet platforms that regularly rely on 230.
The really crazy thing here is that, earlier in the ruling, the court noted that it didn't even need to answer the Section 230 question, because it already had enough evidence to support charges of acting "with the intent to defraud." But then it answered the CDA 230 issue anyway, and did so badly. No one's going to feel sorry for Bollaert, who is a complete creep. But the wider precedent of this ruling is going to be dangerous and will likely show up in lots and lots of lawsuits against internet platforms going forward.
Back in May, we noted that large cities around the country were rushing to put in place anti-Airbnb laws designed to protect large hotel companies. In that post, we noted that many of the bills almost certainly violated Section 230 of the CDA by making the platform provider, Airbnb, liable for users failing to "register" with the city. Section 230, again, says that a platform cannot be held liable for the actions (or inactions) of its users. San Francisco was the first city to get this kind of legislation pushed through. And while the city's legislators insisted that Section 230 didn't apply, they're now going to have to test that theory in court. Airbnb has asked a court for a preliminary injunction blocking the law, based mainly on Section 230, but also mentioning the Stored Communications Act and tossing in a First Amendment argument just in case.
As designed and drafted by the Board of Supervisors, the Ordinance directly conflicts with, and is preempted by, Section 230 of the Communications Decency Act, 47 U.S.C. § 230 (the “CDA”). According to its own sponsors, the law holds “hosting platforms accountable for the hundreds of units (rented by) unscrupulous individuals” posting listings on their websites, and holds “Airbnb Accountable for Listing Illegal Short Term Rentals.” Declaration of Jonathan H. Blavin (“Blavin Decl.”)... As such, the Ordinance unquestionably treats online platforms like Airbnb as the publisher or speaker of third-party content and is completely preempted by the CDA. In addition, the law violates the Stored Communications Act, 18 U.S.C. §§ 2701 et seq. (the “SCA”), by requiring disclosure to the City of customer information without any legal process, and the First Amendment as an impermissible content-based regulation.
As Airbnb points out, the city even recognized that the bill probably runs afoul of Section 230, but enacted it anyway:
The City was not blind to the fact that the Ordinance might run afoul of the CDA and other laws. Following its passage, the Mayor’s office said that the “mayor remains concerned that this law will not withstand a near-certain legal challenge and will in practice do nothing to aid the city’s registration and enforcement of our short-term rental laws.” ... The City Attorney’s Office acknowledged that the Ordinance could raise “issues under the Communications Decency Act” but claimed that it had been drafted “in a way that minimizes” those issues by regulating “business activities” instead of “content.” ... Despite the City’s best efforts to tiptoe around the CDA through such semantic devices, the problem for the City is that the substance of what the Ordinance seeks to do violates the CDA. No amount of creative drafting can change that reality.
The Stored Communications Act argument involves the ordinance's requirement that Airbnb turn over information on its users. The SCA is a part of ECPA, the Electronic Communications Privacy Act, which is supposed to protect the privacy of electronic communications (though it's in deep need of an update). Here, Airbnb points out that the city ordering it to release customer information almost certainly violates the SCA.
The verification provisions of the Ordinance separately are barred by the SCA. In a futile effort to sidestep the CDA, the Ordinance requires Hosting Platforms to verify listings by disclosing to the City host names and addresses “prior to posting” a listing—and without a subpoena.... But in this failed endeavor to avoid Section 230, the Ordinance runs smack into the SCA, which bars state laws that compel online services like Airbnb to release customer information to governmental entities without legal process.
The First Amendment argument is basically a backstop in case the CDA and SCA arguments fail, so that there's still a constitutional argument for an appeal. If the court deals with the case on CDA and SCA grounds, it probably will avoid the First Amendment question altogether. But the basic argument is that regulating the types of advertisements allowed on Airbnb's platform is a content-based restriction on speech. And there is a strong argument that, in restricting the content on the platform rather than merely punishing the people who post that content to Airbnb, the law violates the First Amendment. There are exceptions, but generally speaking the First Amendment doesn't tolerate laws that block out speech entirely, even commercial speech.
The Ordinance also violates Hosting Platforms’ First Amendment rights. The prohibition on the publication of certain rental advertisements—i.e., those listings without verified registration numbers—is a content-based speech restriction subject to “heightened judicial scrutiny” under the First Amendment.... The City cannot meet its burden of demonstrating that this speech restriction directly advances a substantial state interest and does so in a narrowly tailored way. Even assuming the Ordinance actually advances a substantial state interest (which is questionable), it places a far greater burden on speech than is necessary to achieve that end. The “normal method of deterring unlawful conduct” is to punish the conduct, rather than prohibit speech or advertising regarding it.... The City cannot show that the obvious alternative of enforcing its existing laws against third-party residents who rent properties in violation of the law, rather than against Hosting Platforms, would be ineffective or inadequate. Just the opposite: it is clear the City could enforce its laws directly against hosts who violate them—as it already has begun to do with increasing effectiveness and success—rather than indirectly against Hosting Platforms that publish listings. Further, the law is unconstitutionally overbroad as it punishes platforms for publishing any listing without complying with its “verification” procedures—including those listings that may be lawful.
Whatever you think of Airbnb (and people seem to get more emotional about it than seems reasonable...), this lawsuit could become quite important in making sure that Section 230 remains strong in protecting internet platforms that provide useful services to individuals. In the past month or so, we've seen a number of questionable Section 230 rulings (especially in California) that have started to chip away at this law. However, I don't see how any of those rulings directly apply to this case. The most direct comparison is probably the Model Mayhem case, but there the court was clear that it allowed the claim that California law required the platform to "warn" users to stand in part because it did not require the platform "to remove any user content or otherwise affect how it publishes or monitors such content." That's clearly not the case with this Airbnb law.
Either way, this is a case worth following, and hopefully one where the courts don't lop off another chunk of Section 230's protections (or, for that matter, the SCA's privacy protections).
Separately, there's a very, very bizarre NY Times article about this, falsely claiming that Airbnb is suing over a law it helped pass. That's just wrong. It's really bad reporting. Airbnb is clearly suing over the new language voted in by the SF Board of Supervisors earlier this month, and not the broader law that passed a few years ago.
A few weeks ago, we wrote about how legislators in various cities (mainly SF, Chicago and LA) were trying to push through anti-Airbnb legislation that would require homeowners doing short term rentals to register with the city -- and which would hold the platform (Airbnb) liable if its users failed to do so. As we noted, that almost certainly violates Section 230 of the CDA, which bars any law that attempts to hold a platform liable for the actions of its users. At least in San Francisco, the Board of Supervisors ignored all of this with a city attorney claiming (incorrectly) that since it regulates "business activities of platforms," it's not regulating the content on those platforms. That's an... interesting dodge on the Section 230 issues. It seems unlikely to hold up in court, but California's been especially wacky on CDA 230 lately. The SF legislation has since passed, and it will be interesting to see if anyone (i.e., Airbnb) decides to challenge it in court.
Meanwhile, over in NY state, it seems that they're bringing out an even bigger and more clueless anti-Airbnb sledgehammer. It's a proposed bill that would bar homeowners who use Airbnb from "advertising" short term rentals of their properties. They put it in SCREAMY LETTERS mixed with legalese:
PROHIBITING ADVERTISING THAT PROMOTES THE USE OF DWELLING UNITS IN A CLASS A MULTIPLE DWELLING FOR OTHER THAN PERMANENT RESIDENCE PURPOSES. IT SHALL BE UNLAWFUL TO ADVERTISE OCCUPANCY OR USE OF DWELLING UNITS IN A CLASS A MULTIPLE DWELLING FOR OCCUPANCY THAT WOULD VIOLATE SUBDIVISION EIGHT OF SECTION FOUR OF THIS CHAPTER DEFINING A "CLASS A" MULTIPLE DWELLING AS A MULTIPLE DWELLING THAT IS OCCUPIED FOR PERMANENT RESIDENCE PURPOSES.
Basically: you can't use Airbnb to rent out your home for a short period of time and make some extra money, because NY legislators don't want to upset the hotel business. Airbnb users who violate the law face escalating fines ($1,000 for a first offense, $5,000 for a second and $7,500 for each additional violation). While a quick reading of the bill appears to focus on the homeowners, it can also be read to apply to Airbnb itself, because the definition of "advertise" includes any "WEBSITES" that are "INTENDED OR USED TO INDUCE, ENCOURAGE OR PERSUADE THE PUBLIC TO ENTER INTO A CONTRACT FOR GOODS AND/OR SERVICES." (Sorry for the screamies, which are in the original.)
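For a rough sense of how that escalation adds up for an individual host, here's a purely illustrative sketch: the only figures taken from the bill as described above are the three fine amounts, and everything else (the function name, the five-citation example) is hypothetical.

```python
# Illustrative only: the escalating fine schedule described above
# ($1,000 first offense, $5,000 second, $7,500 each additional).
def ny_advertising_fine(offense_number: int) -> int:
    """Return the fine, in dollars, for the nth advertising violation."""
    if offense_number <= 0:
        raise ValueError("offense_number must be 1 or greater")
    if offense_number == 1:
        return 1_000
    if offense_number == 2:
        return 5_000
    return 7_500

# A host cited five times: $1,000 + $5,000 + 3 * $7,500 = $28,500 in total.
total_for_five = sum(ny_advertising_fine(n) for n in range(1, 6))
print(total_for_five)  # 28500
```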
Apparently, NY legislators are rushing this bill through. The fact that it can reach Airbnb itself means it almost certainly violates Section 230 yet again, but the bigger deal is just how ridiculous this is for anyone in NY who wants to make use of Airbnb. Airbnb is a very useful platform for both homeowners and travelers. It's helpful for the tourism industry and creates a bunch of benefits. It's not perfect, but this kind of bill would effectively kill off a lot of the usefulness of Airbnb. And for what? The message the NY legislature would be sending is "innovation is not welcome in NY." As Julie Samuels wrote in the NY Daily News:
But rather than making it easier to bring this home-sharing consensus to New York and preserve the innovative possibilities in the sharing economy, the legislation in Albany threatens to foreclose productive conversations about a comprehensive regulatory environment for startups like Airbnb. Episodes like these — where New York’s leaders risk signaling that they are not interested in listening to what tech companies have to say — are precisely the kind of stories that loom large in the minds of entrepreneurs and hurt job growth.
What’s worse is that this bill does nothing to address legitimate concerns about home-sharing, or to support tech companies’ efforts to crack down on illegal hotel operators who seek to remove housing from the market. Instead, it sets a bullseye on thousands of middle-class New Yorkers by imposing fines of up to $7,500 for advertising their homes on networks such as Airbnb’s.
It's amazing how often politicians seem to want to attack, rather than nurture, innovation that's helping their constituents.
The lawsuit against Twitter for "providing material support" to ISIS (predicated on the fact that ISIS members use Twitter to communicate) -- filed in January by the widow of a man killed in an ISIS raid -- is in trouble.
Twitter filed its motion to dismiss in March, stating logically enough that the plaintiff had offered nothing more than conclusory claims about its "support" of terrorism, not to mention the fact that there was no link between Twitter and the terrorist act that killed the plaintiff's husband. On top of that, it pointed out the obvious: that Section 230 does not allow service providers to be held responsible for the actions of their users.
U.S. District Judge William Orrick said the complaint fails to show a link between the social media network's actions and the attack that took five lives in Jordan.
"I just don't see causation under the Antiterrorism Act," [Judge William] Orrick said. "There's no allegation that ISIS used Twitter to recruit Zaid."
That deals a blow to one of the lawsuit's allegations. Orrick also didn't buy the plaintiff's claim that Twitter direct messages are somehow different from regular tweets when it comes to Section 230 protections.
Orrick was not persuaded that companies like Twitter could be sued for messages sent by users.
"Just because it's private messaging doesn't put this beyond the Communications Decency Act's reach," Orrick said.
This was in response to the plaintiff's lawyer's assertion that because direct messages are not accessible by the public, Twitter couldn't avail itself of Section 230 protections as a "publisher." Twitter's lawyer countered by pointing out email providers are still considered "publishers" and they can't be held responsible for users' communications, even though those messages are never made public.
It only took about 40 minutes for Judge Orrick to reach a decision, albeit one that doesn't shut down this ridiculous lawsuit completely. The lawsuit has been dismissed, but without prejudice and with an invitation for the plaintiff to file an amended complaint.
Given the hurdles the plaintiff needs to leap (some logical, some statutory) to find Twitter responsible for the actions of terrorists halfway around the world, it's unlikely that an amended complaint will fix the seriously misguided lawsuit. The only people truly responsible for the plaintiff's husband's death are those who took his life. While it's an understandable emotional response to want someone to pay for the murder of a loved one, sometimes there's no way to receive that sort of closure.
Twitter isn't a closed platform developed solely for terrorists' communications. It's available to anyone with an email address… even terrorists. Twitter is routinely criticized for its handling of illicit material and abusive behavior, but the undeniable fact still remains: these unpleasant communications are created by users, not by Twitter. Any attempt to connect the dots between a terrorist attack and terrorist chatter is tenuous, and any attempt to hold platforms responsible for the actions of their users carries with it the potential to make the internet worse for millions of law-abiding users.
Not sure what's going on in California, but its courts have suddenly been issuing a bunch of really bad rulings concerning Section 230 of the CDA (the most important law on the internet). As we've explained many times, Section 230 says that online services cannot be held liable for the actions of their users (and also, importantly, that if those platforms do decide to moderate content in any way, that doesn't impact their protections from liability). This is massively important for protecting free speech online, because it means that platforms don't have to proactively monitor user behavior out of fear of legal liability, and they don't feel the need to over-aggressively take down content to avoid being sued.
Over and over again the courts have interpreted Section 230 quite broadly to protect internet platforms. This has been good for free speech and good for the internet overall (and, yes, good for online companies, which is why some are so against Section 230). But, as we've been noting, Section 230 has been under attack in the past year or so, and all of a sudden courts seem to be chipping away at the protections of Section 230. Last week we wrote about a bad appeals court ruling that said Section 230 did not protect a website from being sued over failing to warn users of potential harm that could come from some users on the site. Then, earlier this week, we wrote about an even worse ruling in San Mateo Superior Court (just a block away from my office...) exempting publicity rights from Section 230.
And now, Eric Goldman points our attention to an even worse ruling coming out of the California Court of Appeal for the First Appellate District. In this ruling, the court determines that Yelp can be forced to delete reviews that the court found defamatory (though entirely on the basis of a default judgment, where the defendant didn't show up in court). In previous cases, most courts have found that even if content is found to be defamatory, a third party website cannot be forced to delete it, because of the pesky First Amendment.
In this case, the court doesn't care. The background: a lawyer, Dawn Hassell, sued a former client, Ava Bird, over negative reviews Bird allegedly posted about Hassell's work. Bird ignored the suit, and the court ruled for Hassell on a default judgment. As part of this, it also ordered Yelp to remove the reviews. Yelp protested. The court then twists itself into all kinds of questionable knots to ignore both Section 230 and the First Amendment. The court first questions whether or not Yelp can even make the First Amendment argument, seeing as it's also claiming that it's not the author of the content in question. Of course, that totally misses the point: it's not necessarily just about the content in the review, but also about Yelp's First Amendment rights in presenting content on its website.
In order to claim a First Amendment stake in this case, Yelp characterizes itself as a publisher or distributor. But, at other times Yelp portrays itself as more akin to an Internet bulletin board—a host to speakers, but in no way a speaker itself. Of course, Yelp may play different roles depending on the context. However, in this context it appears to us that the removal order does not treat Yelp as a publisher of Bird’s speech, but rather as the administrator of the forum that Bird utilized to publish her defamatory reviews.
But, uh, the administrator of a forum still has separate First Amendment rights in determining how they present things in their forum. That's kind of how it works. As Eric Goldman notes:
What the hell is an “administrator of the forum,” and what legal consequences attach to that status? We’re not talking about the free speech rights of a janitor with a mop. This case involves a curator of speech–and even if the curator is just “administrating,” telling a curator how to administrate raises significant speech interests that deserve more respect than this court gave it.
The court then suggests that the First Amendment doesn't apply because Yelp has no right to question a court.
To the extent Yelp has ever meant to contend that an injunction requiring Bird to remove defamatory statements from the Internet injuriously affects Yelp, we disagree. Yelp’s claimed interest in maintaining [its] Web site as it deems appropriate does not include the right to second-guess a final court judgment which establishes that statements by a third party are defamatory and thus unprotected by the First Amendment.
Yikes! That, of course, ignores the actual issue at play -- especially the fact that the finding of defamation was made on default, rather than through an actual adversarial process.
But the really scary part is how the court gets around Section 230. Goldman refers to it as "jujitsu" and that's a pretty apt analogy:
Yelp argues the authority summarized above establishes that the removal order is void. We disagree. The removal order does not violate section 230 because it does not impose any liability on Yelp. In this defamation action, Hassell filed their complaint against Bird, not Yelp; obtained a default judgment against Bird, not Yelp; and was awarded damages and injunctive relief against Bird, not Yelp.
Okay... but then it's ordering Yelp, a non-party, to remove the reviews. And if Yelp does not remove the reviews, then it's in contempt of court, which means that yes, the court is absolutely imposing liability. But, no, says the court, because [reasons].
If an injunction is itself a form of liability, that liability was imposed on Bird, not Yelp. Violating the injunction or the removal order associated with it could potentially trigger a different type of liability which implicates the contempt power of the court.
Got that? It's not liability because it's "a different type of liability." WHAT?!? Where in the law does it say that "a different type of liability" (with no clear definition) is allowed? The court clarifies by muddying the waters some more:
In our opinion, sanctioning Yelp for violating a court order would not implicate section 230 at all; it would not impose liability on Yelp as a publisher or distributor of third party content.
This makes no sense at all.
Separately, the court keeps relying on the fact that Yelp itself was not sued by Hassell, and that all other cases involved service providers that were parties to the case. But that leads to ridiculous results:
As we have pointed out, Hassell did not allege any cause of action seeking to hold Yelp liable for Bird’s tort. The removal order simply sought to control the perpetuation of judicially declared defamatory statements. For this reason, Yelp seriously understates the significance of the fact that Hassell obtained a judgment which establishes that three reviews Bird posted on Yelp.com are defamatory as a matter of law, and which includes an injunction enjoining Bird from repeating those three reviews on Yelp.com. Indeed, that injunction is a key distinction between this case and the CDA cases that Yelp has cited, all of which involved allegations of defamatory conduct by a third party, and not a judicial determination that defamatory statements had, in fact, been made by such third party on the Internet service provider’s Web site.
But under that standard, the court has just offered up a huge loophole around Section 230: just don't name the service provider, and then you can force the service provider to take down the content. If that stands, very bad things will happen as a result. As Goldman points out in response to this, the court is simply wrong:
So the court is flat-out wrong. While I believe it’s correct that none of the cases were posed as contempt proceedings, the actions in both Blockowicz and Giordano also came after lower court findings of defamation. And in any case, WTF? Is the court saying that Section 230 preempts a direct lawsuit against a UGC site seeking injunctive relief, but it’s totally OK to reach the same result by not naming the UGC site in the lawsuit and then enforcing an injunction via contempt proceedings?
Goldman goes on to note how this ruling will create all kinds of mischief opportunities:
Step 1: sue the content poster for defamation in California state court. Do not sue the UGC site because (a) they are immune under Section 230, or (b) they might decide to fight substantively.
Step 2: take advantage of loose service of process rules and/or otherwise hope the poster doesn’t appear in the case. For example, non-California residents aren’t likely to fight in a California court even if they get notice.
Step 3: get a default judgment finding defamation. If the user does make an appearance, a stipulated judgment with the user could reach the same result.
Step 4: seek an injunction requiring removal by the UGC site. Once the judge accepts the service of process and concludes the defendant didn’t show, the judge will probably do just about whatever the plaintiff asks. With the default judgment, the plaintiff can then use the coercive effect of contempt to force the UGC site to remove the content so long as the UGC site is under California’s jurisdictional reach–which most UGC sites are.
Voila! A right to be forgotten in the US, despite the First Amendment and Section 230.
As an added bonus, in the same lawsuit, the plaintiff can target multiple items of unwanted content by claiming it’s also written by the defendant or someone working in concert with the defendant. For example, I don’t believe it was ever confirmed that Birdzeye and JD are the same person, but consistent with the less-stringent approach deployed by judges when faced with default proceedings, the court treats both reviews as if the author(s) of the opinions was in court. If, in fact, JD is a different person, then Hassell successfully scrubbed JD’s content without ever suing the actual author or serving proper notice on the author. As you can see, there’s a great collateral damage potential here.
Goldman also warns that this ruling may not be easy to overturn. Yelp can (and should) appeal to the state Supreme Court, but there's no guarantee it will take the case. There are legislative solutions, but those are unlikely as well. But for the time being, this ruling is a ticking time bomb. It can and will be abused. We see so many attempts to censor content by abusing copyright law, and now California has given people a playbook for how to abuse defamation law to do the same thing.
What a week. Just a few days after we wrote about a dangerous ruling in a federal appeals court in California concerning a way to get around Section 230 of the CDA, now we have another problematic CDA 230 ruling out of California, this time from San Mateo Superior Court Judge Donald Ayoob, with the potential to do a lot of damage to Section 230 as well as anti-SLAPP efforts in California. Paul Levy has a very detailed post about the case, but we'll try to summarize it here.
The case involves Jason Cross, a "country rap" musician who performs under the name Mikel Knight and who has apparently made a name for himself through a highly aggressive "street team" operation that basically travels around in vans pushing people to buy Knight's CDs -- and there are plenty of accusations of sketchy behavior around how those street teams operate, and how Cross treats the people who work for him. Apparently, Cross was not happy with a Facebook group entitled "Families Against Mike Knight and the MDRST" (MDRST standing for Maverick Dirt Road Street Team, which is what Cross calls the street team). He used a court in Tennessee to try to get Facebook to identify who was behind the group, and then demanded that the page be taken down. That effort is still ongoing, though it has been temporarily postponed, and in the meantime he filed a separate lawsuit in California against Facebook and whoever is behind the group, alleging a variety of things, including breach of contract, negligent misrepresentation, negligent interference with prospective economic relations, unfair business practices and various publicity rights violations. Oddly, as Levy points out in his post, despite listing John Does as defendants, the complaint doesn't describe anything done by anyone other than Facebook. However, as part of the discovery process, Cross did (of course!) ask Facebook to identify the people behind the group criticizing him.
Facebook, quite reasonably, asked the court to dismiss the case under California's anti-SLAPP law and pointed to Section 230 for an explanation of why it's immune. The ruling, unfortunately, is very, very confused. It grants some of Facebook's request, saying that Facebook didn't breach any agreement in failing to remove the group, but refuses to dismiss the publicity rights claims, stating that publicity rights are "intellectual property" and intellectual property is not covered by Section 230. The first half of the ruling does note that Facebook is not liable for the regular content on those pages and thus it was under no obligation to take them down, but then goes off the rails on the publicity rights claim.
You might wonder where there's a publicity rights claim in any of this, but it appears that Cross is arguing that because (a) the group uses images of him (as Mikel Knight) and (b) Facebook puts ads on those pages, this is an abuse of his publicity rights for commercial advantage. Really.
Here, it is alleged that Facebook had knowledge since October 2014 that pages using Knight's likeness and identity were being created on its site.... Knight states that he did not consent to these pages or the advertising Facebook placed on them.... Facebook's financial performance is based on its user base; accordingly, Facebook's alleged use of Knight's image on the unauthorized pages generates advertising revenue for the company.... Knight states that Facebook's unauthorized use of his image has resulted in substantial harm.... Accordingly, Plaintiffs have shown a probability of prevailing on their rights of publicity claims. Because the Sixth Cause of Action is a derivative claim that may arise from either or both the Fourth and Fifth Causes of Action, here too Plaintiffs have shown a probability of prevailing.
This seems wrong on a number of different counts. First, and most importantly, how the hell is this a legitimate publicity rights claim? Publicity rights are supposed to be about stopping companies from using an image of a famous person in a manner that suggests endorsement when the person did no such thing. It's a very, very twisted (and incorrect) notion to argue that, because Facebook has some ads on the same page as a group that complains about Cross/Knight, it's violating his publicity rights. As Levy notes in his post:
If this ruling is upheld, it will blow a gaping hole in the immunity provided by section 230. Plaintiffs who are unhappy about being criticized on any platform provided by an online service provider will be able to force the removal of those materials, and without even showing that there is anything false or otherwise tortious about the criticism – all they will have to argue is that they did not give permission for the use of their names or images in the criticism and that will be enough to make out a viable legal claim against the hosting company.
That's frightening. But, as Levy also notes, this seems to clearly be a misreading of the law:
...it simply cannot be the case that a violation of the right of publicity can be found whenever someone talks about a celebrity and thereby makes a profit. People Magazine and the National Enquirer, for example, and a variety of other publications make their money writing about individuals about whom the public has an insatiable appetite for information, but they do not require the celebrities’ permission to write about them. Indeed, to the extent that the right of publicity is analogous to trademark rights, it applies when a use of the celebrity’s name and likeness creates a likelihood that consumers will believe that the celebrity has endorsed the company that used the name and likeness (analogous to the “likelihood of confusion” requirement).
The judge here is just confused.
Second, there's the question of whether or not CDA 230 should apply to publicity rights claims at all. It is true that CDA 230 explicitly carves out "intellectual property," but as Levy notes, there's some debate as to whether that applies just to federal IP laws or if it also covers state ones. Publicity rights are a purely state law concept. Of course, I think the argument could go even further, and it could be claimed that publicity rights shouldn't even be considered intellectual property in the first place. I already have issues with lumping copyright and patents together with trademarks as "intellectual property." But adding amorphous and ever-changing publicity rights into the same bucket is problematic as well. There is no official definition of what counts as "intellectual property." This is, perhaps, more a problem of CDA 230 having created such a broad carveout in the first place (it should have at least specified "copyright" or whatever), but stretching the exemption to cover publicity rights is dangerous -- especially when courts like this one seem so confused about what's actually covered by publicity rights.
Levy also notes that there's a First Amendment issue behind all of this, in that it's pretty clear that Cross is seeking to suppress people saying things about him that he doesn't like. Levy -- who has been investigating this case to see if his organization, Public Citizen, should get involved (and his post details conversations with a number of people who used to be a part of Cross's "street team") -- says he intends to file an amicus brief in Facebook's inevitable appeal.
Either way, it's unfortunate that a judge would get such issues so incredibly wrong in a manner that could have a serious impact on free expression. Hopefully, the ruling is quickly reversed on appeal.
Fights over tech policy are increasingly going local. Historically, most technology regulation has been a federal issue. There have been a few attempts to regulate at the state level -- including Pennsylvania's ridiculous attempt to demand ISPs filter out porn in the early 2000s. But state legislators and Attorneys General eventually learned (the hard way) that federal law -- specifically CDA 230 -- preempts any laws that look to hold internet platforms liable for the actions of their users. This is why state Attorneys General hate Section 230, but they need to deal with it, because it's the law.
It's looking like various cities are now about to go through the same "education" process that the states went through in the last decade. With the rise of "local" services like Uber and Airbnb, city by city regulation is becoming a very, very big deal. And it seems that a bunch of big cities are rapidly pushing anti-Airbnb bills that almost certainly violate Section 230 and possibly other federal laws as well. In particular, San Francisco, Los Angeles and Chicago are all pushing laws to further regulate platforms for short term housing rentals (and yes, the SF effort comes just months after another shortsighted attempt to limit Airbnb failed).
The bills basically force people who want to use platforms like Airbnb to register with the city, and then hold the platforms liable if a renter does not include the registration info in their profile. Gautam Hans does a nice job in the link above of outlining why San Francisco's proposed bill -- which will be voted on shortly -- clearly would fail to survive a Section 230 challenge:
This imposition of liability clearly goes against Section 230, which states in (c)(1) that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” — meaning that, if an information content provider, typically an individual user, posts something illegal, the interactive computer service, typically a website, can’t be held liable for it. Moreover, under (e)(3), “no liability may be imposed under any State or local law that is inconsistent with this section.” States and localities can pass laws that are consistent with Section 230, but anything inconsistent with Section 230 — like the imposition of liability on a website operator for user-generated content — is unlawful. From a logistical perspective, this makes a great deal of sense. If states and cities could enact a variety of conflicting laws, the whole point of Section 230 would be undermined. As a global medium, the internet wouldn’t work if it were subject to piecemeal regulations by every state and city within the US.
Hans also points out that the Chicago proposal (which is ~50 pages!) is equally bad:
The other recent proposal, from Chicago, creates similar issues by holding platforms liable for user content. Like the San Francisco proposal, it uses fines as the leverage to require platforms to ensure that listings on a platform have been approved by the city. And, as with the San Francisco proposal, the architecture of the liability structure runs afoul of Section 230’s preemption clause. The problematic language in this legislation, Section 4-13-250, states “It shall be unlawful for any licensee … to list, or permit to be listed, on its platform any short term residential rental that the commissioner has determined is ineligible for listing”; the penalty for violations, in Section 4-13-410, is “a fine of not less than $1,500.00 nor more than $3,000.00 for each offense. Each day that a violation continues shall constitute a separate and distinct offense.” This essentially creates a strict liability regime for website operators based on third-party content: if a user uploads a non-compliant rental listing, the site operator would immediately be in violation of this provision, regardless of whether they were aware of the posting or its ineligible status. No matter what the amount the potential fine is, this imposition of liability clearly contravenes Section 230.
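To get a feel for how quickly that per-listing, per-day liability compounds for a platform, here's a rough, purely illustrative sketch. The only figures taken from the ordinance language quoted above are the $1,500 minimum and $3,000 maximum per-offense fines; the function name and the example numbers are hypothetical.

```python
# Rough sketch of the strict-liability exposure described above: each ineligible
# listing is an offense, and each day it remains up counts as a separate offense.
def chicago_platform_exposure(ineligible_listings: int, days_up: int,
                              fine_per_offense: int = 1_500) -> int:
    """Potential fines for a platform under the quoted per-day liability structure."""
    return ineligible_listings * days_up * fine_per_offense

# e.g. 100 non-compliant listings left up for 30 days:
print(chicago_platform_exposure(100, 30))          # 4500000 (at the $1,500 minimum)
print(chicago_platform_exposure(100, 30, 3_000))   # 9000000 (at the $3,000 maximum)
```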
Hans doesn't cover the LA law, but it's just as problematic (potentially more so!). Like the SF and Chicago bills, it focuses on requiring registration and then puts liability on the platforms:
Hosting Platform Requirements.
(1) Actively prevent, remove and cancel any illegal listings and bookings of short term rentals including where a listing has been offered: without a Home-Sharing registration number; by a Host who has more than one listing in the City of Los Angeles; or, for a rental unit that exceeds 90 days in a calendar year.
Yes, sure, cities are concerned about how Airbnb can impact the way they are run -- though over and over again we've seen evidence that Airbnb can be super helpful to cities in terms of increasing tourism and opening up new ways for people to earn money. But if cities want to target questionable practices, they should do so by targeting the actual questionable practices, not by trying to skip around Section 230 and pretending it doesn't exist. I'm sure that, as with the state AGs, we'll hear city officials whine about how terrible Section 230 is and how it gets in the way of them "protecting citizens" or whatever they're going to claim, but those claims are silly. Section 230 is about properly targeting liability. When you point the liability in the wrong direction -- at platforms -- you reduce innovation and chill useful services. As Hans notes:
Enforcing the laws of a city or state is an important goal, especially when those laws are designed for compliance, safety, and non-discrimination. Yet it is equally important to ensure that the internet remains an open platform for innovation and exchange, which requires ensuring that intermediaries are not held legally responsible for content they did not author. In enacting Section 230, Congress ensured that this value would be the law of the land, and it is important that cities and states abide by superseding federal law.
One would hope that the cities in question would recognize the legal problems with their own bills before they decide to move forward on any of them. Otherwise, they're just going to end up wasting a ton of taxpayer money when someone takes these to court, and the cities inevitably lose, just as the states did a few years ago.
About a decade ago, we wrote about a series of silly lawsuits in which search engine optimizers sued Google because their search engine rankings sucked. All of these lawsuits went nowhere fast. The reason seems fairly straightforward: it's Google's search engine, and it gets to decide how its algorithm works. Having the courts come in and start mucking with that gets problematic fast.
While I thought those kinds of cases went out of style a decade ago, apparently another SEO firm, called e-ventures, sued Google after the company called e-ventures' site "pure spam" and removed it from the Google Index. This is a level of punishment that Google has been known to slap on really egregious and sketchy SEO tactics. Google takes a pretty hard line on really scammy tactics, and even once famously banned BMW's website for spammy techniques.
Apparently, at some point, Google's web spam team decided that e-ventures was spamming as well, and removed its websites from the index. The company sued under a variety of theories, mainly claiming that it did nothing that violated any of Google's stated rules -- and, furthermore, that Google was misleading in some of its public statements about what it will and won't remove from its search results, as well as how it alerts people to those removals. Google hit back with two responses in a motion to dismiss. First, it said that it's protected under CDA 230 for removing content; second, that the choices it makes about how search results are ranked are protected by the First Amendment.
First: the CDA 230 claim is a different one than we normally talk about with CDA 230. Normally we're focused on CDA 230(c)(1), which talks about a service provider not being treated as the publisher of content from users. Here, no one denies that this is about Google's own search engine and its own actions. But Google is pointing to a different part of the law, sometimes known as the "Good Samaritan" clause in CDA 230(c)(2)(A), which says that no provider shall be held liable for "any action voluntarily taken in good faith to restrict access to or availability of material...." This was designed to actually encourage sites to take down sketchy or "obscene" content. Basically, it's saying that if you decide to take down some content you deem to be obscene, that does not remove your Section 230 immunity, and it doesn't mean you're now required to take down content other people find obscene. Google's argument is that this applies to search removals as well, and since it's making a good faith effort to remove content it finds objectionable, it's protected from liability.
This argument seems pretty strong within the context of Section 230, but the court doesn't buy it, though its reasons are kind of odd:
The CDA statutory immunity is an affirmative defense which plaintiff is not required to negate in its Complaint. The plain language of the CDA only provides immunity for actions “voluntarily taken in good faith.”... While the CDA defense may properly be considered if it is apparent from the face of the complaint, that is not the situation in this case. Here, plaintiff has included allegations within its Second Amended Complaint that Google failed to act in good faith when removing its websites from Google’s search results.
But that seems to wipe away much of CDA 230(c)(2)(A). So long as the plaintiff claims that a content removal was done in "bad faith," the immunity disappears? That can't be right... but the court says it's fine for now.
Perhaps the bigger issue, though, is the First Amendment claim. Again, the court rejects Google's arguments, and tries to thread the needle carefully. It says that it agrees that Google's search rankings are protected by the First Amendment, but that the real issue here is not the actual search rankings, but rather the statements Google made about why it removes some sites.
While a claim based upon Google’s PageRanks or order of websites on Google’s search results may be barred by the First Amendment, plaintiff has not based its claims on the PageRanks or order assigned to its websites. Rather, plaintiff is alleging that as a result of its pages being removed from Google’s search results, Google falsely stated that e-ventures’ websites failed to comply with Google’s policies.... Google is in fact defending on the basis that e-ventures’ websites were removed due to e-ventures’ failure to comply with Google’s policies.... The Court finds that this speech is capable of being proven true or false since one can determine whether e-ventures did in fact violate Google’s policies. This makes this case distinguishable from the PageRanks situation. Therefore, this case does not involve protected pure opinion speech, and the First Amendment does not bar the claims as pled in the Second Amended Complaint.
This feels like the strongest point the court has, but it's still pretty weak. Google's policies include some basic catch-alls, saying that it can choose to remove search results based on its "policies." That is, it can basically decide what it wants in its search results. And that seems perfectly reasonable. It seems dangerous to think that courts can tell a website what must be included in its search engine.
The court also rejects another First Amendment argument in a way that also seems problematic -- saying that while "editorial judgment" is protected by the First Amendment, anti-competitive motives are not:
While publishers are entitled to discretion for editorial judgment decisions, plaintiff has alleged that Google’s reason for banning its websites was not based upon “editorial judgments,” but instead based upon anti-competitive motives.... Further, a fact published maliciously with knowledge of its falsity or serious doubts as to its truth is sufficient to overcome the editorial judgment protection afforded by the Constitution....
Two thoughts on this: first, the idea that Google is removing an SEO company's websites for anti-competitive reasons seems ludicrous on its face. I mean, Google links heavily to a number of actual direct competitors all the time. It's beyond reason to suggest that it would target a small no-name SEO firm. Second, again, this semantic setup gives plaintiffs a massive out on the First Amendment. Just claim anything is not "editorial judgment" but "anti-competitive motives," and suddenly the First Amendment issue gets tossed aside?
The court also lets the claims around trademark, unfair practices and tortious interference move forward, but they're basically rehashes of the points above. The only count it dismisses is a defamation claim, which was a clear nonstarter.
While the ruling doesn't mean that e-ventures will succeed overall, since these issues can be debated again in more detail as the case moves forward, it seems likely that Google may try to appeal the basis for these denials. No matter what you think of Google as an entity, having courts tell it what can and cannot be in its index seems very dangerous.
So... you may recall that, back in December, we received and responded to a ridiculous and bogus legal threat sent by one Milorad "Michael" Trkulja from Australia. Mr. Trkulja had sent the almost incomprehensible letter to us and to Google, making a bunch of claims, many of which made absolutely no sense at all. The crux of the matter was that, back in November of 2012, we had written an article about a legal victory by Mr. Trkulja against Google. That case arose because, when you searched on things like "sydney underworld criminal mafia" in Google's Image search, sometimes a picture of Trkulja would show up. His argument was that this was Google defaming him, because its algorithms included him in the results of such a search and he was not, in fact, a part of the "underworld criminal mafia."
Either way, back in 2012 we wrote about that case, and Trkulja was upset that a comment on that story jokingly referred to him as a "gangster." Because of that, Trkulja demanded that we pay him lots of money, that we delete the story and the comments, and that Google delist all of Techdirt entirely. In our response, we pointed out that the comment was not defamatory; that the statute of limitations had long since passed even if it were; that, as an American company, we're protected by Section 230 of the CDA; and that even if he took us to court in Australia, we'd still be protected by the SPEECH Act. Finally, we suggested that perhaps he chill out and not care so much about what an anonymous person said in the comments of an internet blog over three years ago -- especially when many people consider it a compliment to be called "a gangster."
In any event, it seemed fairly clear that there was no actual "harm" to Mr. Trkulja, given that he didn't even seem to care about it for over three years.
We had hoped that this would be the end of it, but apparently it is not. A few weeks back, we received the following, absolutely bogus legal threat from an Australian lawyer by the name of Stuart Gibson, who appears to work for an actual law firm called Mills Oakley. The original threat from Mr. Trkulja could, perhaps, be forgiven, seeing as he almost certainly wrote it himself (again, it was incomprehensible in parts, and full of grammatical and typographical errors). Our response was an attempt to educate Mr. Trkulja against making bogus threats.
However, now that he's apparently wasting money on a real lawyer like Gibson, we will address the rest of our response to Gibson: Your letter is ridiculous, censorious and not even remotely applicable. Going to court over this will make you and your client look extremely foolish. But let's dig in, because Mr. Gibson seems to think that blustery bullshit will scare us off. He's woefully misinformed on this.
First off, if you send a legal threat and say "NOT FOR PUBLICATION" at the top, it's tough to take you seriously, because such a statement is meaningless. We have no contractual agreement not to publish such information, and if you send us a bogus legal threat, we are damn well going to publish it:
And now on to the crux of Gibson's argument: we said mean things about his client and somebody's feelings may have been hurt.
The relevant portion of the letter reads:
The matter that you have published conveys false and defamatory meanings including (but not limited to) the following:
Our client is a gangster;
That our client by virtue of his legal claims is incompetent and unfit to be a litigant;
That our client by virtue of his legal claims is a ridiculous litigant;
That our client is a criminal and a participant in organised crime;
That our client is unfit to be a litigant
None of these meanings is defensible. Our client is not a criminal and has never been a gangster nor associated with such persons. Accordingly there is no factual basis for the imputations published.
Let's go through these one by one. First off, we never said that Mr. Trkulja is a gangster. In fact, in both of our previous stories about him, we noted that his concern was over being called a gangster when he was not one. In claiming otherwise, Mr. Gibson is lying in his legal threat to us. As a suggestion, lying in your legal threat letter is not a very good idea.
Second, at no point did we state that Mr. Trkulja was incompetent or unfit to be a litigant. We merely published his own words -- admittedly including his misspellings, grammatical errors and general confusion -- and our responses to them. If Mr. Gibson thinks this implies that his client is unfit to be a litigant, perhaps he should check his own biases.
Third, Mr. Gibson is again assuming a claim we never made. We did say that the threat against us was ridiculous -- an opinion we stand by. But we did not say he was a "ridiculous litigant." Also, "ridiculous" is a statement of opinion, and even in nutty Australia, "honest opinion" is not defamation. And it is our "honest opinion" that the threat is ridiculous.
Fourth, this is a repeat of the first claim. It was false the first time, and it's still false. Repeating a false claim may allow Mr. Gibson to add to his billable hours, but doesn't seem like particularly good lawyering.
Fifth, this is a repeat of the second claim. See point four above. And point two above.
So let's be clear: we did not say that Mr. Trkulja was a gangster. We said, in our honest opinion, that he won a lawsuit whose result we disagree with, and that his legal threat to us was ridiculous. This is all perfectly reasonable and protected free speech. Second, we posted Mr. Trkulja's own words, which, again in our honest opinion, do show the "ridiculousness" of his threat to us, in that it was filled with grammatical and spelling errors and was, at points (again, in our honest opinion), incomprehensible gibberish.
Mr. Gibson then suggests that arrogance is somehow defamatory:
Moreover your commentary that still resides on your website is an arrogant, false and poorly researched piece for the following reasons:
The reference to "gangster" is not "totally innocuous". The reference is grossly defamatory and indefensible. One could not conceive a more defamatory reference than that. It may be a throwaway line in the United States but it is certainly not in this jurisdiction.
Judgments against US companies especially those resident in California are enforceable particularly monetary judgments.
You are not protected by the Speech Act.
This firm has enforced numerous judgments against corporations in your jurisdiction.
Your reference to "free speech" is absolute nonsense. Speech may be free but it is also actionable.
You did publish the comment. Under Australian defamation law, you have a duty as a moderator to moderate third party comments. If you do not and refuse to take action when given notice, you are liable.
First off, I may not be an expert on Australian defamation law, but I can tell you I find it difficult to believe that "arrogance" or "poorly researched" information is defamatory there. It certainly is not defamatory in the US. Furthermore, Mr. Gibson, you are wrong that the piece was poorly researched: it was well researched and backed up with a great amount of detail -- detail that, I will note, your own threat letter to us appears to be lacking. And I'm sorry if we come off as arrogant to you, but we're allowed to speak our minds.
Next, Mr. Gibson, you "could not conceive a more defamatory reference" than calling someone a gangster? Really, now? Because I'm at least moderately familiar with some Australian insults, and many of them seem way, way worse than "gangster" -- which, again, I will remind you, we never called your client (and, in fact, we correctly noted that he was upset at someone else calling him a gangster). And, yes, it is innocuous. No one cares that someone anonymously in a blog comment jokingly called your client a gangster. It was harmless, as is fairly clearly evidenced by the fact that your client didn't even notice it for over three years.
Next, I'll note that for all your talk of enforcing Australian monetary judgments in California, you don't name a single one. And you're wrong, because the SPEECH Act absolutely does apply, and you'd be exceptionally foolish to test this, though of course that is your decision to make. The text of the SPEECH Act is pretty explicit about when foreign defamation judgments are enforceable in the US -- and (clue time!) they generally don't count if the statements wouldn't be defamatory under US law. It also requires that the foreign court's exercise of jurisdiction comport with US due process:
a domestic court shall not recognize or enforce a foreign judgment for defamation unless the domestic court determines that the exercise of personal jurisdiction by the foreign court comported with the due process requirements that are imposed on domestic courts by the Constitution of the United States.
Second, the law is also explicit that a service provider such as us (with respect to comments published by readers on our site), if protected by CDA 230 in the US, would be similarly protected against foreign judgments:
a domestic court shall not recognize or enforce a foreign judgment for defamation against the provider of an interactive computer service, as defined in section 230 of the Communications Act of 1934 (47 U.S.C. 230) unless the domestic court determines that the judgment would be consistent with section 230 if the information that is the subject of such judgment had been provided in the United States.
I recognize that you're an Australian lawyer, not a US one, but I would suggest doing at least a tiny bit of research into the caselaw on Section 230 in the US. You will quickly learn that we do qualify as a service provider and that, no, we are not liable for statements in the comments. And, hell, even if we were, and even if the comments were defamatory under US law (which they're not), the statute of limitations on those original comments is long past anyway.
And, yes, in case you still have not read the SPEECH Act, the legal burden will be on you here:
The party seeking recognition or enforcement of the foreign judgment shall bear the burden of establishing that the judgment is consistent with section 230.
Good luck with that.
In case you still decide to ignore the actual text of the law, you can also go digging through the legislative record on the SPEECH Act, in which it was made explicit that the law was designed to protect against such forms of "libel tourism."
The purpose of this provision is to ensure that libel tourists do not attempt to chill speech by suing a third-party interactive computer service, rather than the actual author of the offending statement.
You can claim the law doesn't apply, but you are wrong. The text is clear. You can claim that you have won judgments or monetary awards in the past. And perhaps you have, but if you try to move against us, you will be facing the SPEECH Act and you will lose.
So, given all of the above, we will not be undertaking any of your demands. We will not apologize, as we have nothing to apologize for. We will not retract anything, as we did not make any false or defamatory publications. We will not remove anything from our website. We will not pay your client anything, neither "reasonable costs" nor "a sum of money in lieu of damages."
Instead, we will tell you, as we did originally, to go pound sand and to maybe think twice before making bogus legal threats that you cannot back up.