It Doesn't Make Sense To Treat Ads The Same As User Generated Content
from the cleaning-up-the-'net dept
Paid advertising content should not be covered by Section 230 of the Communications Decency Act. Online platforms should have the same legal risk for ads they run as print publishers do. This is a reform that I think supporters of Section 230 should support, in order to save it.
Before I explain why I support this idea, I want to make sure I'm clear as to what the idea is. I am not proposing that platforms be liable for content they run ads next to -- just for the ads themselves. Nor am I proposing that the liability lies in the "tools" platforms provide that can be used for unlawful purposes; that's a different argument. This is not about liability for providing a printing press, but for specific uses of the printing press -- that is, publication.
I also don't suggest that platforms should lose Section 230 if they run ads at all, or run some subset of ads like targeted ads -- this is not a service-wide on/off switch. The liability would just be normal, common-law liability for the content of the ads themselves. And “ads” just means regular old ads, not all content that a platform commercially benefits from.
It's fair to wonder whom this applies to. Many of the examples listed below have to do with Facebook selling ads that are displayed on Facebook, or Google placing ads on Google properties, and it's pretty obvious that these companies would be the ones facing increased legal exposure under this proposal. But the internet advertising ecosystem is fiendishly complex, and there are often many intermediaries between the advertiser itself and the proprietor of the site the ad is displayed on.
So at the outset, I would say that any and all of them could potentially be liable. If Section 230 doesn't apply to ads, it doesn't apply to supplying ads to others; in fact, these intermediary functions are considered a form of "publishing" under the common law. Which party to sue would be the plaintiff's choice, and there are existing legal doctrines that prevent double recovery and that allow one losing defendant to bring in, or recover from, other responsible parties.
It's important to note, too, that this is not strict or vicarious liability. In any given case, it could be that the advertiser is found liable for defamation or some kind of fraud but the platform isn't, because the elements of the tort are met for one and not the other. Whether a given actor has the "scienter" or knowledge necessary to be liable for some offense has to be determined for each party separately -- you can't impute one party's state of mind onto another, and strict liability torts for speech offenses are, in fact, unconstitutional.
The Origins Of An Idea
I first started thinking about it in the context of monetized content. After a certain dollar threshold is reached with monetized content, there should be liability for that, too, since the idea that YouTube can pay thousands of dollars a month to someone for their content but then have a legal shield for it simply doesn't make sense. The relationship of YouTube to a high-paid YouTuber is more similar to that between Netflix and a show producer, than it is between YouTube and your average YouTuber whose content is unlikely to have been reviewed by a live, human YouTube official. But monetized content is a marginal issue; very little of it is actionable, and frankly the most detestable internet figures don't seem to depend on it very much.
But the same logic runs the other way, to when the content creator is paying a platform for publishing and distribution, instead of the platform paying the content creator. And I think eliminating 230 for ads would solve some real problems, while making some less-workable reform proposals unnecessary.
Question zero should be: Why are ads covered by Section 230 to begin with? There are good policy justifications for Section 230 -- it makes it easier for there to be sites with a lot of user posts that don't need extensive vetting, and it gives sites a free hand in moderation. Great. It's hard to see what that has to do with ads, where there is a business relationship. Businesses should generally have some sense of whom they do business with, and it doesn't seem unreasonable for a platform to do quite a bit more screening of ads before it runs them than of tweets or vacation updates from users before it hosts them. In fact, I know that it's not an unreasonable expectation because major platforms such as Google and Facebook already subject ads to heightened screening.
I know I'm arguing against the status quo, so I have the burden of persuasion. But in a vacuum, the baseline should be that ads don't get a special liability shield, just as in a vacuum, platforms in general don't get a liability shield. The baseline is normal common law liability and deviations from this are what have to be justified.
I'm Aware That Much "Harmful" Content Is Not Unlawful
A lot of Section 230 reform ideas either miss the mark or are incompletely theorized, since, of course, much -- maybe even most -- harmful online content is not unlawful. Without 230, if you sued a platform over such content, you'd still lose; it would just take longer.
You could easily counter that the threat of liability would cause platforms to invest more in content moderation overall. While I do think that is likely true, it is also likely that such investments would lead to over-moderation that limits free expression by speakers considered even mildly controversial.
But with ads, there is a difference. Much speech that would be lawful in the normal case -- say, hate speech -- can be unlawful when it comes to housing and employment advertisements. Advertisements carry more restrictions and regulations in any number of ways. Finally, ads can be tortious in the standard ways as well: they can be fraudulent, defamatory, and so on. This is true of normal posts as well -- but with ads, there's a greater opportunity, and I would argue obligation, to pre-screen them.
Many Advertisements Perpetuate Harm
Scam ads are a problem online. Google recently ran ads for scam fishing licenses, despite being told about the problem. People looking for health care information are being sent to lookalike sites instead of the site for the ACA. Facebook has even run ads for low-quality counterfeits and fake concert tickets. Search for a locksmith and you may well hire a scammer -- you might as well set your money on fire and batter down your own door. Ads trick seniors into sinking their savings into overpriced precious metals. Fake customer support lines steal people's information -- and money. Malware is distributed through ads. Troublingly, internet users in need of real help are misdirected to fake "rehab clinics" or pregnancy "crisis centers" through ads.
Examples of this kind are endless. Often, there is no way to track down the original fraudster. Currently, Section 230 allows platforms to escape most legal repercussions for enabling scams of this kind, while allowing the platforms to keep the revenue earned from spreading them.
There are many more examples of harm, but the last category I'll talk about is discrimination, specifically through housing and employment discrimination. Such ads might be unlawful in terms of what they say, or even to whom they are shown. Putting racially discriminatory text in a job or housing ad can be discriminatory, and choosing to show a facially neutral ad to just certain racial groups could be, as well. (There are tough questions to answer -- surely buying employment ads on sites likely to be read by certain racial groups is not necessarily unlawful -- but, in the shadow of Section 230, there's really no way to know how to answer these questions.)
In many cases under current law, there may be a path to liability in the case of racially discriminatory ads, or other harmful ads. Maybe you have a Roommates-style fact pattern where the platform is the co-creator of the unlawful content to begin with. Maybe you have a HomeAway fact pattern where you can attach liability to non-publisher activity that is related to user posts, such as transaction processing. Maybe you can find that providing tools that are prone to abuse is itself a violation of some duty of care, without attributing any responsibility for any particular act of misuse. All true, but each of these approaches only addresses a subset of harms and frankly seem to require some mental gymnastics and above-average lawyering. I don't want to dissuade people from taking these approaches, if warranted, but they don't seem like the best policy overall. By contrast, removing a liability shield from a category of content where there is a business relationship and a clear opportunity to review content prior to publication would incentivize platforms to more vigorously review.
A Cleaner Way to Enforce Anti-Discrimination Law and Broadly Police Harm
It's common for good-faith reformers to propose simply exempting civil rights or other areas of law from Section 230, preventing platforms from claiming Section 230 as a defense to any civil rights lawsuit, much as federal criminal law is already exempted.
The problem is that there is no end of good things that we'd like platforms to do more of. The EARN IT Act proposes to create more liability for platforms to address real harms, and SESTA/FOSTA likewise exempts certain categories of content. There are problems with this approach in terms of how you define what platforms should do, and what content is exempted, and issues of over-moderation in response to fears of liability. This approach threatens to make Section 230 a Swiss cheese statute where whether it applies to a given post requires a detailed legal analysis, which has other significant harms and consequences.
Another common proposal is to exempt "political" ads from Section 230, or targeted ads in general (or to somehow tackle targeting in some non-230 way). There are just so many line-drawing problems here, making enforcement extremely difficult. How, exactly, do you define "targeted"? How, by looking at an ad, can you tell whether it is targeted, contextual, or just part of some broad display campaign? With political ads, how do you define what counts? Ads from or about campaigns are only a subset of political ads -- is an ad about climate change "political," or an ad from an energy company touting its green record? In the broadest sense yes, but it's hard to see how you'd legislate around this topic.
Under the proposal to exempt ads from Section 230, the primary question to answer is not whom the content is addressed to and what harms it may cause, but simply whether it is an ad. Ads are typically labeled as such and quite distinct -- and it may be the case that there need to be stronger ad disclosure requirements and penalties for running ads without disclosure. There may be other issues around boundary-drawing as well -- I perfectly well understand that one of the perceived strengths of Section 230 is its simplicity, relative to complex and limited liability shields like Section 512 of the DMCA. Yet I think these problems are tractable.
Protection for Small Publishers
I've seen some publishers respond to user complaints about low-quality or even malware-distributing ads running on their sites by pointing out that they don't see or control the ads -- the ads are delivered straight from the ad network to the user, alongside publisher content. (I should say straight away that this still counts as "publishing" an ad. If the user's browser is infected by malware that inserts ads, or if an ISP or some other intermediary inserts the ad into the publisher's content, then the publisher is not liable; but if a website embeds code that serves ads from a third party, it is "publishing" that ad in the same sense as a back page ad in a fancy Conde Nast magazine. Whether that leads to liability just depends on whether the elements of the tort are met, and whether 230 applies, of course.)
For major publishers I don't have a lot of sympathy. If their current ad stack lets bad ads slip through, they should use a different one, if they can, or demand changes in how their vendors operate. Today, the incentives for publishers and ad tech vendors don't favor a more responsible approach; changing the law would align them.
At the same time, it may be true that some small publishers depend on ads delivered by third parties, and not only does the technology not allow them to take more ownership of ad content, they lack the leverage to demand better tools. Under this proposal, these small publishers would be treated like any other publisher for the most part, though I tend to think that it would be harder to meet the actual elements of an offense with respect to them. That said, I would be on board with some kind of additional stipulation requiring ad tech vendors to defend, and pay out for, any case where publishers below a certain threshold are hauled into court for distributing ads they have no control over but are financially dependent on. Additionally, to the extent that the ad tech marketplace is so concentrated that major vendors are able to shift liability away from themselves to less powerful players, antitrust and other regulatory intervention may be needed to assure that risks are borne by those best positioned to prevent them.
The Tradeoffs That Accompany This Idea Are Worth It
I am proposing to throw sand in the gears of online commerce and publishing, because I think the tradeoffs in terms of consumer protection and enforcing anti-discrimination laws are worth it. Ad rates might go up, platforms might be less profitable, ads might take longer to place, and self-serve ad platforms as we know them might go away. At the same time, fewer ads could mean less ad-tracking. An across-the-board change to the law around ads should not tilt the playing field towards big players any more than it already is, and would not likely lead to an overall decline in ad spending, just a shift in how those dollars are spent (to different sites, and to fewer but more expensive ads).
This proposal would burden some forms of speech more than others, too, so it’s worth considering First Amendment issues. One benefit of this proposal over subject matter-based proposals is that it is content neutral, applying to a business model. Commercial speech is already subject to greater regulation than other forms of speech, and this is hardly a regulation, just the failure to extend a benefit universally. Though of course that can be a different way of saying the same thing. But if extending 230 to ads were constitutionally required wherever 230 is extended at all, the same logic would seem to require that 230 be extended to print media, or possibly even to first-party speech. That cannot be the case. And I have to warn people that if proposed reforms to Section 230 are always argued to be unconstitutional, that makes outright repeal of 230 all the more likely, which is not an outcome I’d support.
Fans of Section 230 should like this idea because it forestalls changes they no doubt think would be worse. Critics of 230 should like it because it addresses many of the problems they've complained about for years, and has few if any of the drawbacks of content-based proposals. So I think it's a good idea.
John Bergmayer is Legal Director at Public Knowledge, specializing in telecommunications, media, internet, and intellectual property issues. He advocates for the public interest before courts and policymakers, and works to make sure that all stakeholders -- including ordinary citizens, artists, and technological innovators -- have a say in shaping emerging digital policies.
Filed Under: ads, advertisements, content moderation, liability, section 230
Reader Comments
If an advert breaks the law, the advertiser can have criminal charges brought against them, but it rarely happens. Pass that liability onto the publisher, and given activist attorneys general, you have exposed publishers to attack when it is politically profitable to do so. In the current political climate in the US, this could be a real problem.
But... Wha... Why?
You have just criminalized TechDirt for using Google, who served them a bad ad... How is that making the world 'better'? Why should TechDirt have to be dragged into a lawsuit because of a problem Google allowed to happen? That seems... unnecessary and doesn't solve the problem you are setting out to... You have just codified 3rd party liability because... reasons? So what if TD gets 'off' after spending $$$ fighting this? Them going through a nuisance suit doesn't help anyone.
Given the massive, massive volume of ads served by advert-serving middlemen, I fail to see how any site could reasonably take responsibility for any bad actors who might sneak their way into the ad-stream.
Clearly, ad-moderation at scale is impossible.
Re: But... Wha... Why?
The service that provides the bad ads would be the one responsible, so Google would be responsible. Clear enough.
This seems like slight of hand. The above implies that there is some special liability shield for those who publish the words of others. In a vacuum, a person (natural or corporate) does not acquire liability for speech that is not theirs. Why should this change simply because you add the words "on a computer", or the modern equivalent, "on the internet"?
Those sure look like criminal offenses to me. And with the internet involved, "wire fraud" statutes get invoked, don't they? Which makes them federal crimes, and immune to section 230. What part of "enforcing existing statutes" does not apply?
Too Many Layers
If there are many layers, and dynamic advertisements, how can the proprietor know in advance what ad will be run, such that the proprietor will have a chance to approve or disapprove?
Similar to how it seems unreasonable for websites to preemptively police the user generated content, I doubt that policing dynamic advertisements would fare any better.
This strikes me as akin to the "nerd harder" approach to encryption back doors. And so I ask the same question:
What does a small website have to do to defend itself against these scenarios?
-- An ad agency provides bad ads to it, carelessly. Anyone who doesn't like the website just watches until a bad ad comes along, then sues it into oblivion.
-- An ad agency provides bad ads to everyone. [Because, of course, they aren't big enough or visible enough to be profitable targets for a litigation/business model.] The Plutocratic president creates a Department of Online Security with power to prosecute all websites with a Demagogue slant that display bad ads. (or, of course, vice versa vice)
-- An ad agency is paid to take a website down. It provides bad ads to it deliberately and surreptitiously (concealing its malice behind something like user-agent checking), then the malicious actor sues the website. [There are a LOT of rich people who like attacking websites who show them for the vicious fools they are!]
Now, how much is this proactive-legal-defense going to cost? And what does that do to the ads?
I think you just destroyed all the small commercial websites in the world, because nobody but an eBay-sized/shaped organization would be able to run ads. (eBay is not only the ad broker but the fulfilment monitor.)
And how, in this new eBay-gets-30%-of-everything world, can a plumber run advertisements on websites displayed in Xburg or Yton?
I'd say you've come up with an approach that is guaranteed to always defend the guilty, empower the malicious, while placing unconscionable burdens on the innocent.
Now, if you could come up with a plan to present the ad brokers with infinite rounds of legal harassment, while protecting websites large and small, I'd stand up and cheer. Because a large variety of WEBSITES large and small is an infinite good, while a large number of ad brokers is not in any way useful: it is nothing but a way of avoiding the liability that rightly belongs to them.
Perhaps use the "rat out your dealer" approach to recreational-toxic-chemical prosecutions would work here. The website could COMPLETELY avoid liability by supplying correct name and address of the ad broker, so long as the broker is subject to the same national jurisdiction. (The ad broker could get the same protection, all the way back to the person who first bought the ad.)
This way, liability could be quickly tracked back to the perp, rather than merely assessed to his victims. And the ad broker would be responsible (like eBay is responsible) for ads that he brokes.
Would this cut down on the number of ad brokers? Very likely. Would THAT simplify the root problem mentioned above ("the ad business is too complicated to actually go after the guilty party")? Probably.
Would this protect Techdirt from the malicious prosecutions of those who hate it?
That's the most important question you could ask. Because Techdirt is the poster child for all the new-model online news sites that we rely on to handle subjects that the big media reporters are too ignorant to report.
but if a website embeds code that serves ads from a third party, it is "publishing" that ad
Much of what Google search (and facebook and reddit and pretty much anyone serving user generated content) does is write code which serves content from a third party. A rather problematic definition of publishing if we are to maintain any part of s230.
Further, we now must consider what happens when a user embeds code that serves a tweet from a third party (aka a "retweet")... they are now "publishing" that tweet. Even just replying to a tweet causes that tweet to be served to new people.. making you liable for its content.
though I tend to think that it would be harder to meet the actual elements of an offense with respect to them.
How? Your explanation is that "if an ad is served by a site, the site is liable for it." The "actual elements" are extremely simple, certainly nothing that would be more difficult to pin on a smaller operator over a larger one.
should not tilt the playing field towards big players any more than it already is, and would not likely lead to an overall decline in ad spending, just a shift in how those dollars are spent (to different sites, and to fewer but more expensive ads)
In other words, it would result in fewer ads primarily directed to larger sites, with smaller sites left scrambling to either roll the dice on liability selling cheaper ads themselves, or accept shifting bargaining power even further toward the largest intermediaries.
Critics of 230 should like it because it addresses many of the problems they've complained about for years
From what I remember (and perhaps I'm wrong, I could just be remembering the portions which I found most interesting) only a minority of said problems were about actually illegal ads... most were about ads that people didn't like or about moderation/sorting decisions independent of ads. And while allowing rich people/groups to bully sites about ads they don't like might reduce pressure on s230... I'm not sure it would be worth it.
I'd also consider what would happen to craigslist, Ebay, facebook marketplace, and the thousands of buy/sell threads on random sites.... considering that substantially everything on them are ads.
Re: But... Wha... Why?
Major difference between user-posted content and ads: the advertiser (or ad network) can't just go in and drop its ads into the site's pages; the site has to enter into an agreement with the advertiser (or network) and modify its pages to display the ads. In the process the site has to evaluate the kinds of ads that will be supplied and decide whether they want to carry them or not and, through the contract, has control over what is supposed to be displayed. If an ad network is known to distribute content the site doesn't want to be liable for, the site only has to decide not to use them and the content won't ever appear. This makes the difference.
The purpose of Section 230 isn't to permit sites to carry any kind of content without liability. It's to permit sites to allow user-posted content without requiring the kind of item-by-item review that would make carrying that content a practical impossibility. Advertising contract terms don't involve such difficulties in reviewing them.
Re: Re: But... Wha... Why?
That would mean that every website, large and small, would need to review each and every ad posted to their sites before allowing the ad to appear. How does that play with the ad networks' contracts? How would that play for a small website like Techdirt, from a labor/expense standpoint? Does one ad that slips through over a weekend create liability for the website?
AS everyone above..
WTF??
What logic are you trying to break?
This is a BACK DOOR..
Just to ask, WHAT makes an advert an advert??
Anyone that wishes to BUY TIME TO ADVERT, is the answer.
HOW many sites have the ability to SORT/SCAN/SELECT which companies CAN advert on the site?? NONE. I dont even see Trojan adverts on porn sites.
I really wonder about the logic SOME people have.
https://www.c-span.org/organization/?48866/Public-Knowledge
3 cspan videos...is this the person??
Re: AS everyone above..
LETS ADD..
If you did this to the TV/SAT/CABLE..
How much TV would be left?
Can you see Channels being liable for adverts??
They Killed that IDEA along time ago. because a Politician would be LIABLE for his comments.
Anotherone misunderstands section 230
As has been written on this site before, without section 230 you're only liable for content/ads you know about. So it would just be an incentive to stop the little ad moderation that exists today.
Re: Re: Re: But... Wha... Why?
Which in theory could be different for every user. With targeted ads, they have to do the approval in real time, which will do wonders for page load times.
Re: Anotherone misunderstands section 230
That assumes that in getting rid of 230, they do not impose liability at the same time, and/or a notice and stay-down for content. Even if they just get rid of section 230, a site can go broke just telling lots of courts that they do not moderate content, along with the discovery entailed in presenting their case.
Re: Re: Re: But... Wha... Why?
If it made it impossible for any legally-targetable site to use ad networks who don't offer compensation for bad ads, google would have to choose between offering that protection or being cast out into the void where the penis pill ads live. In effect it would turn back the clock on internet advertising consolidation to the turn of the century, if they don't vet ads a lot better than they do now.
Clearly you don't understand ...
The web. Content. Ad placement. Much less adtech, the dubious automated solutions that place ads on different websites while blocking some sites via blacklists. Add in the impossibly stupid block lists of words that have fired grapeshot across the news industry. Ooo, COVID, scary.
Nope. You do not understand. But, you want to fix it so it feels right to you.
Just like moderation at scale doesn't compute, personal inspection of ads is impossible at low to medium levels. Because nobody is making enough money off of internet ads to fund a full-time person to vet every single ad.
Checkout https://branded.substack.com/
That might help. Discover how hate sites are finding funding through cheating the ads.txt files.
This is a really bad idea
This will restore the situation pre-230, which was really, really bad. And it will do it for the speech that is the most important to handle correctly.
Before 230 was passed, there was (nearly) no liability if you didn't filter the content in any way. But if you did try to filter the speech, say to filter out things that were harmful to children, suddenly you became liable for the contents. This was called the "moderator's dilemma" and Congress feared that it would result in platforms not moderating at all.
This will restore that horrible situation for advertisements. Platforms that don't look at their advertisements or filter them in any way will have much less liability than platforms that try to filter out harmful advertisements. So platforms will be perversely incentivized not to police their ads.
And this is very important for ads. Ads are speech that people pay to amplify. People who want to spread misinformation like ads because all they need is money and they can reach as many people as they want, precisely targeted to be the ones they can affect the most.
Why create a perverse incentive for platforms not to police their ads at all?
If the thinking is that the law will somehow force the opposite -- that it will create liability for platforms that do not look at their ads at all -- I think that's a non-starter because of the First Amendment. Pre-230 law makes it clear that you can't impose liability on a platform for speech it does not analyze or filter but merely forwards. Ads include things like core political speech and aren't just commercial speech.
I have to respond specifically to one claim in the article:
"This proposal would burden some forms of speech more than others, too, so it’s worth considering First Amendment issues. One benefit of this proposal over subject matter-based proposals is that it is content neutral, applying to a business model. Commercial speech is already subject to greater regulation than other forms of speech, and this is hardly a regulation, just the failure to extend a benefit universally."
This is nonsense. The proposal is about ads, not commercial speech. Lots of commercial speech is not in the form of ads and lots of ads are not commercial speech. This will burden all paid speech, whether commercial or not.
Either this burdens lots of non-commercial speech and violates the First Amendment, or it doesn't burden non-commercial speech and re-creates the moderator's dilemma as platforms are strongly encouraged not to figure out which ads are commercial and which aren't.
A Response to Mike
This is hardly a fleshed out response, but I find the constant argument that we cannot do X, whether it's privacy regulation or tweaks to Section 230 here, because it will assuredly further entrench Facebook and Google to be tired and played out.
It also assumes that their underlying business model -- OBA -- is legitimate, and that we must do what we can to prop up competition that can deliver targeted ads. I think there's mounting evidence that targeted advertising is fundamentally problematic. I'd rather we dramatically increase the costs to Google and Facebook of their business model than prop up equally problematic competition.
Re: Re: But... Wha... Why?
According to this proposal, the site serving the ads (Techdirt) would also be liable. Did you read the article?
Re:
"Sleight of hand", "sleight" meaning deception.
This has nothing to do with targeted ad systems. Insofar as targeted ads work (i.e. not very far) they will tend to DECREASE the number of frivolous suits this scheme will promote.
It's not Google and Facebook that are the problem. It's all the sleazy ad brokers, re-brokers, and re-re-brokers whose business model is sneaking the illegal ads into "legitimate" websites. And those are the ones whose business model needs to be broken.
Unfortunately, this proposal indemnifies the bad actors at the expense of everyone with an honest business (website or advertiser).
Re: A Response to Mike
Targeted adverts are stupid.
I buy one part for my lawn mower, and everywhere I go after that I get MORE adverts for mower parts.
I DON'T NEED ANY MORE PARTS!
Re:
AND??
Back in the 1990s the government spent 5+ years fighting the spammers over exactly this kind of crap in the first place.
About 15 years ago I re-installed Windows on a customer's dial-up system. I set everything up and got the customer's name and password for the connection. Instead of installing antivirus, anti-bot, and lots of other protection first, I thought I'd let Internet Explorer DO ITS JOB. I dialed up and went straight to MSN. The system ALMOST FROZE as the modem sat there and sucked up everything sent to it for 15 minutes. From the front page alone I had 8 viruses, 17 bots, and all sorts of other junk placed on the computer. There was so much crap running in the background the system almost died.
So I re-re-installed Windows, installed protections from a CD along with Firefox, antivirus, anti-bot, and all the rest, then used Firefox to go to MSN. Nothing happened; the main page just popped up. I sent a letter explaining all this to MSN and Microsoft, and a year later ALL the adverts on MSN had stopped.
Now, when a site loaded with TONS of scripts asks me to lower my adblocker, or won't let me in otherwise, I TRY to contact them -- if they even let me hit that button (I should probably just use Whois). I ask whether they would put a label on the front page saying they will be LIABLE for any crap I can prove came from their site, or that they SCAN every third-party advert before I see it and remove all the trackers, viruses, and other things I DO NOT WANT ON MY MACHINE.
I NEVER get a reply. Except here, where someone listens and understands the BS (bad scripting).
Re: A Response to Mike
"This is hardly a fleshed out response, but I find the constant argument that we cannot do X, whether it's privacy regulation or tweaks to Section 230 here, because it will assuredly further entrench Facebook and Google to be tired and played out."
But if it's accurate? I mean, here that would be the obvious and inevitable result, no?
"It also assumes that their underlying business model -- OBA -- is legitimate, and that we must do what we can to prop up competition that can deliver targeted ads."
I made no mention of targeted ads, nor does John's proposal. I've made it clear that I have problems with behavioral advertising, but this proposal is not limited to ads using behavioral targeting. It would apply to any marketplace for ads, even ones based on context or brand...
And thus if your complaint is targeted advertising, this is also not a well... er... targeted solution.
Re: This is a really bad idea
Who here is old enough to have seen a political advert for socialism? I've seen 2-3 in 60 years. ON TV.
What about the old Fair/Equal Time regulations? Truth in advertising?
This HAS to be an idea from the advertising council, for ONE reason: when they create an advert, they price it on the number of people a service can reach (part of the reason they use trackers on the net). With TV/satellite/cable, those services can say -- or claim -- they can get the advert in front of some number of people, and the advert is worth so much per person. Now think of that number on the internet. We're not talking about one station locked into New York and its 3-5 million people; imagine being able to SAY you can hit two thirds of the USA, 200 million people. How much can you demand per person?
Which is a bit stupid, since there's little guarantee anyone actually saw it, unless you're using bots/trackers on everyone's computers to count how many times a person has seen THAT one advert. We know they have backdoored MANY of the adblockers -- they pay to be allowed through -- which is why I use a script blocker as well. And this is one of the few sites I can say is CLEANER than 99.99999% of the rest of the net. I've seen sites with 27+ scripts, where every time I allow one script, MORE pop up.
Another thing I suggest to sites that want to run adverts: create your OWN advertising section and find the companies that want to advertise with you. Show them your numbers, and just run a counter for each person who sees an advert and clicks through to the advertiser's site. It's not that hard. Most companies already have a lot of the materials for adverts; and if they paid another company to make them, that means YOU get to make your own for that company and charge another fee.
But most of the sites never reply.
Re: Re: A Response to Mike
Fair point that John doesn't say behavioral advertising, but to my understanding the entire online advertising ecosystem is premised on automating "targeting" and "reach." The opacity of the entire stack invites mischief, so for purposes of John's proposal I don't think it matters what we call the functionality being provided by Google/Facebook and the major ad networks and exchanges.