AT&T, Verizon Feign Ethical Outrage, Pile On Google's 'Extremist' Ad Woes
from the manufactured-outrage dept
So you may have noticed that Google has been caught up in a bit of a stink in the UK over the company's YouTube ads being presented near "extremist" content. The fracas began after a report by the Times pointed out that advertisements for a rotating crop of brands were appearing next to videos uploaded to YouTube by a variety of hateful extremists. It didn't take long for the UK government -- and a number of companies including McDonald's, BBC, Channel 4, and Lloyd's -- to engage in some extended pearl-clutching, proclaiming they'd be suspending their ad buys until Google resolved the issue.
Of course, much like the conversation surrounding "fake news," most of the news coverage was bizarrely superficial, frequently teetering toward the naive. Most outlets were quick to malign Google for purposely letting extremist content get posted, ignoring the fact that the sheer volume of video uploaded to YouTube on a daily basis makes hateful-idiot policing a Sisyphean task. Most of the reports also severely understated the complexity of modern internet advertising, where real-time bidding and programmatic placement mean companies may not always know where, or when, their brand ads will show up.
Regardless, Google wound up issuing a mea culpa stating they'd try to do a better job at keeping ads for the McRib sandwich far away from hateful idiocy:
"We know advertisers don't want their ads next to content that doesn’t align with their values. So starting today, we’re taking a tougher stance on hateful, offensive and derogatory content. This includes removing ads more effectively from content that is attacking or harassing people based on their race, religion, gender or similar categories. This change will enable us to take action, where appropriate, on a larger set of ads and sites."
As we've noted countless times, policing hate speech is a complicated subject, where the well-intentioned often stumble down the rabbit hole into hysteria and overreach. Amusingly though, AT&T and Verizon -- two U.S. brands not exactly synonymous with ethical behavior -- were quick to take advantage of the situation, issuing statements that they too were simply outraged -- and would be pulling their advertising from some Google properties post haste. This resulted in a large number of websites regurgitating said outrage with a decidedly straight face:
"We are deeply concerned that our ads may have appeared alongside YouTube content promoting terrorism and hate," an AT&T spokesperson told Business Insider in a written statement. "Until Google can ensure this won’t happen again, we are removing our ads from Google’s non-search platforms."
"Once we were notified that our ads were appearing on non-sanctioned websites, we took immediate action to suspend this type of ad placement and launched an investigation," a Verizon spokesperson told Business Insider. "We are working with all of our digital advertising partners to understand the weak links so we can prevent this from happening in the future."
Of course, if you know the history of either company, you should find this pearl-clutching a little entertaining. In just the last few years, AT&T has been busted for turning a blind eye to drug dealer directory assistance scams, ripping off programs for the hearing impaired, defrauding government programs designed to shore up low-income connectivity, and actively helping "crammers" by making scam charges on consumer bills harder to detect. Verizon, recently busted for covertly spying on internet users and defrauding cities via bogus broadband deployment promises, isn't a whole lot better.
That's not to say that all of the companies involved in the Google fracas are engaged in superficial posturing for competitive advantage. Nor is it to say that Google can't do more to police the global hatred brigades. But as somebody who has spent twenty years writing about these two companies specifically, the idea that either gives much of a shit about their ads showing up next to hateful ignoramuses is laughable. And it was bizarre to see an ocean of news outlets just skip over the fact that both companies are pushing hard into advertising themselves with completed or looming acquisitions of Time Warner, AOL and Yahoo.
Again, policing hateful idiocy is absolutely important. But overreach historically doesn't serve anybody. And neither does pretentious face-fanning by companies looking to turn the resulting hysteria to their competitive advantage.
Filed Under: ads, extremism, moral panic, pearl clutching, search, video
Companies: at&t, google, verizon, youtube
Reader Comments
read: platforms we are not in open competition with.
https://xkcd.com/1425/
"We want you to flag every video that has our intellectual property in it."
Google: "Sure, give us a few hours."
"Now we want you to flag every video that is evil."
McRib
Regardless, Google wound up issuing a mea culpa stating they'd try to do a better job at keeping ads for the McRib sandwich far away from hateful idiocy:
I am sure that McD is quite happy to sell (halal) burgers to jihadis - just like Lenin's capitalist who would source his own noose.
Of course they are presumably bothered about being labelled "Islamophobic" for advertising the McRib (a pork product, in case anyone hadn't noticed) to Muslims.
Re: McRib
Pushing cloven-hoofed animal meat in people's faces is as hate-filled as the usual profit-over-prophet attitude.
As a religious Green, I find the meat industry in this country disrespectful and dangerous to our collective future.
'It is difficult to get a man to understand something, when his salary depends on his not understanding it.' -Upton Sinclair
Missing the problem
Don't listen to May and her crowd. Listen to ex-Muslims and non-Muslims who live in Muslim-majority countries - they know the truth.
I keep thinking about that silly line from "Demolition Man": "But all restaurants are Taco Bell; they won the franchise wars." Yeah, it becomes less funny every year.
Twitter is notorious for enforcing this double standard, with Facebook not far behind, and now there is increasing pressure on YouTube to enact similar left-leaning censorship.
The DMCA has been the most potent weapon for censoring YouTube content. Right-wing polemicists (YouTube has an awful lot of them) have learned to be very careful about using short video clips as "fair use" discussion material, as YouTube will reflexively remove such videos instantly (and slap the uploader with a copyright "strike") upon receiving a DMCA claim, putting the burden on the uploader to embark on the long, slow process of getting the video restored by arguing fair use.
If they cannot, perhaps their business model is broken.
Everything is easy and cheap when you don't have to do it
Along those lines, the recording and movie industries also make billions in profits, which means they too can certainly afford to 'hire a room full of people' to review DMCA claims before sending them out to make sure that they don't flag something erroneously.
If they can't manage that, then perhaps their business models are broken.
Tell you what: if you think that it's really that easy to review (superficially or otherwise) at least 400 hours' worth of video uploaded per minute, why don't you give Google a call? I'm sure they'd love to offer you the job.
Re: Everything is easy and cheap when you don't have to do it
Second, run the video past the Google voice speech-capture system. Scan the results for keywords and kick out any video that fails for manual review.
Third, check the first 30 seconds and the description text. Timeline-jump to a few places to spot-check.
That alone would go a long way toward spotting trouble before videos get posted.
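For what it's worth, a minimal sketch of what that keyword-triage step might look like, assuming a speech-to-text backend exists to plug in. The transcribe() function and the keyword set below are hypothetical placeholders for illustration, not anything Google actually exposes:

    # Rough sketch of the keyword-triage idea described above.
    # transcribe() is a hypothetical stand-in for a speech-to-text
    # service; the keyword set is an illustrative placeholder, not
    # a real watch list.
    FLAG_KEYWORDS = {"keyword_one", "keyword_two"}  # hypothetical

    def transcribe(video_path: str) -> str:
        """Hypothetical wrapper around a speech-to-text backend."""
        raise NotImplementedError("plug in a real speech-to-text service")

    def needs_manual_review(video_path: str, description: str) -> bool:
        """Kick a video out for human review if its transcript or its
        description text contains any watch-listed keyword."""
        text = (transcribe(video_path) + " " + description).lower()
        return any(keyword in text for keyword in FLAG_KEYWORDS)

Even a sketch like this only narrows the pile; everything it flags still needs a human, which is where the scale argument below comes in.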
Re: Re: Everything is easy and cheap when you don't have to do it
Once again: 400+ hours worth of video per minute.
What may be 'simple' for a few videos is anything but when you scale it up to that level, so you're not talking about 'a room full of people' but a massive system requiring various levels of review of enormous amounts of content.
There's also the problem of false positives, something that already plagues ContentID, a black or white 'Does X match Y?' system. Make the question a subjective one, 'Does X count as 'extremist' content?' and things would be even more insane.
Re: Re: Re: Re: Everything is easy and cheap when you don't have to do it
Only if what's being scaled up is part of the business model being used, and not something they're being slapped with after the fact.
Were Google/YT in the 'pre-screening video content' business, then yes, they would be to blame if they set things up such that they couldn't handle the increased load. But since they're not, it's not a 'business model problem' at all. YouTube hosts videos; that's its business model. Saying they should have to pre-screen everything first isn't a matter of scaling up something they've always had to do, it's adding something new on top of what they already do, something the scale of the problem would make insanely expensive and that would bring the service to a crawling halt if it were required, contrary to your claims otherwise.
On a semi-related tangent, your mention of how Google is big so it's not a problem has me again wondering: do you hold others to that same standard? Do you think that the movie and recording industries should likewise hire 'a room full of people' to personally vet every DMCA claim they send out to avoid false positives? They make billions too, after all; surely it would be just as easy, if not easier, for them to pre-screen DMCA claims as it would be for Google/YT to pre-screen videos. So does that standard of yours apply to everyone, or just Google?
Re: Re: Re: Re: Re: Everything is easy and cheap when you don't have to do it
The entertainment industry doesn't make money from DMCA notices.
The difference here is huge.
Re: Re: Re: Re: Re: Re: Everything is easy and cheap when you don't have to do it
As for the practicality of pre-screening: that would require at least 24,000 people actively screening content at any given moment, so keeping that up 24/7 would require at least 100,000 people (allowing for holidays, sickness, meal breaks etc., along with the necessary managers and HR personnel). Then you run into the problem that those people do not know every existing work, who owns the copyright, what licenses have been granted, or whether the poster works for the company they claim and has the authority to post the work.
The only people who can reliably identify a work as belonging to them or their organization are the producers (not necessarily the creators) of the work. And only they know, or have access to, the information needed to determine whether or not it has been licensed to the poster.
There is no magic crystal ball that will identify infringing works. Indeed, because there is no database of all works, there is no way of identifying the copyright holders of any particular work, or of verifying that a claimant is actually the copyright holder or licensed to use the work.
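To make that arithmetic concrete, a quick back-of-the-envelope sketch. The 400-hours-per-minute upload rate comes from earlier in the thread; the overhead multiplier for shifts, holidays, sickness and management is an assumption, chosen only to show how 24,000 concurrent viewers balloons toward the 100,000 figure:

    # Back-of-the-envelope staffing estimate for real-time pre-screening.
    # The upload rate is from the thread above; the overhead multiplier
    # (24/7 shifts, holidays, sickness, managers) is an assumption.
    UPLOAD_HOURS_PER_MINUTE = 400
    OVERHEAD_MULTIPLIER = 4.2  # assumed, not an official figure

    # 400 hours arrive every minute, i.e. 400 * 60 = 24,000 minutes of
    # video per minute, so real-time screening needs 24,000 simultaneous
    # viewers before anyone takes a break or goes home.
    concurrent_reviewers = UPLOAD_HOURS_PER_MINUTE * 60
    total_headcount = round(concurrent_reviewers * OVERHEAD_MULTIPLIER)

    print(f"Concurrent reviewers: {concurrent_reviewers:,}")  # 24,000
    print(f"Total staff needed:   {total_headcount:,}")       # 100,800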
Re: Re: Re: Re: Re: Re: Re: Everything is easy and cheap when you don't have to do it
You are assuming that every minute of every video would have to be pre-screened. That is stupid. Nobody needs to watch a whole cat video to know it's a cat video.
Nobody would have to watch the videos in real time, either. Even with your example, run the videos at double speed and the number of people needed drops by half. Only watch 10% of the video time at double speed, and suddenly the need is down to 5% of the people you suggested. So 5% of 100,000 people would be... 5,000 workers. Suddenly it's falling into the realm of the possible. Apply a little automation to filter out the half of the videos that aren't harmful at all, and boom, you need 2,500 people. Getting easier, isn't it?
It's easy to blow this off as impossible. It's not. It's pretty simple stuff. The anonymous coward has it closer to right than anyone would like.
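In the same back-of-the-envelope spirit, here is that rebuttal's arithmetic step by step; every scaling factor below is the commenter's assumption rather than measured data:

    # The rebuttal's arithmetic, step by step. All factors are the
    # commenter's assumptions, not measurements.
    base_staff = 100_000    # full real-time estimate from the parent comment

    staff = base_staff / 2  # watch at double speed     -> 50,000
    staff *= 0.10           # spot-check 10% of runtime -> 5,000
    staff *= 0.5            # automation clears half    -> 2,500

    print(f"Estimated reviewers: {int(staff):,}")  # 2,500

Whether sampling 10% of a video at double speed would actually catch 'extremist' content is, of course, the part the arithmetic takes on faith.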
Re: Re: Re: Re: Re: Re: Everything is easy and cheap when you don't have to do it
Hosting user-submitted videos, not posting them. The distinction is significant, as it means that, thanks to Section 230/common-sense protections in the US at least, they aren't held responsible for what their users post, and as such have no requirement to pre-screen for CYOA reasons. With no requirement to pre-screen, more videos isn't a scaling problem getting out of control, because it was never a problem in the first place.
(Imagine, if you will, a donation-based library, where all the books are donated by others. Their 'business model' is making sure that they have enough shelves to hold all the books and making sure that people can find what they want. So long as they can manage those two things, no matter how many books are donated or how fast, they're doing fine and their 'business model' is sound. Now imagine someone comes in and demands that they check every book for 'offensive' content before people can check it out. Now how much is donated is a problem, but that problem has nothing to do with their 'business model', and everything to do with the new requirement that's been dumped in their laps.)
The entertainment industry does make money from the content they're filing DMCA claims for (assuming a valid notice, anyway), and unlike user-submitted content that YT makes money from hosting, the DMCA contains an (effectively theoretical at this point) requirement to swear 'under penalty of perjury', which would require manual review.
As I noted above, a DMCA claim is also easier to check, as the only subjective part involved is a consideration of fair use, which has a quick and easy 'checklist' attached, quite unlike the subjective 'is this offensive/extremist?', which, barring extreme cases (and sometimes not even then), can be much harder to decide on. Ask enough people and anything can be seen as 'offensive', so the question becomes 'how many people can we safely offend according to the requirements?'
There's also the difference in consequence: miss a 'guilty' copyright infringement case and the harm isn't likely to be very bad, whereas if a site is liable for user-submitted content and it lets an 'offensive/extremist' post through, it's likely to face a serious penalty, which means it's much more likely to block even fringe stuff 'just in case', leading to large amounts of content and/or speech being blocked.
In both cases a faulty claim means legitimate/legal content and/or speech being removed, and while services like YT don't have a requirement to screen content, those sending out copyright claims (theoretically) do, so why is it you think that only the former group should be required to pre-screen?
Re: Re: Re: Re: Re: Everything is easy and cheap when you don't have to do it
Google has incredibly powerful tools for indexing content online and extracting semantic information. You don't think they could apply this to videos and their comments to determine which videos need review?
I also don't think they should delete videos; it would be good enough to flag them for adults and remove ads from them. Deleting a video should be saved for the most egregious situations.
That would go a very long way towards resolving the issues at hand.
Re: Re: Re: Re: Re: Re: Everything is easy and cheap when you don't have to do it
Even having to pre-screen only 'problematic' videos would be a huge problem, due to how many they'd have to deal with and the massive number of false positives they'd be wading through.
ContentID, something based on a black-or-white 'Does it contain content X or doesn't it?' test, already has problems aplenty, flagging things for reasons ranging from absurd to downright broken. Now imagine a similar system for 'offensive' content and the nightmare that would be.
If Google wants to manually review videos flagged by users as 'offensive/extremist', which I believe they already do, then I've no problem with that. What I have a problem with is requiring them to do so ahead of time, as it would be insanely expensive, cause significant collateral damage, and make the service vastly less useful as a hosting platform (all of which would be bad enough for a huge company like them, but would be even worse for smaller services trying to break into the market that don't have the resources YT/Google does).
Re: Re: Re: Re: Re: Re: Re: Re: Everything is easy and cheap when you don't have to do it
Uh, no. That was the entire point of codifying the idea of safe harbors into law: that sites can take steps to moderate content without suddenly becoming liable for content posted by users. Were what you're saying true, then YouTube would have been better off not implementing ContentID and ignoring DMCA claims, and I rather doubt that's what you meant to imply.
'Voluntarily' implementing a (lousy) filter for one type of content and complying with the law does not magically change their status such that they are responsible for what's posted by others using their service, whether that content be copyright related or otherwise.
Re: Re: Everything is easy and cheap when you don't have to do it
Not to mention that meanings aren't always clear just from the words used. Under the type of filtering logic you want, we'd see British content flagged as hardcore gay child pornography because someone asked if they could "bum a couple of fags for the boys".
Re:
YouTube's business model is not broken; it's just that they are not gatekeepers, but rather facilitators that allow anybody to publish without seeking any form of permission or review.