No, Tech Companies Can't Easily Create A 'ContentID' For Harassment, And It Would Be A Disaster If They Did
from the not-how-it-works dept
Every so often we see a sort of "tech magical thinking" when it comes to solving big challenges -- in which people insist that "those smart people in Silicon Valley can fix [big social problem X] if they just decided to do so." This sort of thinking is wrong on multiple levels, and often is based on the false suggestion that tech innovators "don't care enough" about certain problems, rather than recognizing that, perhaps, there aren't any easy solutions. A perfect example of this is a recent column from Jessica Valenti, over at the Guardian, that claims that tech companies could "end online harassment" and that they could do it "tomorrow" if they just had the will to do so. How? Well, Valenti claims, by just making a "ContentID for harassment":

If Twitter, Facebook or Google wanted to stop their users from receiving online harassment, they could do it tomorrow.

See? Just like that. Snap your fingers and boom, harassment goes away. Except, no, it doesn't. Sarah Jeong has put together a fantastic response to Valenti's magical tech thinking, pointing out that ContentID doesn't work well and that harassment is different anyway. As she notes, the only reason ContentID "works" at all (and we use the term "works" loosely) is because it's a pure fingerprinting algorithm, matching content against a database of claimed copyright-covered material. That's very different than sorting out "harassment" which involves a series of subjective determinations.
When money is on the line, internet companies somehow magically find ways to remove content and block repeat offenders. For instance, YouTube already runs a sophisticated Content ID program dedicated to scanning uploaded videos for copyrighted material and taking them down quickly – just try to bootleg music videos or watch unofficial versions of Daily Show clips and see how quickly they get taken down. But one look at the comments under any video and it's clear there's no real screening system for even the most abusive language.
If these companies are so willing to protect intellectual property, why not protect the people using your services?
Furthermore, Jeong goes into great detail about how ContentID isn't even particularly good on the copyright front, as we've highlighted for years. It creates both Type I and Type II errors: pulling down plenty of content that isn't infringing, and still letting through plenty of content that is. Add in the even more difficult task of determining "harassment," which is much less identifiable than probable copyright infringement, and you would undoubtedly increase both types of errors to a hilarious degree -- likely shutting down many perfectly legitimate conversations, while doing little to stop actual harassment.
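To make that gap concrete, here's a rough, hypothetical sketch -- ours, not anything YouTube or anyone else actually runs, with every name, list and threshold invented for illustration. Matching a fingerprint against a database of claimed works is an objective lookup; deciding whether a sentence is "harassment" is a judgment call, and that's exactly where both error types pile up:

import hashlib

# --- Copyright matching (roughly the shape of a fingerprinting system) ---
# A toy stand-in for ContentID: hash fixed-size chunks of an upload and look
# for matches against a database of claimed works. Real systems use robust
# perceptual fingerprints, but the task is still an objective lookup.
CLAIMED_FINGERPRINTS = set()  # hypothetical database built from rightsholder claims

def fingerprint(chunk: bytes) -> str:
    return hashlib.sha256(chunk).hexdigest()

def matches_claimed_work(upload: bytes, chunk_size: int = 4096) -> bool:
    chunks = (upload[i:i + chunk_size] for i in range(0, len(upload), chunk_size))
    return any(fingerprint(c) in CLAIMED_FINGERPRINTS for c in chunks)

# --- "Harassment matching" (what the column imagines) ---
# There is no reference database of harassment. A filter has to guess at
# intent and context, so even a generous keyword list misfires both ways.
ABUSIVE_TERMS = {"kill yourself", "worthless"}  # hypothetical blocklist

def looks_harassing(message: str) -> bool:
    text = message.lower()
    return any(term in text for term in ABUSIVE_TERMS)

# Type I error: a victim quoting abuse in order to report it gets flagged.
print(looks_harassing("He told me to 'kill yourself' -- please ban him"))  # True
# Type II error: an actual threat with no blocklisted words sails through.
print(looks_harassing("I know where you live. Sleep well."))               # False

No amount of tuning that blocklist turns a subjective judgment into a database lookup, which is the whole point.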
None of this is to suggest that harassment online isn't a serious problem. It is. And it's also possible that some enterprising folks may figure out some interesting, unique and compelling ways of dealing with it, sometimes via technological assistance. But this sort of "magic bullet" thinking is as dangerous as it is ridiculous -- because it often leads to reframing the debate, sometimes to the point of shifting the actual liability of the issue from those actually responsible (whether copyright infringers or harassers) to intermediaries who are providing a platform for communication.

The more aggressive the tool, the greater the chance it will filter out communications that aren't harassing -- particularly, communications one wishes to receive. You can see this in the false positives flagged by systems like Content ID. For example, there's the time that Content ID took down a video with birds chirping in the background, because it matched an avant-garde song that also had some birds chirping in the background. Or the time NASA's official clips of a Mars landing got taken down by a news agency. Or the time a livestream was cut off because people began singing "Happy Birthday." Or when a live airing on UStream of the Hugo Awards was interrupted mid-broadcast as the awards ceremony aired clips from Doctor Who and other shows nominated for Hugo Awards.
In the latter case, UStream used something similar but not quite the same as Content ID—one in which blind algorithms automatically censored copyrighted content without the more sophisticated appeals process that YouTube has in place. Robots are not smart; they cannot sense context and meaning. Yet YouTube’s appeals system wouldn’t translate well to anti-harassment tools. What good is a system where you must report each and every instance of harassment and then follow through in a back-and-forth appeals system?
The idea that tech companies "don't care enough" about harassment (or, for that matter, infringement) to do the "simple things" to stop it is an argument of ignorance. If there were some magical silver bullet to make online communications platforms more welcoming and accommodating to all, that would be a huge selling point, and one that many would immediately embrace. But the reality is that some social challenges are problems that can't just be solved with a dollop of javascript, and pretending otherwise is a dangerous distraction that only leads to misplaced attacks, without taking on the underlying problems.
Filed Under: abuse, algorithms, contentid, copyright, filtering, free speech, harassment, jessica valenti, sarah jeong
Reader Comments
Word filtering always works...
The downside of Asimov's law
This is a problem I've been noticing since I laid my hands on a computer for the first time, back in the stone age. People generally have no real idea how computers or software work, and just lump it all into the mental category of "magic".
Once you think of it as magic, whether consciously or not, it is but a tiny mental step to thinking that it can easily accomplish anything at all. Once you're there, then the only reason that there isn't a solution for a given problem is lack of will.
This is an old problem, actually, and you see it a lot in the field of medicine.
Who decides what is abusive language?
People are opinionated, people are passionate about their opinions, arguments ensue.
If someone directly attacks someone else (not their ideas, but the person) then that would be considered abuse.
If someone says that someone else's idea is a steaming pile of dog-raping shit, well, that's another person's opinion applied to another person's ideas. That would not qualify as abuse.
A direct threat, where someone says they are going to do x, y and z to someone - well, guess what, that's an online threat, and a federal crime - go after that person under existing laws.
Belittling someone because of their ideas or personal choices, even when taken to extremes, isn't really abuse.
-- "You thought a, b and c, you're an idiot and should never have been born" does not qualify as abuse; it's rude and obnoxious, but not abuse.
-- "You smoke or drink and are stupid and should die to save my insurance company money later on" does not qualify as abuse, as it's about someone's decision to do something that may or may not be stupid and that would probably end up costing the rest of us more money down the road because of their stupidity.
Belittling someone because of what they are is abuse.
-- "Haha, you're a girl and are weak and should never have been born" qualifies as abuse, because it attacks what someone had no control over: how they were born, where they were born, gender, race, etc...
-- "You suck at playing this game and should die," while rude and offensive, is also not abuse, because, well, maybe the person really does suck at the game, and people have been telling other people they should die since people first learned to communicate. It doesn't literally mean they should die, but that they should "go away" from the current location so that others don't have to acknowledge them.
Just remember two simple rules.
Don't threaten someone with violence directly.
Don't make fun of someone because of things outside of their control - ie, how they were born.
Everything else is pretty much fair game.
Re: The downside of Asimov's law
"I don't know any of that computer stuff."
How often have you heard that before?
Or people who have an issue with a particular piece of software they downloaded from somewhere and expect you to magically fix it, as if all software works the same, even if you've never seen it before yourself.
These people may as well be saying "Online harassment could end tomorrow if the computer wizards just crafted a new spell to stop it."
It quite honestly has the same meaning to them.
As long as humanity is not yet a single slave hivemind, there will always be harassment.
Turn off the wifi.
Or just ignore them... People have been calling each other names online for years; why are these "special snowflakes" so offended? They could just ignore them like everyone else.
Not to mention that these days some people try to divert attention from their corrupt dealings by pretending to be harassed.
Simplicity itself
Humans Suck...
Sender's intent and reader's reaction (see, for example, Roca Labs! lol).
These often don't match; this is how too many of us get into needless stupid fights on forums and lists.
I think the first part requires an identification of who is entitled to what level of protection. For example, in my sole and arbitrary opinion, I think anyone should be entitled to freedom from death threats. Again, in my sole and arbitrary opinion, a minor should be entitled to protection from things that might reasonably cause serious harm, but public officials (especially those in Peoria) should be entitled to no such protection.
As noted, that can't be an automatic process...my reaction to being told "I suck" (presumably harassing content) depends hugely on the context. Say it here on Techdirt, you can have a fig, and I might never even know! Say it at work, it has a lot more effect, and, whoops, how the heck is a big content filter gonna see it anyway? And what if it is exactly what I need to send on to HR to prove that the sender is a prick and should be fired? Same message, completely different reactions!
So you have to start the process on the receiving end, when the target sees the message and objects. [Not that some messages aren't inherently offensive and removable by moderation, but then what about the conversation amongst the multiple moderators about said offensive message where they pass around a copy to make a decision?]
Now, I'm quite sure that if I complained to Techdirt about that "you suck" message, Techdirt would take appropriate action, probably with more speech appending the offensive message. And maybe I need an agent, so if bad stuff started showing up in my e-mail box, someone can be hauled into court (suppose it's Prenda harassing me!). Likewise for stalking. But there has to be clear and unmistakable communication to the harasser before there can be any real liability or cause for court action.
Just my humble opinion, feel free to tear some holes!
Re: Simplicity itself
Give a fake name, a fake address and fake phone number, all belonging to someone else (yup, identity theft), and have fun posting, ranting, abusing and get that person arrested.
Use a fake e-mail address that "soundexes" like their real email address, spoof your IP to match theirs, and you can have even more fun.
No, your idea, while being an honest attempt at a solution, is impractical and easily open to attack.
Re: Who decides what is abusive language?
I would agree with that to some degree. This thing will lead to an even dumber generation of kids who will not be allowed to voice their opinions.
Obey!
Re: Re: Simplicity itself
Here is an honest attempt: people say mean things; confront them on it, whether it be on the tubes or IRL. If they threaten you IRL, kick their door in and disabuse them of the idea.
DO NOT FEED TROLLS. People that are near to these situations should stop thinking that "the consequences will never be the same" and say something to people behaving abusively. As for Content ID, it's not fixable; it will always give the power to people that want to abuse it. Discard automated DMCA notices as the spam that they are.
Re: Re: The downside of Asimov's law
Interestingly, I've actually long argued that software engineering is literally magic. A reasonable definition of "magic" is "the alteration of reality through the manipulation of symbols". Programming is absolutely that. So are most of the creative arts, many forms of mathematics, etc. But I would never make that argument to a layman -- I don't want to encourage people to think that computers are just like Gandalf's staff.
Re: Re: Who decides what is abusive language?
Politically Correct - these 2 words enable assholes of all types to stomp on the majority and make their minority issue step on inviolable rights granted by the Constitution.
Politically Correct equals Weapon of Constitutional Destruction.
Re: The downside of Asimov's law
And when you think about it, they're not too far off. I mean, seriously: as a professional software developer, my vocation is to produce highly abstract formulae in an arcane language, ordered around priorities and concepts that are meaningless and counterintuitive to those not initiated in the Art, that, when invoked, alter reality according to the wishes of the person using the formula.
How am I not a mage? ;)
I'm still waiting for replies to appeals I made 5-6 weeks ago about videos that Twitch's "magic" content system muted incorrectly. Why were they muted? Why, for having the audacity to play Nine Inch Nails' album "The Slip" on a stream, of course, while living up to every piece of the CC license. Perfectly legal. Muted nonetheless. There's no way to clear music ahead of time, and there doesn't seem to be any way of clearing the matter up after the filter's gone off. Nowhere is there a human involved to check if it's a false positive or a license, even after an appeal is filed.
Sure! Let's trust the magic. It'll be so easy for a machine to tell the difference between a joke and a threat. Hell, even judges can't tell the difference between terrorists and aspiring rappers - and this Valenti character is telling me a fucking text filter is going to do a better job and fix everything? And that it is easy and can be done at the drop of a hat?
Back to school with you, Valenti. At the very least upload a few videos and see how many of them end up deleted or muted for no reason at all. Then again, I doubt she would learn anything.
Re:
Just to be clear: Online is in fact a part of the real world.
Re:
What is it about online interaction that makes people think it's somehow distinct from that which is "real"?
I have friends who met online, fell in love, proposed, and then met "for real" for the first time. They're now happily and successfully married and have been for several years. It doesn't get much more real than that!
Valenti, eh?
Re: Re: The downside of Asimov's law
The unfortunate thing about that is that a lot of software does work the same. Computers generally have certain sets of functions you wish to do with them, so operating systems do many of the same things; it's just a matter of figuring out how they implemented the function. Likewise, programs intended for a specific purpose are going to need certain functions to accomplish that purpose, and it's a matter of determining how they implemented those functions.
Then people expect you to be able to fix functions that aren't shared, or aren't implemented, because you can figure out the shared ones fairly easily.
Magic tech is everywhere
Re: Re: The downside of Asimov's law
Her reaction was "is that legal??"
Re: Re: Re: The downside of Asimov's law
But I agree, it is pretty magical. Even our general acceptance of how "magic" works in the fantasy setting follows a basic set of rules like type and execution.
Magic tomes are computers and the spells on the pages within are just software. You execute the spell using either the stored magic (battery) or from an external source (AC).
Re: Re: Re: Re: The downside of Asimov's law
The parallel does go way back in the industry, too, in the slang that is used. For example, if you fry a chip, the smoke it produces is called "magic smoke" on the theory that it must have been the smoke that made the thing work, since once it's released the thing stops working.
TURN OFF comments.
Problem solved.
Either that or moderate comments before allowing them to be posted. If one is harassing, mark it as spam and move on.
People have a choice and it's up to them to make it.
Seriously, this person has at best a tangential connection to reality and the only thing she is good at is pushing narratives that don't hold up to even superficial scrutiny.
Re: Re: Magic tech is everywhere
Have you seen a dirty bomb going off in the US? No, you must hate America.
Bingo, problem solved.
So you can decide whether or not to listen to anything such a sexist hateful bitch has to say about harassment and hate.
Re:
Any question or criticism, regardless of validity, is automatically declared harassment, persecution, oppression, misogyny, etc. We all know the buzzwords.
Like old time religion burning heretics at the stake. Same mentality, different century.
Things that are on topic
So, like real life: I have been actually assaulted more times than I can count, I've been followed and stalked, I have had people pull weapons on me and try to kill me on at least 3 occasions, and you want to destroy the internet because you feel bad?
Sorry, I just RTF'd the linked article, and it is pathetic; weak people that have never had to interact with real people in real life should not be taken seriously. Also, what I said about Content ID above is still true.
Re:
A long time ago in a Web community far, far away, one of my fellow forumers called me "the guy with the biggest balls out of all of us" for using my real name rather than a handle. I was a bit confused by that, and to be honest I still am. I just sort of figure it's basic sociability.
Re: Re:
Seems many forget this. Especially lawyers. Harassment is harassment, no new law needed due to it being online.
She's not totally wrong about one thing...
Copyright infringement frightens these firms enough that they are willing to create seriously ineffective means of stopping it that frequently target innocent bystanders, because attempting to solve the problem is, in their calculus, worth the number of problems their shitty systems cause.
And, by the same token, they could make another shitty semi-effective system to target online harassment, if they decided that the system's benefits were worth the drawbacks.
In other words, the current calculus is:
Policing Copyright Infringement Badly: Worth the inconvenience to users
Policing Harassment Badly: Not worth the inconvenience to users.
Complicating things is the fact that automated harassment detection would likely be even worse than ContentID, which probably changes the cost-benefit analysis.
But at the end of the day, all of these platforms could create a lumbering, semi-effective censor that eliminates lots of non-harassing speech and lets lots of harassment through. They've already done a bad job solving one problem, so there's no reason they couldn't do a worse job solving a similar problem.
The fact that she just takes it for granted that that would be a good idea is slightly alarming, though.
Re:
What choice am I making if a moderator misunderstands my post?
Re:
Oh really? Funny that, seeing as how the whole "GamerGate" thing ostensibly started because gamers felt harassed by an article criticizing the continuing use of "gamer" as an identity, and the questionable culture that identity perpetuates.
I say "ostensibly" because we all know that it really started with the actual, documented, screen-capped harassment of a female game developer who allegedly, but not really it turns out, slept with a writer who then later went on to not review her game.
'Collusion at its finest.' [/sarc]
I've long wondered how long it would be before GamerGate madness started spilling into TD. Or have I just not been paying attention to the comments?
Re: She's not totally wrong about one thing...
I think the point you're missing is that it isn't currently feasible to make a ContentID-like system work any better than Google has done. As bad as it is, I'm actually very impressed that it works as well as it does.
It has nothing to do with anyone's priorities and everything to do with the fact that this is a Very Hard Problem ("very hard" meaning extraordinarily difficult).
Also what it's doing is several orders of magnitude easier than trying to detect "harassing" comments. That task is a highly nontrivial one (meaning that it approaches impossible).
It's Knight Rider's fault
I recall attending the biggest conference in Vegas at the time, COMDEX, and doing demos at our booth, showing how it could be used with popular PC programs like Lotus 1-2-3. Imagine, if you will, a guy wearing a headset talking to a luggable computer running Lotus saying, "column width"..."1"..."2"..."enter"..."up"..."left"..."equals"..."equals"..."EQUALS!"..."sum"..."b"..."5"..."thru"..."thru"..."THRU!"..."f"..."5"... ok, you get the gist. To lots of the technologists this was amazing. They loved it and were willing to part with $495 to get their gadget fix on. However, the non-technologists would always ask, "what's so amazing about that? There's a car on TV that can talk."...damn you Knight Rider! ;)
I worked for a company called Impermium, acquired by Google, that focused exclusively on social media spam and offensive language around comments and other public social media spaces. I now work at a real-time streaming social media management company that offers a commenting platform and has developed its own spam and offensive language detection systems. It's striking to see the lengths that people are willing to go to in order to avoid detection while spewing their negativity (or commercial messages), and one quickly realizes that the number of different ways people find makes the problem of managing this nearly intractable. Combine this with the number of innocent interactions that can be misinterpreted outside of the context of the participants, and one begins to appreciate the complexity of this problem, which ContentID can only scratch the surface of the surface of.
Fortunately, Sarah Jeong did a great takedown of ContentID, but what Ms. Valenti clearly doesn't understand is that there are lots of people and companies actually trying to solve this immensely complex problem. The flip side of the tech ignorant is that they're also tech idealists, and nothing exemplifies this more than how she set up her piece. There are cars on TV that can understand us, therefore making a computer program understand is easy. There are technologies for picking off exact duplicates of TV shows, movies and music, so picking off duplicates of contextual references should be easy...oy!
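That evasion arms race is easy to demonstrate with a toy example. The sketch below is hypothetical -- it is not Impermium's system or anyone else's, and the blocklist, leetspeak map and normalization steps are all invented for illustration -- but it shows how each counter-measure both catches one trick and invites the next, while sweeping in more innocent text:

import re
import unicodedata

BLOCKLIST = {"idiot"}  # hypothetical one-word blocklist

LEET_MAP = str.maketrans({"1": "i", "!": "i", "0": "o", "3": "e", "@": "a", "$": "s"})

def naive_filter(text: str) -> bool:
    # The simplest possible "offensive language" check: substring lookup.
    return any(word in text.lower() for word in BLOCKLIST)

def normalized_filter(text: str) -> bool:
    # Each step exists only because someone found a way around the previous
    # one: zero-width characters, lookalike digits, spaced-out letters,
    # repeated letters. The arms race never ends.
    text = unicodedata.normalize("NFKC", text)
    text = text.replace("\u200b", "")           # strip zero-width spaces
    text = text.lower().translate(LEET_MAP)     # fold leetspeak digits/symbols
    text = re.sub(r"[^a-z]", "", text)          # drop separators like i.d.i.o.t
    text = re.sub(r"(.)\1+", r"\1", text)       # collapse "iiidiot"
    return any(word in text for word in BLOCKLIST)

print(naive_filter("you 1d10t"))                   # False: trivial leetspeak evades it
print(normalized_filter("you 1d10t"))              # True
print(normalized_filter("y o u   i d i o t"))      # True: separators stripped
print(normalized_filter("nobody would miss you"))  # False: no blocklisted word, still nasty

And none of that normalization gets anywhere near the context problem described in the comment above: the same words can be banter between friends or a credible threat.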
Re:
TURN OFF comments.
So, uh, remove all discussion from the internet except that which has been approved?
Oh, yeah, big improvement.
I've got a better idea...if you don't want to be harassed, TURN OFF your computer and get a flip phone.
Problem solved.
Re: Magic tech is everywhere
http://xkcd.com/1425/
Re: Things that are on topic
Um, if this is true, I'd highly recommend finding a new group of people to hang out with, or a different place to live. Being assaulted "more times than you can count" is not a normal situation.
Just in case, www.thehotline.org is a resource you can use. There is help out there. I don't know if it applies to you, and if I misunderstood, I apologize. If it does, please seek help.
Re: Re: Re: Re: The downside of Asimov's law
I can see why you think you'd be really good at it...
Re: Re: Re: The downside of Asimov's law
At least we know the answer now: "Not if we don't like what you were doing"
Re:
The founding fathers supported anonymous speech because it was a tool that they used to get the revolution going, and in an act rare for politicians, left that door open for others to potentially use against them.
Re:
Explain how saying that you look fat is racist? That's not an attack on you, I just wonder how you made such a leap of logic, especially since your picture suggests that the "chocolate" part wasn't what was offensive.
"It's amazing what ignoring a meanie can do!!!"
You appear to have missed almost all of the points raised.
Re: Re: She's not totally wrong about one thing...
I'd argue impossible, at least without direct, complete co-operation from every copyright holder.
Even if Google managed to get a system in place that accurately identifies every single piece of copyright content regardless of who owns it, how it's used, how long for, etc., a licence agreement they have no possible knowledge of can change the status of the work in an instant. A legal piece of work can be made immediately infringing, and vice versa depending on who posted it, without changing the content itself, and the list of allowed posters will change without warning. Even without including such subjective things as fair use, it's impossible for Google and Google alone to create a system that always works. Which is why it's such a mess - ordinary people and artists get their rights trampled because Prince doesn't want someone to hear a few notes without paying.
Then, people think that this can be applied to something as completely subjective as harassment, and the price is everybody's access to free speech? You really have to be a special kind of moron to believe that's realistically possible, let alone easy.
Re: Re: Re: Who decides what is abusive language?
I'm not sure you'd call that abuse, but I sure as hell would. And why should anyone be obliged to grow a thicker skin and learn to accept being brutalized online?
I can take a bit of criticism, I dish it out often enough, but it crosses the line when people make personal attacks on your character and encourage others to join in to the point where you can't show your face there any more.
Yes, it's subjective, and that's another problem; I've seen trolls come and go on this site and many of them would whine and claim harassment. Protip: if they keep it on the site you visit, go elsewhere. If they come after you to your own e-spaces to hassle you there, it's harassment. If they sign you up for spam and get you locked out of your own email account by repeatedly trying to break in, it's harassment. And when real-life consequences occur as a result of their activities, I'd call that harassment.
It is, as Mike correctly said, a social problem; people want to do it. What we need to do is to encourage an online culture in which this is considered unacceptable conduct. TD does a great job of this, other sites need to follow. It's not an ISP's job to stop harassment but it's reasonable to expect the owners of blogs and websites to moderate effectively and to provide tools such as blocking and muting for users who don't want to hear from certain individuals.
I use them when I have to and haven't had any trouble for years. Mind you, I've learned that I'm not obliged to answer back every time.
Re: Re: Re: Re: Magic tech is everywhere
QED. How are we going to automate harassment handling when real humans can't detect sarcasm? Not to single you out. Yours is simply the second instance I've seen in the comments so far.
This story reminds me of the fool who decided to manage the contents of his freezer with a database. He'd know what's in the freezer just by querying the db! Sure, and if the wife and kids don't update the db every time they take something out or put something in, how long's it going to take before the two are out of sync?
It *might* be possible if the freezer had a UPC scanner built into it, but a regular old freezer? No, don't even try. Some things are just bound to fail for one or possibly many reasons. What happens when frost builds up on that UPC scanner, for instance? We've all seen UPC scanners fail in supermarkets, their ideal environment, many times.
Re: Re: Re: Re: Re: Magic tech is everywhere
It wasn't sarcasm that I failed to detect. I genuinely couldn't figure out what that comment was saying. Still can't.
Re: Re:
And as for those screencaps, well, "Not All Gamergaters are like that" (literally 99% aren't, but you don't care). That's an argument your ilk accept, right?