German Court Says Facebook's Real Names Policy Violates Users' Privacy
from the really? dept
With more and more people attacking online trolls, one common refrain is that we should do away with anonymity online. There's this false belief that forcing everyone to use their "real name" online will somehow stop trolling and create better behavior. Of course, at the very same time, lots of people seem to be blaming online social media platforms for nefarious and trollish activity, including "fake news." And Facebook is a prime target -- which is a bit ironic, given that Facebook already has a "real names" policy. On Facebook you're not allowed to use a pseudonym; you're expected to use your real name. And yet, trolling still takes place. Indeed, as we've written for the better part of a decade, the focus on attacking anonymity online is misplaced. We think that platforms like Facebook and Google that use a real names policy are making a mistake, because anonymous or pseudonymous speech is quite important in enabling people to speak freely on a variety of subjects. Separately, as studies have shown, forcing people to use real names doesn't stop anti-social behavior.
All that is background for an interesting, and possibly surprising, ruling in a local German court, finding that Facebook's real names policy violates local data protection rules. I can't read the original ruling since my understanding of German is quite limited -- but it appears to have found that requiring real names is "a covert way" of obtaining someone's name, which raises privacy and data protection concerns. The case was brought by the VZBV, the Federation of German Consumer Organizations. Facebook says it will appeal the ruling, so it's hardly final.
On the flip side, the VZBV is also appealing the part of the ruling that it lost. It had also claimed that it was misleading for Facebook to say that its service was "free," since users "pay" with their "data." The court didn't find that argument convincing.
It will certainly be interesting to see where the courts come out on this after the appeals process runs its course. As stated above, I think the real names policy is silly and those insisting that it's necessary are confused both about the importance of anonymity and the impact of real names on trollish behavior. However, I also think that should be a choice that Facebook gets to make on its own concerning how it runs its platform. So I'm troubled by the idea that a government can come in and tell a company that it can't require a real name to use its service. If people don't want to supply Facebook with their real name... don't use Facebook.
But, honestly, what's really perplexing is that this is all coming down at the same time that Germany, in particular, has been trying to crack down on any "bad content" appearing on Facebook, demanding that Facebook wave a magic wand and stop all bad behavior from appearing on its site. I'd imagine that's significantly harder if it has to allow people to use the site anonymously. This is not to say that anonymity leads to more "bad" content (see above), but it certainly can make moderating users much more difficult for a platform.
So, if you're Facebook, at this point you have to wonder just what you have to do to keep the service running in Germany without upsetting officials. You can't let anything bad happen on the platform, and you can't get users' names. It increasingly seems that Germany wants Facebook to just magically "only allow good stuff," no matter how impossible that might be.
Filed Under: anonymity, data protection, free speech, germany, privacy, real names
Companies: facebook, vzbv
Reader Comments
The First Word
“Bad for persecuted minority
Bravo! As an atheist promoting secularism in an Islamic shithole, I want to remain anonymous to avoid persecution. Facebook's policies are really bad for secularists. We have to face mass reporting from the muslim cyber army, and Facebook really favors these online mobs over those who fight for freedom of speech!
By their T&Cs, perhaps, but I can vouch for the fact that there are at least a couple of people I know who have been on there for years using a pseudonym that has nothing to do with their real name, recognisably so even to people who don't know them. Others add silly things as their middle name, etc., which could probably violate the policy as well, although they're obviously not doing so to hide their identity in that case. Hell, I still have a few dogs and inanimate objects among my FB "friends" (largely from the days before pages existed) and their accounts don't seem to have been cleaned up either.
It's probably a handy policy for kicking people off if they're found violating any other rules, but to say it doesn't happen, and happen often, is not correct. It might be their policy, but it's not held to with any kind of regularity in my experience.
"If people don't want to supply Facebook with their real name... don't use Facebook."
Well... this. Even if it were actually necessary to use real names to sign up (and how FB can possibly confirm this with any accuracy is a different question), it's not really a problem. If you don't like the policy of a service, don't use that service. If you voluntarily sign up using your real name, don't be surprised if they then know your real name. Simple.
Re:
It only takes one jackass with a grudge to pull the pin on a fake name report and cause great annoyance for someone.
Uneven enforcement is not the solution to bad policy.
I guess that wisdom should be beaten into every social media company manager.
Re: Re:
I think the system is fine for any realistic scenario. They state they need real names, but don't enforce it until there's a real reported problem. At that point, they can use it as an extra rule to enforce against people engaging in harassment, even if what those people do directly isn't specifically against the rules.
It would be impossible for them to check identities before people sign up, so what's your solution? If the above is bad policy, what would good policy look like?
Re: Re:
The German government and the VZBV are two very distinct entities, with different motives. It's easy to conflate all Germans as "Germany", just like it's easy to say "all Americans are gun nuts".
No magic wand necessary
No magic wand is required: just minimally competent system and network administration skills. Maybe if Facebook's technical staff weren't ignorant newbies, maybe if they tried -- and I know this is a shocking concept -- to learn from the experience of others, maybe if they actually invested some effort in running their own platform, then they could take a big bite out of this problem. (And of course, as everyone equipped with sufficient experience knows, reducing the scope of the problem isn't a solution, but it does make the remainder more tractable and thus amenable to techniques that might not scale to the size of the original problem. You don't have to solve it all at once.)
Don't tell me it can't be done. Of COURSE it can be done, it's really not all that hard. The problem isn't the feasibility, it's the lack of commitment.
The same situation exists at Twitter and other "social media" companies that are irresponsibly run. These people are making rudimentary mistakes that we *knew* were mistakes decades ago, mistakes that we made and wrote about so that others wouldn't have to repeat them. But in their ignorance and their arrogance, they're insisting on doing so anyway.
And now governments are starting to notice the fallout from this, and now they're responding the way they usually do: with regulation. This *could* have been largely avoided had these companies architected, designed, built, and operated themselves using best practices -- but they didn't. And now they're reaping what they've sown.
Re: No magic wand necessary
I'm particularly interested in how "minimally competent system and network administration skills" translates into working out which content is "bad" and which is "good". Especially since most of the things complained about are completely subjective and interpreted differently between different human beings.
Personally, I've worked in the industry for 20 years and I've never seen a network protocol that has such things included. What have I been missing?
Re: Re: No magic wand necessary
Re: Re: Re: No magic wand necessary
Re: Re: No magic wand necessary
Well-run operations have read RFC 2142 and implemented it, because they know that their peers (and others) will expect to use it to communicate with them. They've put in place the appropriate email plumbing to see that incoming traffic is sorted/prioritized/routed as necessary. That might mean forwarding it to a person, or to a group, or to a ticketing system -- those are internal choices that are driven by structure and size; RFC 2142 doesn't specify them. But whatever is behind those addresses, it should ensure that messages end up in front of clueful eyeballs that are in a position to read them, understand them, and do something about them.
This is something that everyone who's even *considering* running an Internet operation should know and have in place before they launch. And there are quite a few well-run operations (of all descriptions and sizes) who have done exactly that. Really good operations save all the traffic and do post-mortem analysis on resolved problems in order to identify persistent issues, and then they task someone with figuring out why that's happening and what can be done about it. The idea, of course, being to forestall the need for future reports by identifying the root cause and fixing it, thus reducing the need to keep dealing with the same thing over and over again.
This isn't magic and it's not hard: it's Internet operations 101. And there exist all kinds of techniques for making it scale, for filtering traffic by priority, for correlating it with internal problem reporting systems, and so on. (I'll give you one example of those: it's not hard to construct a procmail filter that keys off the addresses of everyone who's posted to NANOG in the past five years. If you get a problem report from one of those people directed to your postmaster or hostmaster or abuse or other address, there is a high probability that it's accurate and very much worth paying prompt attention to. Same for dnsops. Same for outages. And so on. Hey, if one of the more senior people out there is doing you the favor of doing your job for you, the least you can do is pay attention.)
It's not an accident that the operations that get this and practice it tend to be operations that don't exhibit chronic, systemic problems. And conversely, it's not an accident that organizations that neglect this tend to be ones that are cesspools of abuse and attacks.
Like I said at the outset: this is one example. But it happens to be a well-known best practice that everyone should be using, regardless of sector or scale. Any operation that's not up to this is probably not good enough to be part of the Internet -- because this is *easy* stuff, like I said, Internet operations 101. If they can't handle even this entry-level task, then they're going to flail miserably when faced with some of the tougher things.
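To make that procmail idea concrete, here's a rough sketch of the same priority-filtering approach (in Python rather than procmail, and purely illustrative -- the role addresses and the "known_senders.txt" file are assumptions for the example, not anyone's real setup): mail arriving at RFC 2142 role addresses gets flagged for priority handling when the sender appears on a locally maintained list of known operators, e.g. people who have posted to an operations list like NANOG.

# Sketch only: flag mail to RFC 2142 role addresses when the sender is a known operator.
from email import message_from_string
from email.utils import parseaddr
import sys

# RFC 2142 role addresses we care about (example.com is a placeholder domain).
ROLE_ADDRESSES = {"abuse@example.com", "postmaster@example.com", "hostmaster@example.com"}

def load_known_senders(path="known_senders.txt"):
    # Addresses of known operators, e.g. harvested from operations-list archives.
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def classify(raw_message, known_senders):
    msg = message_from_string(raw_message)
    sender = parseaddr(msg.get("From", ""))[1].lower()
    recipient = parseaddr(msg.get("To", ""))[1].lower()
    if recipient in ROLE_ADDRESSES and sender in known_senders:
        return "priority"   # route to the head of the ops queue
    return "normal"         # everything else goes through ordinary triage

if __name__ == "__main__":
    print(classify(sys.stdin.read(), load_known_senders()))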
Re: Re: Re: No magic wand necessary
Re: Re: Re: No magic wand necessary
Those fail to be useful when an operation scales up to the size of YouTube, and the mailboxes are flooded by complaints from users.
You are failing to grasp the scaling issue, which also impacts things like using email for users to notify the site of problems. Note, that is not a spam problem, but rather that with a large user base, enough of your users will find things on the site that they do not like, and will flood any mechanism that relies on manual filtering.
Re: Re: Re: Re: No magic wand necessary
1. Yes, this works at YouTube scale *if you do it right*. If you do it foolishly, then of course it will fail miserably. Y'know, YouTube/Google is allegedly staffed by a lot of really smart people and they're really well-resourced; they should be able to eat a problem like this for breakfast.
2. If you're getting too many problem reports, then that's a pretty good indicator you're doing it wrong...whatever "it" is. The best way to reduce that volume is to figure out what you've botched and fix it. Repeat as necessary and watch the volume shrink. It's not hard.
Re: Re: Re: Re: Re: No magic wand necessary
Re: Re: Re: Re: Re: Re: No magic wand necessary
So I'm not saying "set up a really good, scalable RFC 2142 compliance mechanism and stop". I'm saying *start* with that, then do some of the other dozens of things that you should be doing.
Re: Re: Re: Re: Re: Re: Re: No magic wand necessary
But, an example of what? You're really not being clear either on what you think they're not doing right or on how they're doing it wrong. Which parts of the specs are they not implementing? Why do they need to be implemented in these cases? What are the specific violations? Somewhere in your rambling, you should at least be stating why and how you think they've failed.
Then, you can continue the thought and explain why being able to implement these things means they should be able to filter out "bad" from "good" content when most human beings seem incapable of doing so. That's the "magic" -- not the filtering itself, but determining which content needs to be filtered. No RFC will tell you that.
You've done nothing so far but ramble on for paragraphs about things that are irrelevant to the statement you found so objectionable. Perhaps explain why you're right rather than insisting you automatically know better than everyone else and referencing random specifications that might not be relevant to anything being discussed.
Re: Re: Re: Re: Re: Re: Re: Re: No magic wand necessary
What I'm trying to explain is something that everyone who's been around long enough already knows: sloppily-run environments become "abuse magnets", meaning that abusers and attackers eventually figure this out and use the ineptness of the operation for their own purposes. As in "200M bots on Facebook", which is their announced number and therefore a serious underestimate.
Think about how shitty an operation you have to be running to have an infestation that huge. And think about what kind of shitty people would *let that situation persist*. Anybody with an ounce of professionalism or responsibility or just self-respect would shut it down immediately and keep it that way while they figured out WTF went wrong, fixed it, and took steps to keep it from happening again.
The same thing is true at Twitter and in AWS and at YouTube and elsewhere. They're all very poorly run, and so it's not at all surprising that they have major issues, e.g., everyone who's paying attention to their own logs knows that AWS is a massive source of brute-force attacks.
The partial (note: PARTIAL) fix to this to not run the operation so damn sloppily. It's not a panacea, and I've never said it was. It's necessary, not sufficient. And by "necessary", I mean that it enables folks to have a fighting chance of dealing with this nonsense. Without it? Well, they're pretty much screwed and so are the rest of us who have to deal with the fallout.
Whether it's RFC 2142 or BCP 38 or not making the mistakes outlined in RFC 1912 or using the DROP list or any of the other myriad things that are part of Internet operations 101 varies by the operation. But it's not an accident that the operations which have the worst problems are the same ones that've failed to do this stuff. Conversely, it's not an accident that some of the operations we never talk about -- because we don't need to -- are the ones who've done all of this stuff and more. They've pre-empted most of their problems and made the rest easier to solve.
BTW: I don't want to hear any whining about "scale". Of course these things scale, and it's really not even that hard. This is the *easy* part of professional system/network admin, and anybody who can't handle these basics is going to be overwhelmed when the hard stuff hits their desk.
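For the DROP list piece specifically, here's a minimal sketch (assuming you've already fetched Spamhaus's published list into a local file called "drop.txt"; the filename and the test address are just examples) of checking a connecting address against the listed netblocks:

import ipaddress

def load_drop_list(path="drop.txt"):
    # Each non-comment line in the list starts with a CIDR block, e.g. "192.0.2.0/24 ; comment".
    networks = []
    with open(path) as f:
        for line in f:
            cidr = line.split(";")[0].strip()
            if cidr:
                networks.append(ipaddress.ip_network(cidr))
    return networks

def is_listed(addr, networks):
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in networks)

if __name__ == "__main__":
    drop = load_drop_list()
    # 198.51.100.7 is a TEST-NET address, used purely as an example input.
    print(is_listed("198.51.100.7", drop))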
Re: Re: Re: Re: Re: No magic wand necessary
And what happens when you aren't doing it wrong and people just like to complain because you aren't doing it they want you to do? Because that is the internet. You can do everything right and there will still be a large group of people out there who disagree with you and will complain about it.
Re: Re: Re: Re: Re: Re: No magic wand necessary
Ugh, my grammar, I am ashamed.
Should be "because you aren't doing it the way they want you to do it".
Re: Re: Re: Re: Re: Re: No magic wand necessary
Re: Re: Re: Re: Re: Re: No magic wand necessary
Let's suppose that somewhere out there are Alice and Bob. Alice doesn't send problem reports to hostmaster often, but when she does, they're accurate, timely, and complete (that is: they lay out the problem explicitly so that you can see what's wrong).
Then there's Bob. Bob is a loon. Bob sends problem reports to webmaster every other day saying that the HTML markup is controlled by aliens and they are eating his brain. (I suppose this also lays out the problem explicitly, but in a rather different way.)
Clearly, you want a mechanism that puts Alice's reports at the top of the queue and Bob's at the bottom. Now, how you build that mechanism depends on how many Alices and Bobs you've got, because it's got to scale. BUT, and here's the key, every time you deal with one of these messages, and either solve the problem that Alice told you about, or realize that Bob is still a loon, you incorporate that knowledge into the problem reporting system. (BTW, you do this whether the system takes its input from email or the web or something else, or several of these.)
Over time, this yields a system that's quite efficient at prioritizing the things that need to be. Of course you can also augment it with a priori knowledge, e.g., "we work closely with Foo Corp and Charlie is their senior network engineer, so flag anything from Charlie". Or you can use heuristics - which I won't get into here, because it's long. Whatever you use, the point is that you'll end up building something that isn't perfect *but doesn't have to be*.
To put that last part another way: this gets easier, NOT harder, at scale. It gets easier because you can make a lot of mistakes and still end up with the most important/timely/accurate problem reports at the head of the queue. Of course, you should still fix those mistakes as you find them, but that can be backfilled.
BTW, one place that has deployed this started with about 400 a priori rules and now has about 20,000. Reports are gatewayed into a ticketing system that also gets input from web forms, monitoring systems, etc. Yeah, every now and then, it screws up and something important doesn't get marked as important...but when that happens, it's the last time it happens. For the most part, it does a really good job of triage and as a direct result of that, traffic volume has been declining every year since it was deployed.
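Here's a minimal sketch of that kind of reporter-reputation triage (illustrative only; the addresses and the simple +/-1 scoring are assumptions, not a description of any real ticketing system): the outcome of each resolved report feeds back into the sender's score, and new reports are ordered by that score plus any a priori rules.

import heapq
from collections import defaultdict

class TriageQueue:
    def __init__(self, a_priori=None):
        # Unknown senders start at 0; a priori rules can seed higher scores.
        self.reputation = defaultdict(float)
        self.reputation.update(a_priori or {})
        self._heap = []
        self._counter = 0   # tie-breaker so equal scores stay in arrival order

    def resolve(self, sender, was_useful):
        # Feedback loop: accurate reports raise the score, nonsense lowers it.
        self.reputation[sender] += 1.0 if was_useful else -1.0

    def submit(self, sender, report):
        # heapq is a min-heap, so negate the score: higher reputation pops first.
        heapq.heappush(self._heap, (-self.reputation[sender], self._counter, sender, report))
        self._counter += 1

    def next_report(self):
        _, _, sender, report = heapq.heappop(self._heap)
        return sender, report

queue = TriageQueue(a_priori={"charlie@foocorp.example": 10.0})
queue.resolve("alice@example.net", was_useful=True)    # a past, accurate report
queue.resolve("bob@example.org", was_useful=False)     # a past alien-markup report
queue.submit("bob@example.org", "The HTML is controlled by aliens")
queue.submit("alice@example.net", "Your MX for example.org is unreachable")
print(queue.next_report())   # Alice's report comes out first, despite arriving second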
Re: Re: Re: Re: Re: Re: Re: No magic wand necessary
"Reports are gatewayed into a ticketing system that also gets input from web forms, monitoring systems, etc"
This is a world away from what's being discussed.
Re: Re: Re: Re: Re: Re: Re: Re: No magic wand necessary
If 'Rich Kulawiec' is correct then "the way Facebook operate(s)" is, in part, due to the poor and unprofessional build up of their service.
Facebook is big, powerful, famous, rich and therefore "successful". Unfortunately, the popular modern use of the "successful" tag often carries only the illusion of merit. Mr. Kulawiec seems to be suggesting that the building of Facebook was reckless and/or careless. Too many of us religiously exalt 'big things' with little concern for how or why. The ends justify the means, right?
...Anyway. Must we reject ideas 100% or embrace them 100%? Can't we wrestle with the pieces?
Re: Re: Re: Re: Re: Re: Re: Re: Re: No magic wand necessary
Wrestle with the larger problem in as many small pieces as you like. The problem with "Rich's" argument is that it is not based in reality. The real issue has nothing to do with individual problem reports, but with detecting objectionable content before the wrong person does and you get sued for it. That's a vastly different problem statement than "handle a volume of complaints".
"Rich" also assumes, quite possibly wrongly, that Facebook does not already have a good individual complaint handling system. That system has done nothing to save them from becoming a target of many governments around the world.
It seems clear that "Rich" works for small to mid-sized businesses with a dramatically smaller footprint than Facebook. The business fundamentally changes with global exposure/popularity, as must the infrastructure that supports it. The economics are completely different. The same rules cannot apply, because technology isn't powerful enough to scale beyond a certain threshold and the economics prevent even attempting to scale beyond it. It's an asymptotic curve. New approaches have to be developed, which may include letting some problems work themselves out.
Basically, this whole thread is little more than hot air.
Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: No magic wand necessary
For me, dismissing whole comments has a "high bar" (which the Internet routinely meets).
Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: No magic wand necessary
Sorry, that was me.
Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: No magic wand necessary
Re: Re: Re: Re: Re: Re: Re: Re: Re: No magic wand necessary
He's not correct, on several demonstrable, fundamental levels; plus, the main thing he objected to is outside the scope of what he was talking about anyway.
"Must we reject ideas 100% or embrace them 100%? Can't we wrestle with the pieces?"
Feel free to point out where I said either of those things. My point is, to use an analogy, when other people are complaining about the stocking quality in the supermarket, he's complaining about the way the wiring was installed. His response to people saying it's impossible to stock unlimited amounts of fruit is to talk about how the lighting was fitted, and to claim that if they were competent they would have worked out their fruit situation. The two have nothing to do with each other, even if he has a point with what he's bringing up, which I'm still not convinced he has.
Re: Re: Re: No magic wand necessary
That RFC, specifically for arranging email addresses, is great. I'm not sure of Facebook's email system layout, but the company doesn't really use email for most communications, so I'm not sure how much of it actually applies. But, what does that have to do with filtering the content on Facebook into "bad" and "good", which is the subject at hand?
Stop typing wasteful paragraphs. Explain yourself.
Re: Re: Re: No magic wand necessary
What do you do when you have no idea who is behind an IP address, all the requests are coming in on either port 80 or 443, and all those requests are simply users uploading text? The problem is not that the requests are coming in on those ports or from specific IP addresses; the problem is the text that those requests post to user profile statuses. The question is, how do you filter that?
No one is talking about the network and sysadmin side of this (except for people who don't understand how this works); what they are talking about is preventing people from saying bad things online. To my knowledge there is no way to filter that. Sure, you can implement some kind of profanity filter, but those are notoriously inaccurate, easily bypassed, and regularly flag non-profanity. Plus, how do you distinguish between someone saying "I'm going to kill you" as a real threat of violence or two friends bantering back and forth about their next Call of Duty match?
You may think this is merely a simple sysadmin issue, but it really isn't. Take it from us sysadmins who actually do this for a living.
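To illustrate that last point with a toy example (the phrase list and sample posts are made up, not any real moderation system): a naive keyword filter returns the same verdict for a genuine threat and for gaming banter, because it has no notion of context.

# Both posts contain the same phrase; only a human (or far more context)
# can tell the threat from the trash talk.
THREAT_PHRASES = {"i'm going to kill you", "i will kill you"}

def naive_flag(post):
    text = post.lower()
    return any(phrase in text for phrase in THREAT_PHRASES)

real_threat = "I'm going to kill you if you show up at my house again."
banter = "I'm going to kill you in our Call of Duty match tonight :)"

print(naive_flag(real_threat))   # True
print(naive_flag(banter))        # True -- same result, very different intent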
What makes FB's policy even sillier is that, in the UK, there's not really the same concept of 'real name'. There's a canonical name for official paperwork, and it can be a ball-ache to get that changed, but there's no idea that other names are inherently false.
That's even referenced in FB's own T&Cs in the UK, whereby the 'real name' provision is along the lines of 'must be a name people use for you' rather than any reference to birth certificates and whatnot. Which then clashes with the documentation FB demands from people to prove it's their name. An acquaintance pointed that mismatch out to them in a name dispute, and got a 'yeah, that doesn't make sense, your name is fine, carry on' out of them.
Re: Requiring a "real" name
Related quote from Mike's post:
Facebook interprets "real" as "appearing on a government-issued identity document", and they sometimes ask for scans of these documents when they think a name's not "real". So it's not really correct to pretend this is a private thing with no government ties. Why should they be able to process government documents without being subject to the related rules?
If their policy required "the name people generally use to refer to you", that might be different. It would make more sense for UK users, and for everyone who uses a nickname—some people never use their "real" name except on government/bank forms (my employer doesn't know mine, for example).
"should be a choice that Facebook gets to make on its own concerning how it runs its platform" -- No, corporatist: it's THE PUBLIC'S PLATFORM. The public allows corporations to exist on condition serve OUR purposes, not the 1%.
Even Germans aren't as corporatist as you. You're always pushing that corporations, mere legal fictions that shield owners from personal liability while letting them raise money in public markets, have an alleged Right to do as they please and control "natural" persons.
Corporations uber alles is the theme of this piece. You simply sneer at "natural" persons not wanting Facebook to sell on their info.
Whenever you can wedge in pro-corporatism, you do. It's pathological. -- I point it up and mock it as often as I can by mentioning dangers from Google on the least excuse, but with YOU it's the primary goal: even if it doesn't arise naturally in the topic, you push out the lawyer's fiction that corporations have intrinsic rights by which they can control natural persons.
Re: "should be a choice that Facebook gets to make on its own concerning how it runs its platform" -- No, corporatist: it's THE PUBLIC'S PLATFORM. The public allows corporations to exist on condition serve OUR purposes, not the 1%.
Indeed, you should take your medicines properly. Maybe then you'll finally leave this site you hate so much but can't seem to distance yourself from.
Anyway.
"have an alleged Right to do as please and control "natural" persons."
First, they cannot control 'natural persons', whatever you mean by that. Anybody is free NOT TO USE their services. Second, yes, they can do as they please within the law in the US, where most of your bs is directed. In this specific article he is criticizing the interference from the government in an issue that should be left to the companies, even though he disagrees with Facebook's policies regarding real names. This ruling may be backed up by a law, but said law is misguided in its essence.
It's amusing to see you accusing Mike of forcing some topic into some place it wouldn't rise naturally. You are an ace at doing this. Truly a psychology case study you are.
(attempted) clarification by a german speaker
I'll try to summarize the key points of the ruling as explained on the VZBV site. They also have a scan of the ruling, but the quality is abysmal.
1.) You are only allowed to use a user's personal information with the informed consent of that user. Facebook's default privacy settings allow the use of personal data. Those settings can be changed, but they are opt-out instead of opt-in, and the existence of the privacy center isn't explicitly made clear to the user. As a result the consent isn't informed (and even the consent itself is vague at best) and thus invalid.
2.) Facebook put some premade declarations of consent into their TOS that allow them to use names and profile images for commercial purposes. Turns out they are invalid: putting stuff like this into your TOS does not equal consent.
3.) The real name thing. Apparently this is invalid for two reasons. The first is that agreeing to use your real name also somehow implies consent to the use of that information. The second is that it's simply against a law that states that online services need to provide users with a way to stay anonymous.
This article operates from a child's understanding of consent and coercion.
If I say "My great grandfather had the biggest club, so he got all the farmland, so suck my dick or starve", that's not a choice and your doing it doesn't imply any real consent. And if everybody you need to interact with has been manipulated into using my "platform", or even just chosen to use my platform, then saying "Give me your real name or go be isolated" isn't a choice either.
And let's talk about this "come in and tell a company" business.
Facebook. Is. A. Creation. Of. Government.
Governments aren't "coming in" to Facebook's affairs. People "came in" and asked governments to create the company in the first place.
A corporation doesn't exist at all except as a matter of law. It's not a person. It has no natural rights (and no mind, so it couldn't exercise natural rights if it had them). By chartering such an entity, the government actually RESTRICTS the rights of natural persons, most famously the right to individually sue people who act in concert to do them damage.
Issuing charters like that has side effects. No actual person could operate at that scale without some similar kind of charter. The existence of Facebook's "platform" requires the government to recognize fictional entities. And scale is a big part of the reason there's a problem.
There is absolutely no reason governments shouldn't put whatever restrictions they think appropriate on gifts like the "right" for a total fiction to be treated as a legal entity, or the "right" for its owners and employees to avoid accountability for their actions.
It's not even like Facebook is a vehicle for its owners to exercise their rights to free speech. Facebook is a vehicle for selling advertising, period.
Don't pretend that massive institutions are beings with rights. If you want a "free" system, then decentralize the technology and eliminate these fiefdoms.
Re:
I should have stopped here but I'm glad I continued. You should be a comedian!
Re: Re:
Re:
Bad for persecuted minority
Facebook's policies are really bad for secularists. We have to face mass reporting from the muslim cyber army, and Facebook really favors these online mobs over those who fight for freedom of speech!
and then there's official documents
driver's licenses
marriage certificates
court documents
school records
This is like the right to be forgotten on steroids.....
too glib
Sadly, a lot of people are pressured into FB as a way of keeping in touch with family, friends etc.
I don't do FB, but as a consequence there are several previously good friends (in different areas of the country) with whom I now rarely communicate, as most of their "social chat" is via FB; I only chat to them via phone or email.
So, many people are not so much freely consenting as consenting due to "emotional blackmail", because they want to keep in contact with friends who are using FB as their primary means of social chat.
Re: too glib
Re: Re: too glib
Re: Re: Re: too glib
Facebook to communicate? There are plenty of other ways, even by the AC's own admission; it's just that some people prefer FB for everything. Facebook aren't blocking other methods or reducing their effectiveness, they are just one of many methods.
It's more analogous to texting people. Some people don't want a mobile or prefer to phone people rather than texting. They might feel pressured into getting a phone and texting because everyone they know does it and they feel left out if they don't. Understandable, but you shouldn't blame the phone manufacturer because you felt pressured into buying one so you could text.
Re: Re: Re: Re: too glib
Sometimes your choice of the means of communication is determined by those you wish, or need, to communicate with.
Re: Re: Re: Re: Re: too glib
That's not a problem, any more than it is for your mobile provider to insist on a line rental even though you only want to use it because your friends are. It's simply not a monopoly or unfair position, no matter how many of your friends accepted the terms you don't wish to accept.
Re: too glib
Too glib by far.
Re: Re: too glib
Of course. There's not really any way for them not to.
Even if you're not on Facebook, there's still going to be information about you on Facebook. Somebody's mentioned you. People who know you have searched for you. People have probably tagged you in photos.
And that's before we even get into stuff like Facebook scripts on third-party pages that are tracking you.
Re: Re: Re: too glib
Sure there is: they could just *not do it!* Collecting and organizing data isn't something that happens by default, let alone something that one has to put in effort to avoid doing; it's something that one has to put in effort to actually do. And Facebook is doing it, when they have no right to.
Re: Re: Re: Re: too glib
How?
How are they going to prevent somebody from tagging me in a photo? Or mentioning me in a public conversation? Or sending me an invite?
It...is when the function of your software is the collection and organization of data.
Re: Re: Re: Re: Re: too glib
I realize that this methodology would be antithetical to their business model. It does point out how their business model needs adjusting. That is before some class action lawsuit about collecting information about non-customers forces them to. You know, a proactive PR position such as 'we are not actually evil'.
Who is that?
This caused a certain amount of consternation as I matured. There were endless issues of school districts and teachers misspelling my name, calling me things I was not. Then there was the way other students treated me. Name calling was creative, to say the least. BTW, this had absolutely no impact on my personality; I am quite normal, depending on one's definition of normal.
Despite the above, things went well, until I tried to apply for a driver's license. The DMV absolutely refused to put that name on a government issued document. I had to go into court and get a legal name change; I chose 'Appropriate Misbehavior'. When that didn't work, I went to the phone book, flipped open pages at random, and picked a first name and then a second name. Then back to court. The DMV finally issued my driver's license, but the fact of the matter is, it isn't my real name.
Now, about facial recognition...there is this plastic surgeon...
Free users pay with data
- Forced to watch a 30 second commercial on YouTube before the video starts.
- Forced to watch an ad to continue playing a free game on your phone.
- Going to a Windows download site to get a driver, but having to guess which of the 5 download buttons will download the driver and which will download a toolbar or malware.
- Having to keep a sharp eye on software installers (such as Flash updates) that want to install toolbars or change your home page.
- Not to mention the usually irrelevant ads on places like Facebook and Pinterest.
All of these are considered annoyances and distractions, but we put up with them to get the free item or service.
Re: Free users pay with data
The malware/crapware risk is a relatively new one, but I dare say that if you asked the average German to choose between Facebook knowing some otherwise publicly available information about them and having that stuff on their hard drive doing who knows what, they would prefer the former.
Re: Free users pay with data
When courts aren't willing to make companies tell you the final price of an item or service (including all taxes, fees, and bullshit), they're certainly not going to put a stop to this usage of "free". Hell, the FCC recently reversed its policy that required ISPs to tell customers the price of their service -- it's too burdensome for ISPs, you see.
Re:
Once again -- if only you people were as interested in discussing these situations as you are in whining about people writing about them.
Re: Re:
“Covert”?
It's an everything ban