Interoperability And Privacy: Squaring The Circle
from the adversarial-interoperability dept
Not long ago, the Electronic Frontier Foundation published a comprehensive look at the ways that Facebook could and should open up its data so that users could control their experience on the service, making it easier for competing services to thrive.
In the time since, Facebook has continued to be rocked by scandals: privacy breaches, livestreamed terrorist attacks, harassment, and more. At the same time, competition regulators, scholars and technologists have stepped up calls for Facebook to create and/or adopt interoperability standards to open up its messenger products (and others) to competitors.
To make matters more complex, there is an increasing appetite in both the USA and Europe to hold Facebook and other online services directly accountable for the actions of their users: both in terms of what those users make available (copyright infringement, political extremism, incitements to violence, etc.) and in how they treat each other (harassment, stalking, etc.).
Fool me twice…
Facebook execs have complained that these goals are in conflict: they say that for the company to detect and block undesirable user behaviors, and to interdict future Cambridge Analytica-style data-hijacking, it needs to be able to observe and analyze everything every user does, both to train automated filters and to allow it to block abusers. If the company lets third parties both inject data into its network and pull data out of it—that is, if it allows interoperability—its ability to monitor and control its users' bad behavior will be weakened.
There is a good deal of truth to this, but buried in that truth is a critical (and highly debatable) assumption: that Facebook has the will and the ability to stop 2.3 billion people from abusing its systems and each other. Only if you believe that assumption does weakening Facebook's control over those 2.3 billion people look like a loss.
But if there's one thing we've learned from more than a decade of Facebook scandals, it's that there's little reason to believe that Facebook possesses the requisite will and capabilities. Indeed, it may be that there is no automated system or system of human judgments that could serve as a moderator and arbiter of the daily lives of billions of people. Given Facebook's ambition to put more and more of our daily lives behind its walled garden, it's hard to see why we would ever trust Facebook to be the one to fix all that's wrong with Facebook.
After all, Facebook's moderation efforts to date have been a mess of backfiring, overblocking, and self-censorship, a "solution" that no one is happy with.
Which is why interoperability is an important piece of the puzzle when it comes to addressing the very real harms of market concentration in the tech sector, including Facebook's dominance over social media. Facebook users are eager for alternatives to the service, but are held back by the fact that the people they want to talk with are all locked within the company's walled garden.
Interoperability presents a means for people to remain partially on Facebook while using third-party tools that are designed to respond to their idiosyncratic needs. While it seems likely that no one is able to build a single system that protects 2.3 billion users, it's certainly possible to build a service whose social norms and technological rules are suited to smaller groups. Facebook can't figure out how to serve every individual's and community's needs -- but those individuals and communities might be able to do so for themselves, especially if they get to choose which toolsmith's tools they use to mediate their Facebook experience.
Standards-washing: the lesson of Bush v Gore
But not all interoperability is created equal. Companies have historically shown themselves to be more than capable of subverting mandates to adhere to standards and allow for interconnection.
A good historic example of this is the drive to standardize voting machines in the wake of the Supreme Court's decision in Bush v Gore. Ambiguous results from voting machines produced an election whose outcome had to be settled by the Supreme Court, and Congress responded by passing the Help America Vote Act, which mandated standards for voting machines.
The ensuing process included a top-tier standards development organization to oversee the work: the Institute of Electrical and Electronics Engineers (IEEE), which set about creating a standard for voting machines. But rather than creating a "performance standard" describing how a voting machine should process ballots, the industry sneakily tried to get the IEEE to create a "design standard" that largely described the machines they'd already sold to local election officials.
In other words, rather than using standards to describe how a good voting machine should work, the industry pushed a standard that described how their existing, flawed machines did work with some small changes in configurations. Had they succeeded, they could have simply slapped a "complies with IEEE standard" label on everything they were already selling and declared themselves to have fixed the problem... without making the serious changes needed to fix their systems, including requiring a voter-verified paper ballot.
Big Tech is even more concentrated than the voting machine industry is today, and far more concentrated than that industry was in 2003 (most industries are more concentrated today than they were in 2003). Legislatures, courts or regulators that seek to define "interoperability" should be aware of the real risk of the definition being hijacked by the dominant players (who are already very skilled at subverting standardization processes). Any interoperability standard developed without recognizing Facebook's current power and interests is at risk of standardizing the parts of Facebook's business that it does not view as competitive risks, while leaving the company's core business (and its bad business practices) untouched.
Even if we do manage to impose interoperability on Facebook in ways that allow for meaningful competition, in the absence of robust anti-monopoly rules, the ecosystem that grows up around that new standard is likely to view everything that's not a standardized interoperable component as a competitive advantage, something no competitor should be allowed to make incursions upon, on pain of a lawsuit for violating terms of service, infringing a patent, or reverse-engineering a copyright lock, or on even more nebulous theories like "tortious interference with contract."
Everything not forbidden is mandatory
In other words, the risk of trusting competition to an interoperability mandate is that it will create a new ecosystem where everything that's not forbidden is mandatory. That would freeze the current situation in place: Facebook and the other giants would continue to dominate, new entrants would face onerous compliance burdens that make it harder to start a new service, and those new services would be limited to interoperating in ways that are carefully designed to prevent any kind of competitive challenge.
Standards should be the floor on interoperability, but adversarial interoperability should be the ceiling. Adversarial interoperability takes place when a new company designs a product or service that works with another company's existing products or services, without seeking permission to do so.
Facebook is a notorious opponent of adversarial interoperability. In 2008, Facebook successfully wielded a radical legal theory that allowed it to shut down Power Ventures, a competitor that allowed Facebook's users to use multiple social networks from a single interface. Facebook argued that because Power Ventures kept letting users log in and view Facebook through a different interface after receiving a cease-and-desist letter telling it to stop, the company had broken a Reagan-era anti-hacking law called the Computer Fraud and Abuse Act (CFAA). In other words, simply continuing after Facebook said "stop" was enough to make Power Ventures' conduct illegal.
Adversarial interoperability flips the script
Clearing this legal thicket would go a long way toward allowing online communities to self-govern by federating their discussions with Facebook without relying on Facebook's privacy tools and practices. Software vendors could create tools that let community members communicate in private, using encrypted messages that are unintelligible to Facebook's data-mining tools, while still letting potential members discover and join the group through Facebook.
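To make that concrete, here is a minimal sketch, in Python with the widely used cryptography library, of what such a tool might do under the hood: the group shares a key outside Facebook, posts are encrypted before they are handed to the service, and only ciphertext ever reaches Facebook's servers. The function names and the workflow are illustrative assumptions, not a description of any real product.

```python
# Hypothetical sketch: a client or browser extension encrypts group posts
# before they ever reach Facebook, so the service only stores ciphertext.
# Assumes the group distributes a shared key outside Facebook.
from cryptography.fernet import Fernet

def make_group_key() -> bytes:
    """Generate a symmetric key the group shares out-of-band."""
    return Fernet.generate_key()

def encrypt_post(group_key: bytes, plaintext: str) -> str:
    """Return an opaque token that is safe to paste into a Facebook post."""
    return Fernet(group_key).encrypt(plaintext.encode()).decode()

def decrypt_post(group_key: bytes, token: str) -> str:
    """Recover the original message on a group member's side."""
    return Fernet(group_key).decrypt(token.encode()).decode()

key = make_group_key()
ciphertext = encrypt_post(key, "Meeting moved to 7pm")
print(ciphertext)                     # what Facebook would see and store
print(decrypt_post(key, ciphertext))  # what group members would read
```

A real tool would also have to handle key distribution and rotation, but the basic point stands: Facebook could host and relay the conversation without being able to read it.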
This could allow new entrants to flip the script on Facebook's "network effects" advantage. Today, Facebook is viewed as holding all the cards because it has corralled everyone who might join a new service within its walled garden. But legal reforms to safeguard the right to adversarial interoperability would turn this on its head: Facebook would become the place that had conveniently organized all the people you might tempt to leave Facebook, and it would even supply you with the tools you need to reach those people.
Revenge of Carterfone
There is good historic precedent for using a mix of interoperability mandates and a legal right to interoperate beyond those mandates to reduce monopoly power. The FCC has imposed a series of interoperability obligations on incumbent phone companies: for example, the rules that allow phone subscribers to choose their own long-distance carriers.
At the same time, federal agencies and courts have also stripped away many of the legal tools that phone companies once used to punish third parties who plugged gear into their networks.
The incumbent telecom companies historically argued that they couldn't maintain a reliable phone network if they didn't get to specify which devices were connected to it, a position that also allowed the companies to extract rental payments for home phones for decades, selling you the same phone dozens or even hundreds of times over.
When agencies and courts cleared the legal thicket around adversarial interoperability in the phone network, it did not mean that the phone companies had to help new entrants connect stuff to their wires: manufacturers of modems, answering machines, and switchboards sometimes had to contend with technical changes in the Bell system that broke their products. Sometimes, this was an accident of some unrelated technical administration of the system; sometimes it seemed like a deliberate bid to harm a competitor. Often, it was ambiguous.
Monopolists don't have a monopoly on talent
But it turns out that you don't need the phone company's cooperation to design a device that works with its system. Careful reverse-engineering and diligent product updates meant that even devices that the phone companies hated -- devices that eroded their most profitable markets -- had long and profitable runs in the market, with devoted customers.
Those customers are key to the success of adversarial interoperators. Remember that the audience for a legitimate adversarial interoperability product is made up of the customers of the existing service it connects to. Anything that the Bell system did to block third-party phone devices ultimately punished the customers who bought those devices, creating ill will.
And when a critical mass of an incumbent giant's customer base depends on—and enjoys—a competitor's product, even the most jealous and uncooperative giants are often convinced to change tactics and support the businesses they've been trying to destroy. In a competitive market (which adversarial interoperability can help to bring into existence), even very large companies can't afford to enrage their customers.
Is Facebook better than everyone else?
Facebook is one of the largest companies in the world. Many of the world's most talented engineers and security experts already work there, and many others aspire to do so. Given that, is it realistic to think that a would-be adversarial interoperator could design a service that plugs into Facebook without Facebook's permission?
Ultimately, this is not a question with an empirical answer. It's true that few have tried to pull this off since Power Ventures was destroyed by Facebook litigation, but it's not clear whether the competitive vacuum exists because potential competitors are too timid to lock engineering horns with Facebook's brain-trust, or because their legal departments (and their investors' legal departments) won't let them even try.
But the history of the Bell system after Hush-a-Phone and Carterfone is instructive: though the Bell system was the single biggest employer of telephone technicians in the world, and represented the best, safest, highest-paid opportunities for would-be telecom innovators, its rivals proceeded to make device after device after device that extended the capabilities of the phone network, without permission, overcoming the impediments that the network's operator put in their way.
Closer to home, remember that when Facebook wanted to get Power Ventures out of its network, its tool of choice wasn't technical countermeasures -- Facebook didn't (or couldn't) use API changes or firewall rules alone to keep Power Ventures off the service -- it was lawsuits. Perhaps that's because Facebook wanted to set an example for later challengers by winning a definitive legal battle, but it's very telling that the company that operated the network didn't (or couldn't!) just kick its rival out, and instead went through a lengthy, expensive and risky legal battle after simple IP blocking failed.
Facebook has a lot of talented engineers, but it doesn't have all of them.
Being a defender is hard
Facebook's problem with would-be future challengers is a familiar one: in security, it is easier to attack than to defend. To keep a potential competitor off its network, Facebook has to make no mistakes. To interoperate with Facebook without permission, a third party only has to find and exploit a single mistake.
Facebook labors under other constraints: like the Bell system fending off Hush-a-Phone, the things Facebook does to make life hard for competitors who are helping its users get more out of its service also make life harder for all of its users. For example, any tripwire that blocks logins by suspected bots will also block users whose behavior appears bot-like: the stricter the bot-detector is, the more actual humans it will catch.
Here again, Facebook's dizzying scale works against it: with billions of users, a one-in-a-million event is going to happen thousands of times every day, so Facebook has to accommodate a wide variety of use-cases, and some of those behaviors will be sufficiently weird to allow a rival's bot to slip through.
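A back-of-the-envelope calculation makes the point. The sketch below assumes roughly one login per user per day, using the 2.3 billion figure cited elsewhere in this piece; the false-positive rates are illustrative, not Facebook's real numbers.

```python
# Rough illustration of the scale problem: even a very accurate bot-detector
# wrongly flags a crowd of real people every day, and tightening it to catch
# more bots multiplies that crowd. All numbers are assumptions for arithmetic.
daily_logins = 2_300_000_000  # ~one login per user per day

for false_positive_rate in (1e-6, 1e-4, 1e-3):
    wrongly_blocked = daily_logins * false_positive_rate
    print(f"false-positive rate {false_positive_rate:.4%}: "
          f"~{wrongly_blocked:,.0f} real people flagged per day")
```

At a one-in-a-million false-positive rate, that's roughly 2,300 real people caught in the tripwire every day; at one-in-a-thousand, it's 2.3 million.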
Back to privacy
Facebook users (and even non-Facebook users) who want more privacy have a variety of options, none of them very good. Users can tweak Facebook's famously hard-to-understand privacy dashboard to lock down their accounts and bet that Facebook will honor their settings (this has not always been a good bet).
Everyone can use tracker-blockers, ad-blockers and script-blockers to prevent Facebook from tracking them when they're not on Facebook, which it does by watching how they interact with pages that have Facebook "Like" buttons and other beacons that let Facebook monitor activity elsewhere on the Internet. We're rightfully proud of our own tracker blocker, Privacy Badger, but it doesn't stop Facebook from tracking you if you have a Facebook account and you're using Facebook's service.
Facebook users can also watch what they say on Facebook, hoping that they won't slip up and put something compromising on the service that will come back to haunt them (though this isn't always easy to predict).
But even if people do all this, they're still exposing themselves to Facebook's scrutiny when they use Facebook, which monitors every click and mouse-movement they make on the service. What's more, anyone using a Facebook mobile app might be exposing themselves to incredibly intrusive data-gathering, including some surprisingly creepy and underhanded tactics.
If users could use a third-party service to exchange private messages with friends, or to participate in a group they're a member of, they could avoid much (but not all) of this surveillance.
Such a tool would allow someone to use Facebook while minimizing how they are used by Facebook. For people who want to leave Facebook but whose friends, colleagues or fellow travelers are not ready to join them, a service like this could let Facebook vegans get out of the Facebook pool while still leaving a toe in its waters.
What's more, it would let their friends follow them, by creating alternatives to Facebook where the people they want to talk to are still reachable. One user at a time, Facebook's rivals could siphon off whole communities. As Facebook's market power dwindled, so would the pressure that web publishers feel to embed Facebook trackers on their sites, so people who don't use Facebook would be less likely to be tracked as they use the Web.
Third-party tools could automate the process of encrypting conversations, allowing users to communicate in private without having to trust Facebook's promises about its security.
Finally, such a system would put real competitive pressure on Facebook. Today, Facebook's scandals do not trigger mass departures from the service, and when users do leave, they tend to end up on Instagram, which is also owned by Facebook.
But if there were a constellation of third-party services constantly carving escape hatches in Facebook's walled garden, Facebook would have to contend with the very real possibility that a scandal could result in the permanent departure of its users. Just that possibility would change the way Facebook makes decisions: product designers and other internal personnel who argue for treating users with respect on ethical grounds could also point to an instrumental benefit, because failing to be "good guys" could trigger yet another exodus from the platform.
Lower and upper bounds
It's clear that online services need rules about privacy and interoperability setting out how they should treat their users, including those users who want to use a competing service.
The danger is that these rules will become the ceiling on competition and privacy, rather than the floor. For users who have privacy needs -- and other needs -- beyond those the big platforms are willing to fulfill, it's important that we keep the door open to competitors (for-profit, nonprofit, hobbyist and individual) who are willing to meet those needs.
None of this means that we should have an online free-for-all. A rival of Facebook that bypassed its safeguards to raid user data should still get in trouble (just as Facebook should get in trouble for privacy violations, inadequate security, or other bad activity). Shouldering your way into Facebook in order to break the law is, and should remain, illegal, and the power of the courts and even law enforcement should remain a check on those activities.
But helping Facebook's own users, or the users of any big service, to configure their experience to make their lives better should be legal and encouraged even (and especially) if it provides a path for users to either diversify their social media experience or move away entirely from the big, concentrated services. Either way, we'd be on our way to a more pluralistic, decentralized, diverse Internet.
Filed Under: adversarial interoperability, competition, interoperability, privacy
Companies: facebook