from the time-for-a-gut-check dept
Every time I ask anyone associated with Facebook’s new Oversight Board whether the nominally independent, separately endowed tribunal is going to address misuse of private information, I get the same answer—that’s not the Board’s job. This means that the Oversight Board, in addition to having such an on-the-nose proper name, falls short in a more important way—its architects imagined that content issues can be tackled substantively without addressing privacy issues. Yet surely the scandals that have plagued Facebook and other tech companies in recent years have shown us that private-information issues and harmful-content problems have become intimately connected.
We
can’t turn a blind eye to this connection anymore. We need the
companies, and the governments of the world, and the communities of
users, and the technologists, and the advocates, to unite behind a
framework that emphasizes the deeper-than-ever connection between
privacy problems and free-speech problems.
What we need most now, as we grapple more fiercely with the public-policy questions arising from digital tools and internet platforms, is a unified field theory—or, more properly, a “Grand Unified Theory” (a.k.a. “GUT”)—of free expression and privacy.
But the road to that theory is going to be hard. From the beginning, three decades ago, when digital civil liberties emerged as a distinct set of issues that needed public-policy attention, the relationship between freedom of expression and personal privacy in the digital world has been a bit strained. Even the name of the first big conference to bring together all the policy people, technologists, government officials, hackers, and computer cops reflected the tension. The first Computers, Freedom and Privacy conference, held in Burlingame, California, in 1991, made sure that attendees knew that “Privacy” was not just a kind of “Freedom” but its own thing that deserved its own special attention.
The tensions emerged early on. It seemed self-evident to most of us back then that freedom of expression (and freedom of assembly and freedom of inquiry) had to have some limits—including limits on what any of us could do with private information about other people. But while it’s conceptually easy to define in fairly clear terms what counts as “freedom of expression,” the consensus about what counts as a privacy interest is murkier.
Because I started out as a free-speech guy, I liked the
law-school-endorsed framework of “privacy torts,” which
carved out some fairly narrow privacy exceptions to the broad
guarantees of expressive freedom. That “privacy torts”
setup meant that, at least when we talked about “invasion of
privacy,” I could say what counted as such an invasion and what
didn’t. Privacy in the American system was narrow and easy to
grasp.
But
this wasn’t the universal view in the 1990s, and it’s
certainly not the universal view in 2020. In the developed world,
including the democracies of the European Union, the
balance between privacy and free expression has been struck in a
different way. The presumptions in the EU favor greater protection of
personal information (and related interests like reputation) and
somewhat less protection of freedom of expression. Sure, the
international human-rights source texts like the Universal
Declaration of Human Rights (in Article 19) may protect “freedom
to hold opinions without interference and to seek, receive and impart
information and ideas through any media regardless of frontiers.”
But ranked above those informational rights (in both the Universal
Declaration of Human Rights and the International Covenant on Civil
and Political Rights) is the protection of private information,
correspondence, “honor,” and reputation. This different balance is reflected in European rules like the General Data
Protection Regulation.
The
emerging international balance, driven by the GDPR, has created new
tensions between freedom of expression and what we loosely call
“privacy.” (I use quotation marks because the GDPR
regulates not just the use of private information but also the use of
“personal” information that may not be private—like
old newspaper reports of government actions to recover
social-security debts. This was the issue in the
leading “right to be forgotten” case
prior to the GDPR.) Standing by itself, the emerging international consensus doesn’t provide clear rules for resolving those tensions.
Don’t
get me wrong: I think the idea of using international human rights
instruments as guidance for content approaches on social-media
platforms has its virtues. The advantage is that, in international forums and tribunals, it gives the companies as strong a defense as one might wish for allowing some (presumptively protected) speech to stay up in the face of criticism and for removing some (arguably illegal) speech. The disadvantages are harder to grapple with. Countries will differ on what kinds of speech are protected, but the internet does not quite honor borders the way some governments would like. (Thailand’s lèse-majesté laws are a good example.) In addition, some social-media platforms may want to
create environments that are more civil, or child-friendly, or
whatever, which will entail more content-moderation choices and
policies than human-rights frameworks would normally allow. Do we
want to say that Facebook or Google *can't* do this? That Twitter
should simply be forbidden to tag
a presidential tweet as “unsubstantiated”?
Some governments and other stakeholders would disapprove.
If
a human-rights framework doesn’t resolve the
free-speech/privacy tensions, what could? Ultimately, I believe that
the best remedial frameworks will involve multistakeholderism, but I
think they also need to begin with a shared (consensus) ethical
framework. I present the argument in condensed form here: “It’s Time to Reframe Our Relationship With Facebook.”
(I also published
a book last year
that presents this argument in greater depth.)
Can
a code of ethics be a GUT of free speech and privacy? I don’t
think it can, but I do think it can be the seed of one. But it has to
be bigger than a single company’s initiative—which more
or less is the best we can reasonably hope Facebook’s Oversight
Board (assuming it sets out ethical principles as a product of its
work on content cases) will ever be. I try not to be cynical about
Facebook, which has plenty of people working on these issues who
genuinely mean well, and who are willing to forgo short-term profits
to put better rules in place. While it's true at some sufficiently
high level that the companies privilege profits over public interest,
the fact is that once a company is market-dominant (as Facebook is),
it may well trade off short-term profits as part of a grand bargain
with governments and regulators. Facebook is rich enough to absorb
the costs of compliance with whatever regimes the democratic
governments come up with. (A more cynical read of Zuckerberg's public writings in the aftermath of the company’s various scandals is that he wants the governments to get the rules in place, and then FB will comply, as it can afford to do better than most other companies, and then FB's compliance will be a defense against subsequent criticism.)
But the main reason I think reform has to come in part at the industry level rather than at the company level is that company-level reforms, even if well-intended, tend to instantiate a public-policy
version of Wittgenstein's "private
language" problem.
Put simply, if the ethical rules are internal to a company, the
company can always change them. If they're external to a company,
then there's a shared ethical framework we can use to criticize a
company that transgresses the standards.
But
we can’t stop at the industry level either—we need
governments and users and other stakeholders to be able to step in
and say to the tech industries that, hey, your industry-wide
standards are still insufficient. Industry standards are more likely to be adequate and comprehensive when they’re buttressed both by public approval and by law. That’s what
happened with medical ethics and legal ethics—the frameworks
were crafted by the professions but then recognized as codes that
deserve to be integrated into our legal system. There’s an
international consensus that doctors have duties to patients (“First,
do no harm”) and that lawyers and other professionals have
“fiduciary duties” to their clients. I outline how
fiduciary approaches might address Big Tech’s consumer-trust
problems in a series of Techdirt articles that begins here.
The
“fiduciary” code-of-ethics approach to free-speech and
privacy problems for Big Tech is the only way I see of harmonizing
digital privacy and free-speech interests in a way that will leave
most stakeholders satisfied (as most stakeholders are now satisfied
with medical-ethics frameworks and with lawyers’ obligations to
protect and serve their clients). Because lawyers and doctors are generally obligated to tell their clients the truth (or, if for some reason they can’t, to end the relationship and refer the clients to other practitioners), and because they’re also obligated to “do no harm” (e.g., not to allow personal information to be used in a manipulative way or to violate clients’ privacy or autonomy), these professions already have a Grand Unified Theory that protects both speech and privacy in the context of clients’ relationships with practitioners.
Big
Tech has a better shot at resolving the contradictory demands on its
speech and privacy practices if it aspires to do the same, and if it
embraces an industry-wide code of ethics that is acceptable to users
(who deserve client protections even if they’re not paying for
the services in question). Ultimately, if the ethics code is backed
by legislators and written into the law, you have something much
closer to a Grand Unified Theory that harmonizes privacy, autonomy,
and freedom of expression.
I’m
a big booster of this GUT, and I’ve been making versions of
this argument before now. (Please don’t call it “Godwin-Unified
Theory”—having one “law”
named after me is enough.) But here in 2020 we need to do more than
argue about this approach—we need to convene and begin to
hammer out a consensus about a systematic, harmonized approach that
protects human needs for freedom of expression, for privacy, and for autonomy that’s reasonably free of the psychological-warfare tactics of informational manipulation. The issue is not just false content,
and it’s not just personal information—open
societies
have to incorporate a fairly high degree of tolerance for
unintentionally false expression and for non-malicious or
non-manipulative disclosure or use of personal information. But an
open society also needs to promote and support an ecosystem—a
public sphere of discourse—in which neither the manipulative
crafting of deceptive and destructive content nor the manipulative
targeting of it based on our personal data is the norm. That’s
an ecosystem that will require commitment from all stakeholders to
build—a GUT based not on gut instincts but on critical rationalism, colloquy, and consensus.
Filed Under: data protection, facebook oversight board, fiduciary duty, free speech, grand unified theory, greenhouse, multi-stakeholder, oversight board, privacy