Chris Riley’s Techdirt Profile

Posted on Techdirt - 5 March 2021 @ 10:44am

Moving the Web Beyond Third-Party Identifiers

from the privacy-and-cookies dept

(This piece overlaps a bit with Mike’s piece from yesterday, “How the Third-Party Cookie Crumbles”; Mike graciously agreed to run this one anyway, so that it can offer additional context for why Google’s news can be seen as a meaningful step forward for privacy.)

Privacy is a complex and critical issue shaping the future of our internet experience and the internet economy. This week there were two major developments: first, the State of Virginia passed a new data protection law, the Consumer Data Protection Act (CDPA), which has been compared to Europe’s General Data Protection Regulation; and second, Google announced that it would move away from all forms of third-party identifiers for Web advertising, rather than look to replace cookies with newer techniques like hashes of personally identifiable information (PII). The ink is still drying on the Virginia law and its effective date isn’t until 2023, meaning it may be preempted by federal law if this Congress moves a privacy bill forward. But Google’s action will change the market immediately. While the road ahead is long and there are many questions left to answer, moving the Web beyond cross-site tracking is a clear step forward.
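
To make concrete why a hashed email is still a cross-site identifier rather than a privacy fix, here is a minimal illustrative sketch, entirely my own example and not any vendor’s actual technique: two unrelated sites that hash the same login email end up with the same opaque token, which an ad-tech intermediary could use to join the user’s activity across both.

```typescript
// Illustrative sketch (hypothetical, not any vendor's real implementation):
// hashing normalizes PII into a stable token that is identical everywhere
// the same email is used, so it can still link activity across sites.
import { createHash } from "crypto";

function hashedIdentifier(email: string): string {
  // Normalize, then hash; the output is the same for a given email on any site.
  return createHash("sha256").update(email.trim().toLowerCase()).digest("hex");
}

const tokenOnNewsSite = hashedIdentifier("Jane.Doe@example.com");
const tokenOnShopSite = hashedIdentifier(" jane.doe@example.com ");
console.log(tokenOnNewsSite === tokenOnShopSite); // true: same person, linkable across sites
```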

We’re in the midst of a global conversation about what the future of the internet should look like, across many dimensions. In privacy, one huge part of that discussion, it’s not good enough in 2021 to say that user choice means “take it or leave it”; companies are expected to provide full-featured experiences with meaningful privacy options, including for advertising-based services. These heightened expectations—some set by law, some by the market—challenge existing assumptions around business models and revenue streams in a major way. As a result, the ecosystem must evolve away from its current state toward a future that offers a richer diversity of models and user experiences.

Google’s Privacy Sandbox, in particular, could be a big step forward along that evolutionary path. It’s plausible that a combination of subscription services, contextual advertising and more privacy-preserving techniques for learning can collectively match or even grow the pie for advertising revenue beyond what it is today, while providing users with compelling and meaningful choices that don’t involve cross-site tracking. But that can’t be determined until new services are built, offered and measured at scale.

And sometimes, to make change happen, band-aids need to be ripped off. By ending its support for third-party identifiers on the Web, Google is doing exactly that. Critics of the move will focus on the short-term impact on smaller advertisers who currently rely on third-party identifiers and tracking to target specific audiences, and who will need to adapt their methods and strategies significantly. That concern is understandable; level playing fields are important, and centralization in the advertising ecosystem is widely perceived to be a problem. However, the writing has been on the wall for third-party identifiers and cross-site tracking for a long time. Firefox blocked third-party cookies by default in September 2019, and Apple’s Safari followed suit in April 2020. (Firefox first made moves to block third-party cookies as far back as 2013, but it was, then, an idea ahead of its time.) And the problem was never the cookies per se; it was the tracking they powered.
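
For readers less steeped in the mechanics, here is a deliberately simplified sketch, with hypothetical names, of the tracking that third-party cookies power: the cookie is scoped to the tracker’s domain, so every publisher site that embeds the tracker sends back the same identifier, and the tracker can assemble a cross-site browsing profile.

```typescript
// Minimal sketch (a hypothetical tracker, not any real ad server) of
// cross-site tracking via a third-party cookie: the same cookie ID arrives
// from every site that embeds the tracker, so visits can be joined together.
type AdRequest = { trackerCookieId: string; referrerSite: string };

const profiles = new Map<string, string[]>(); // cookie ID -> sites seen

function handleAdRequest(req: AdRequest): void {
  const sites = profiles.get(req.trackerCookieId) ?? [];
  sites.push(req.referrerSite);
  profiles.set(req.trackerCookieId, sites);
}

// The same browser (same tracker cookie) loads ads on two unrelated publishers:
handleAdRequest({ trackerCookieId: "abc123", referrerSite: "news.example" });
handleAdRequest({ trackerCookieId: "abc123", referrerSite: "shop.example" });
console.log(profiles.get("abc123")); // ["news.example", "shop.example"]: a cross-site profile
```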

As for leveling the playing field for the future, working through standards bodies is an established approach for Web companies to share information and innovate collectively. Google’s engagement with the W3C should, hopefully, help open doors for other advertisers, limiting any reinforcement effects for Google’s position in Web advertising.

Further, limits on third-party tracking do not apply to first-party behavior, where a company tracks the pages on its own site that a user visits, for example when a shopping website remembers products that a user viewed in order to recommend other items of potential interest. While first-party relationships are important and offer clear positive value, it’s also not hard to imagine privacy-invasive acts that use solely first-party information. But Google’s moves must be seen against the backdrop of rapidly evolving privacy law, including the Virginia data protection law that just passed. From that perspective, they’re neither a delaying tactic nor a substitute for legislation, but rather a complementary piece, and in particular a way to catalyze much-needed new thinking and new business models for advertising.

I don’t think it’s possible for Google to put privacy advocates’ minds at ease concerning its first-party practices through voluntary action. To stop capitalizing fully on its visibility into activity within its network would leave so much money on the table that Google might be violating its fiduciary duty as a public company to serve its shareholders’ interests. And even if it cleared that hurdle and stopped anyway, what would prevent the company from reversing course later? The only sustainable answer for first-party privacy concerns is legislation. And that kind of legislation will struggle to be feasible until new techniques and new business models have been tested and built. That, more than anything, is the dilemma I think Google sees, and is working constructively to address.

Often, private sector privacy reforms are derided as merely scratching the surface of a deeper business model problem. While there’s much more to be done, moving beyond third-party identifiers goes deeper, and deserves broad attention and engagement to help preserve good balances going forward.


Posted on Techdirt - 24 September 2020 @ 12:17pm

The Need For A Robust Critical Community In Content Policy

from the it's-coming-one-way-or-the-other dept

Over this series of policy posts, I’m exploring the evolution of internet regulation from my perspective as an advocate for constructive reform. It is my goal in these posts to unpack the movement towards regulatory change and to offer some creative ideas that may help to catalyze further substantive discussion. In that vein, this post focuses on the need for "critical community" in content policy -- a loose network of civil society organizations, industry professionals, and policymakers with subject matter expertise and independence to opine on the policies and practices of platforms that serve as intermediaries for user communications and content online. And to feed and vitalize that community, we need better and more consistent transparency into those policies and practices, particularly intentional harm mitigation efforts.

The techlash dynamic spans both political parties in the United States, as well as a broad range of political viewpoints globally. One reason for the robustness of the response is that so much of the internet ecosystem feels like a black box, which undermines trust and agency. One of my persistent refrains in the context of artificial intelligence, where the “black box” feeling is particularly strong, is that trust can’t be restored by any law or improved corporate practice operating in isolation. (And certainly, the answer isn’t just "

I’m using the term "critical community" as I see it used in community psychology and social justice contexts. For example, this talk by Professor Silvia Bettez offers a specific definition of critical community as "interconnected, porously bordered, shifting webs of people who through dialogue, active listening, and critical question posing, assist each other in critically thinking through issues of power, oppression, and privilege." While the issues in the field of internet policy are different, the themes of power, oppression, and privilege strike me as resonant in the context of social media platform practices.

I wrote an early version of this community-centric theory of change in a piece last year focused specifically on recommendation engines. In that piece, I looked at the world of privacy, where, over the past few decades, a seed of transparency offered voluntarily in the form of privacy policies helped to fuel the growth of a professional community of privacy specialists who are now able to provide meaningful feedback to companies, both positive and critical. We have a rich ecosystem in privacy with institutions ranging from IAPP to the Future of Privacy Forum to EPIC.

The tech industry has a nascent ecosystem built specifically around content moderation, which I tend to think of as a (large) subset of content policy -- policies regarding the permissible use of a platform and actions taken to enforce those policies for specific users or pieces of content. (The biggest part of content policy not included within my framing of content moderation is the work of recommendation engines to filter information and present users with an intentional experience.) The Santa Clara Principles and extensive academic research have helped to advance norms around moderation. The new Trust & Safety Professionals Association could evolve into an IAPP or FPF equivalent. Content moderation was the second Techdirt Greenhouse topic after privacy, reflecting the diversity of voices in this space. And plenty of interesting work is being done beyond the moderation space as well, such as Mozilla’s "YouTube Regrets" campaign, which illustrates online harm arising from recommendation engines steering permissible and legal content to poorly chosen audiences.

As the critical community around content policy grows, regulation races ahead. The Digital Services Act consultation submissions closed this month; here’s my former team’s post about that. The regulatory posture of the European Commission has advanced a great deal over the past couple of years, shifting toward a paradigm of accountability and a focus on processes and procedures. The DSA will prove to be a turning point on a global scale, just as the GDPR was for privacy. Going forward, platforms should expect to be held accountable. Just as it’s increasingly untenable to assume that an internet company can collect data and monetize it at will, so, too, will it be untenable to dismiss harms online through tropes like “more speech is a solution to bad speech.” While the First Amendment court challenges in the U.S. legal context will be serious and difficult to navigate, the normative reality will more and more be set: tech companies must confront and respond to the real harms of hate speech, as Brandi Collins-Dexter’s Greenhouse post so well illustrates.

The DSA has a few years left in its process. The European Commission must adopt a draft law, the Parliament will table hundreds of amendments and put together a final package for a vote, the Council will produce its own version, trilogue will hash out a single document, and then, finally, Parliament will vote again -- a vote that might not succeed, restarting some portions of the process. Yet, even at this early stage, it seems virtually certain that the DSA legislative process will produce a strong set of principles-based requirements without specific guidance for implementing practices. To many, such an outcome seems vague and hard to work with. But it’s preferable in many ways to specifying technical or business practices in law, which can easily result in outdated and insufficient guidance for addressing evolving harms, not to mention restrictions that are easier for large companies to comply with, at least facially, than for smaller firms.

So, there’s a gap here. It’s the same gap seen in the PACT Act. As both a practical consideration in the context of American constitutional law and a reflection of the current state of collective understanding of policy best practices, the PACT Act doesn’t specify exactly what practices need to be adopted. Rather, it requires transparency and accountability with respect to companies’ self-asserted practices. The internet polity needs something broader than a statute to determine what “good” means in the context of intermediary management of user-generated content.

Ultimately, that gap will be filled by the critical community in content policy, working collectively to develop norms and provide answers to questions that often seem impossible to answer. Trust will be strongest, and the norms and decisions that emerge the most robust and sustainable, if that community is diverse, well resourced, and equipped with broad and deep expertise.

The impact of critical community on platform behavior will depend on two factors: first, the receptivity of powerful tech companies to outside pressure, and second, sufficient transparency into platform practices to enable timely and informed substantive criticism. Neither of these should be assumed, particularly with respect to harm occurring outside the United States. Two Techdirt Greenhouse pieces (by Aye Min Thant and Michael Karanicolas) and the recent BuzzFeed Facebook exposé illustrate the limitations of both transparency and influence in shaping international platform practices.

I expect legal developments to help strengthen both of these. Transparency is a key component of the developing frameworks for both the DSA and thoughtful Section 230 reform efforts like the PACT Act. While it may seem like low-hanging fruit, the ability of transparency to support critical community is of great long-term strategic importance. And legally empowering a governmental agency to adopt and enforce rules going forward will, hopefully, help create incentives for companies to take outside input very seriously (the popular metaphor here is the “sword of Damocles”).

We built an effective critical community around privacy long ago. We’ve been building it on cybersecurity for 20+ years. We built it in telecom around net neutrality over the past ~15 years. The pieces of a critical community for content policy are there, and what seems most needed right now to complete the puzzle is regulatory ambition driving greater transparency by platforms along with sufficient funding for coordinated, constructive, and sustained engagement.


Posted on Techdirt - 10 September 2020 @ 10:44am

Could A Narrow Reform Of Section 230 Enable Platform Interoperability?

from the one-approach dept

Perhaps the most de rigueur issue in tech policy in 2020 is antitrust. The European Union made market power a significant component of its Digital Services Act consultation, and the United Kingdom released a massive final report detailing competition challenges in digital advertising, search, and social media. In the U.S., the House of Representatives held an historic (virtual) hearing with the CEOs of Amazon, Apple, Facebook, and Google (Alphabet) on the same panel. As soon as the end of this month, the Department of Justice is expected to file a “case of the century”-scale antitrust lawsuit against Google. One competition policy issue that I’ve written about extensively is interoperability, and, while we’ve already seen significant proposals to promote interoperability, notably the 2019 ACCESS Act, I want to throw another idea into the hopper: I think Congress should consider amending Section 230 of the Communications Act to condition its immunity for large online intermediaries on the provision of an open, raw feed for independent downstream presentation.

I know, I know. I can almost feel your fingers hovering over that big blue “Tweet” button or the “Leave a Comment” link -- but please, hear me out first.

For those not already aware of (if not completely sick of) the active discussions around it, Section 230, originally passed as part of the Communications Decency Act, is an immunity provision within U.S. law intended to encourage internet services to engage in beneficial content moderation without fearing liability as a consequence of such action. It’s famously only 26 words long in its central part, so I’ll paste that key text in full: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

I’ll attempt to summarize the political context. Section 230 has come under intense, bipartisan criticism over the past couple of years as a locus of animosity related to a diverse range of concerns with the practices of a few large tech companies in particular. Some argue that the choices made by platform operators are biased against conservatives; others argue that the platforms aren’t responsible enough and aren’t held sufficiently accountable. The support for amending Section 230 is substantial, although it is far from universal. The current President has issued an executive order seeking to catalyze change in the law, and the Democratic nominee has in the past bluntly called for it to be revoked. Members of Congress have introduced several bills that touch Section 230 (after the passage of one such bill, FOSTA-SESTA, in 2018), such as the EARN IT Act, which would push internet companies to do more to respond to online child exploitation, to the point of undermining secure encryption. A perhaps more on-point proposal is the PACT Act, which focuses on specific platform content practices; I’ve called it the best starting point for Section 230 reform discussions.

Why is this one short section of law so frequently used as a political punching bag? The attention goes beyond its hard-law significance, revealing a deeper resonance in the modern-day notion of “publishing”. I believe this law in particular is amplified because the centralization and siloing of our internet experience has produced a widespread feeling (or reality) of a lack of meaningful user agency. By definition, social media is a business of taking human input (user-generated content) and packaging it to produce output for humans, which makes the question of human agency doubly poignant. The user agency gap spills over from the realm of competition, making it hard to evaluate content liability and privacy harms as entirely independent issues. In so many ways, the internet ecosystem is built on the idea of consumer mobility and freedom; also in so very many ways, that idea is bankrupt today.

Yet debating whether online intermediaries for user content are “platforms” or “publishers” is a distraction. A more meaningful articulation of the underlying problem, I believe, is that we end users are unable to sufficiently customize the way content is presented to us, because we are locked into a single experience.

Services like Facebook and YouTube operate powerful recommendation engines designed to sift through vast amounts of potentially desirable content and present the user with what they will most value. These recommendations are based on individual contextual factors, such as what the user has been watching, and on broader signals of desirability, such as engagement from other users. As many critics allege, the underlying business model of these companies benefits from keeping users as engaged as possible, spending as much time on the platform as possible. That means recommending content that gets high engagement, even though human behavior doesn’t equate positive social value with high engagement (that’s the understatement of the day, there!).
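
To illustrate that incentive in the simplest possible terms, here is a toy ranking function, my own sketch with made-up signal names and weights rather than any platform’s actual code: when predicted engagement dominates the score, high-engagement content rises to the top regardless of its social value.

```typescript
// A deliberately simplified sketch (hypothetical signals and weights, not any
// platform's real ranking system) of engagement-driven recommendation: the
// engine surfaces whatever it predicts people will click and watch.
interface VideoCandidate {
  id: string;
  predictedEngagement: number; // e.g. modeled watch time or click probability
  similarityToHistory: number; // contextual signal: resembles what the user watched
}

function score(v: VideoCandidate): number {
  // Hypothetical weights; the business incentive pushes the engagement weight up.
  return 0.8 * v.predictedEngagement + 0.2 * v.similarityToHistory;
}

function rankForUser(candidates: VideoCandidate[]): VideoCandidate[] {
  // Highest score first; nothing in the score measures social value.
  return [...candidates].sort((a, b) => score(b) - score(a));
}
```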

One of the interesting technical questions is how to design such systems to make them “better” from a social perspective. It’s the subject of academic research, in addition to ample industry investment. I’ve given YouTube credit in the past for offering some amount of transparency into changes it’s making (and the effects of those changes) to improve the social value of its recommendations, although I believe making that transparency more collaborative and systematic would help immensely. (I plan to expand on that in my next post!)

Recommendation engines remain by and large black boxes to the outside world, including to the users who receive their output. No matter how much credit you give individual companies for their efforts to properly balance their business model demands, optimal user experience, and social value, there are fundamental limits on users’ ability to customize, or replace, the recommendation algorithm that mediates the lion’s share of their interaction with the social network and the user-generated content it hosts. As things stand, we also can’t facilitate innovation or experimentation with presentation algorithms, due to the lack of effective interoperability.

And that’s why Section 230 gets so much attention -- because we don’t have the freedom to experiment at scale with things like Ethan Zuckerman’s Gobo.social project and thus improve the quality of, and better control, our social media experiences. Yes, there are filters and settings that users can change to customize their experience to some degree, likely far more than most people know. Yet, by design, these settings do not provide enough control to affect the core functioning of the recommendation engine itself.

Thus, many users perceive the platforms to be packaging up third-party, user-generated content and making conscious choices about how to present it to us -- choices that our limited downstream controls are insufficient to manage. That’s why it feels to some like the platforms are “publishing,” and doing a bad job of it at that. Despite massive investments by the service operators, it’s not hard to find evidence of poor outcomes of recommendations; see, e.g., YouTube recommending videos about an upcoming civil war. And there are also occasional news stories of willful actions making things worse, adding more fuel to the fire.

So let’s create that space for empowerment by conditioning the Section 230 immunity on platforms providing more raw, open access to their content experience, so users can better control how to “publish” it to themselves by using an alternative recommendation engine. Here’s how to scale and design such an openness requirement properly (a rough, hypothetical sketch of what such an interface might look like follows the list):

  • Apply an openness requirement only where the problems described above apply, which is for services that primarily host and present social, user generated content.

  • Limit an openness requirement to larger platforms, for example borrowing the 100 million MAUs (Monthly Active Users) metric from the Senate’s ACCESS Act.

  • Design the requirement to be variable across different services, and to engage platforms in the process. The kinds of APIs that Facebook and YouTube would set up to make this concept successful would be quite different.

  • Allow platforms to adopt reasonable security and privacy access controls for their provisioned APIs or other interoperability interfaces.

  • Preserve platform takedowns of content and accounts upstream of any provisioned APIs or other interoperability interfaces, to take advantage of scale in responding to Coordinated Inauthentic Behavior (CIB).

  • Encourage platform providers to allow small amounts of API/interoperability interface access for free, while permitting them to charge fair, reasonable, and nondiscriminatory rates to third parties operating at larger scale.
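
To make the proposal a bit more tangible, here is a rough sketch of what an "open, raw feed" interface could look like. The names, data shape, and functions are purely illustrative assumptions on my part, not drawn from any bill text or any platform’s API.

```typescript
// Hypothetical sketch of an "open, raw feed" interface for large platforms.
// All identifiers here are my own illustration, not a real or proposed API.
interface RawFeedItem {
  itemId: string;
  authorId: string;
  postedAt: string;                        // ISO 8601 timestamp
  content: string;                         // or a reference to hosted media
  moderationState: "visible" | "removed";  // takedowns applied upstream, per the list above
}

interface OpenFeedProvider {
  // Paged access to the unranked feed of accounts/sources the user follows,
  // gated by whatever security and privacy access controls the platform adopts.
  fetchRawFeed(userToken: string, cursor?: string): Promise<{
    items: RawFeedItem[];
    nextCursor?: string;
  }>;
}

// A third-party recommendation engine consumes the raw feed and applies its
// own ranking, so the user, not the platform, chooses the presentation layer.
async function renderWithAlternativeRanker(
  provider: OpenFeedProvider,
  userToken: string,
  rank: (items: RawFeedItem[]) => RawFeedItem[]
): Promise<RawFeedItem[]> {
  const { items } = await provider.fetchRawFeed(userToken);
  return rank(items);
}
```

The design point the sketch tries to capture is the division of labor suggested above: takedowns and access controls happen upstream, inside the platform, while ranking and presentation happen downstream, in code the user chooses.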

Providing this kind of openness downstream would create opportunities for innovation and experimentation with recommendation engines at a scale never before seen. This is not just an evolutionary step forward in what we think of as internet infrastructure; it’s also a roadmap to sustainable alternative business models for the internet ecosystem. Even assuming that many users would stick with the platform’s default experience and the business model underlying it, those who choose to change would gain a true business model choice and a deep, meaningful user experience choice at the same time.

I recognize that this is a thumbnail sketch of a very complex idea, with much more analysis needed. I publish these thoughts to help illustrate the relationship between agita over Section 230 and the concentrated tech ecosystem. The centralized power of a few companies and their recommendation engines doesn’t provide sufficient empowerment and interoperability, thus limiting the perception of meaningful agency and choice. Turning this open feed concept into a legal and technical requirement is not impossible, but I recognize it would carry risk. In an ideal world, we’d see the desired outcome -- meaningful downstream interoperability, including user substitutability of recommendation engines -- offered voluntarily. That would avoid the costs and complexities of regulation, put platforms in a position to strike the right balance, and release a political pressure relief valve to keep the central protections of Section 230 intact. Unfortunately, present-day market and political realities suggest that may not occur without substantial regulatory pressure.


Posted on Techdirt - 3 September 2020 @ 12:04pm

It's Time To Regulate The Internet... But Thoughtfully

from the easier-said-than-done dept

The internet policy world is headed for change, and the change that’s coming isn’t just a matter of more regulations but, rather, involves an evolution in how we think about communications technologies. The most successful businesses operating at what we have, up until now, called the internet’s “edge” are going to be treated like infrastructure more and more. What’s ahead is not exactly the “break them up” plan of the 2019 Presidential campaign of Senator Warren, but something a bit different. It’s a positive vision of government intervention to generate an evolution in our communications infrastructure to ensure a level playing field for competition; meaningful choices for end users; and responsibility, transparency, and accountability for the companies that provide economically and socially valuable platforms and services.

We’ve seen evolutions in our communications infrastructure a few times before: first, when the telephone network became infrastructure for the internet protocol stack; again when the internet protocol stack became infrastructure for the World Wide Web; and then again when the Web became infrastructure on which key “edge” services like search and social media were built. Now, these edge services themselves are becoming infrastructure. And as a consequence, they will increasingly be regulated.

Throughout its history, the “edge” of the internet sector has, for the most part, enjoyed a light regulatory yoke, particularly in the United States. Many treated the lack of oversight as a matter of design, or even as necessarily inherent, given the differences between the timetables and processes of technology innovation and legislation. From John Perry Barlow’s infamous “Declaration of the Independence of Cyberspace” to Frank Easterbrook’s “Cyberspace and the Law of the Horse” to Larry Lessig’s “Code is law,” an entire generation of thinkers was inculcated in the belief that the internet was too complex to regulate directly (or too critical, too fragile, or, well, too “something”).

We didn’t need regulatory change to catalyze the prior iterations of the internet’s evolution. The phone network was already regulated as a common carrier service, creating ample opportunity for edge innovation. And the IP stack and the Web were built as fully open standards, structurally designed to prevent the emergence of vertical monopolies and gatekeeping behavior. In contrast, from the get-go, today’s “edge” services have been dominated by private sector companies, a formula that has arguably helped contribute to their steady innovation and growth. At the same time, limited government intervention results in limited opportunity to address the diverse harms facing internet users and competing businesses.

As the cover of the November 17, 2019 New York Times Magazine so well illustrated, the internet of today is no utopia. I won’t try to summarize the challenges, but I’ll direct anyone interested in unpacking them to my former employer Mozilla’s Internet Health Report as a starting point. We are due for another evolution of the internet, but in contrast to prior iterations, the market isn’t set up for change on its own -- we need government action to force the issue.

I’m not alone in observing that the internet regulatory tide has turned. Governments are no longer bystanders. We are witnessing an inexorable rise in intervention. This is scary to many people: private companies operating in the sector, worried about new costs and changes; academics and think tanks who celebrate the anti-regulatory approach we’ve had thus far; and human rights advocates concerned about future risks to speech and other freedoms. The internet has been an incredible socioeconomic engine, and continuing the benefits it brings requires preserving its fundamental good characteristics.

While new laws are not without risk of harm, further regulatory change today seems both necessary and inevitable. The open question is whether the effect will be, on balance, good or bad. If these imminent changes are done well, the power of government oversight will be harnessed to increase accountability and meaningful transparency, promote openness and interoperability, and center the future on user agency and empowerment to help make markets work to their fullest. If on the other hand these changes are done poorly, we risk, among other undesirable outcomes, reinforcing the status quo of centralized power, barriers to entry and growth, and business models that don’t empower users but instead subject them to ever-worsening garbage.

I’m an optimist; I think we’re on a course to make the internet better through good government intervention. From my perspective, we can already see the framework of the future comprehensive internet regulation that is to come, for better or for worse. Think of it as the Internet Communications Act of 2024, to use a U.S. naming convention; or the General Internet Sector Regulation, following the E.U. style. Advocates for a better internet can either sit on the sidelines as these developments continue, decrying the (legitimate) risks and concerns; or they can get into the mix, put forward some good ideas, and build strategies and coalitions to help shape the outcome so that it best serves the public’s interest.

Where are the key policy fights taking place? Geographically, over the past few years, we’ve seen the center of internet policy shift from Washington D.C. to Brussels, and that’s where we can see the future emerging most clearly today. The GDPR illustrates this shift, as despite its imperfections, it established a new paradigm for data protection that has been echoed in Kenya and California, with more to come.

This isn’t just a story about Europe, or about privacy, though. Competition reform is racing forward with major investigations and reports around the world; the United Kingdom has done perhaps the most work here, with its eye-opening Final Report of July 2020 (all 437 pages and 27 appendices of it!). Many countries are undertaking antitrust investigations of specific companies or reevaluating the modern day fitness of their competition legal frameworks.

Meanwhile, social media companies, and more broadly internet companies acting as intermediaries for user communications online, have come under fire all around the world, with Pakistan and India making some of the most aggressive moves so far. The European Union is advancing its own comprehensive regulatory vision for online content through the Digital Services Act, just as the United States is reevaluating its historical intermediary liability safe harbor, Section 230.

In the United States, we’re seeing a moment that bears many similarities to the late 1960s in the buildup to the Clean Air Act of 1970. That law had powerful bipartisan support, and commensurate industry opposition. Just as with those early climate political wars, advocates for reform are facing the weaponization of uncertainty as a tactic to resist government intervention, with the abuse of data, science, and metrics to advocate for an outcome of inaction. As with climate change, inaction to address the harms presented by today’s internet ecosystem is itself a policy choice, and it’s the wrong one for the future health of the internet. I believe change will come, though, and as with the Clean Air Act, eventually we’ll look back and appreciate the sea change we made by intervening at a critical moment. (Sorry, that pun was mostly inadvertent -- and, in fact, a bit unfortunate given the current state of play of climate politics and the climate crisis… but that’s a piece for another author, another day.)

Considering that the Clean Air Act established the Environmental Protection Agency, perhaps in the U.S. we need what Harold Feld and his colleagues at Public Knowledge have been calling for in the Digital Platform Act, establishing something akin to an Internet Protection Agency. Or perhaps, as I’ve supported in the past, we need a revamped Federal Trade Commission with greater authority, building on that agency’s success at integrating technologists into its consumer protection work. Increasingly, I’m inclined towards the idea that what we need is an expanded Federal Communications Commission, given that agency’s relatively broad authority (no matter how circumscribed by the current leadership) and the nature of this evolution as advancing what feels like modern-day communications infrastructure. The United Kingdom has decided to go in this direction for content regulation, for example, appointing Ofcom to manage future “duty of care” obligations for online platforms. The technologies and businesses are very different between the traditional telecom sector and the internet ecosystem, though, and substantial evolution of the regulatory model would be necessary.

Regardless of where you situate the future policy making and enforcement function within the U.S. government, we’re still at the normative development stage on these policy issues. And frankly, the internet policy world needs some new ideas for what comes next. So, over the next few posts in this series, I’m going to share a few fresh thoughts that I’ve been mulling over. Stay tuned!


