Our Bipolar Free-Speech Disorder And How To Fix It (Part 2)

from the the-free-speech-triangle dept

In Part 1 of this series, I gave attention to law professor Jack Balkin's model of "free speech as a triangle," where each vertex of the triangle represents a group of stakeholders. The first vertex is government and intergovernmental actors. The second is internet platform and infrastructure providers, and the third is users themselves. This "triangle" model of speech actors is useful because it enables us to characterize the relationships among each set of actors, thereby illuminating how the nature of regulation of speech has changed and become more complicated than it used to be.

Take a look again at Balkin's Figure 1.

Visualizing all the players in the free-speech regulation landscape makes clear that a "free-speech triangle" at least captures more complexity than the usual speakers-against-the-government, speakers-against-the-companies, or companies-against-the-government models. Even so, our constitutional law and legal traditions predispose us to think of these questions in binary rather than, uh, "trinary" terms. We've been thinking this way for centuries, and it's a hard habit to shake. But shaking the binary habit is a necessity if we're going to get the free-speech ecosystem right in this century.

To do this we first have to look at how we typically reduce these "trinary" models to the binary models we're more used to dealing with. With three classes of actors, there are three possible "dyads" of relationships: user–platform, government–platform, and user–government.

(a) Dyad 1: User complaints against platforms (censorship and data gathering)

Users' complaints about platforms may ignore or obscure the effects of government demands on platforms and their content-moderation policies.

Typically, public controversies around internet freedom of expression are framed, by news coverage and analysis as well as by stakeholders themselves, as binary oppositions. If there is a conflict over content between (for example) Facebook and a user, especially if it occurs more than once, that user may conclude that her content was removed for fundamentally political reasons. This perception may be exacerbated if the removal was framed as a violation of the platform's terms of service. A user subject to such censorship may believe that her content is no more objectionable than that of users who weren't censored, or that her content is being censored while content that is just as heated, but representing a different political point of view, isn't being censored. Naturally enough, this outcome seems unfair, and a user may infer that the platform as a whole is politically biased against those of her political beliefs. It should be noted that complaints about politically motivated censorship apparently come from most and perhaps all sectors.

A second complaint from users may derive from data collection by a platform. This may not directly affect the content of a user's speech, but it may affect the kind of content she encounters, which, when driven by algorithms aimed at increasing her engagement on the platform, may serve not only to urge her participation in more and more commercial transactions, but also to "radicalize" her, anger her, or otherwise disturb her. Even if an individual may judge herself more or less immune from algorithmically driven urges to view more and more radical and radicalizing content, she may be disturbed by the radicalizing effects that such content may be having on her culture generally. (See, e.g., Tufekci, Zeynep, "YouTube, the Great Radicalizer.") And she may be disturbed at how an apparently more radicalized culture around her interacts with her in more disturbing ways.

Users may be concerned both about censorship of their own content (censorship that may seem unjustified) and platforms' use of data, which may seem to be designed to manipulate them or else manipulate other people. In response, users (and others) may demand that platforms track bad speakers or retain data about who bad speakers are (e.g., to prevent bad speakers from abandoning "burned" user accounts and returning with new accounts to create the same problems) as well as about what speakers say (so as to police bad speech more). But a short-term pursuit of pressuring platforms to censor more or differently, or to gather less data (about users themselves) or to gather more data (about how users' data are being used), has two likely, predictable outcomes. To the extent the companies respond to these pressures, governments may leverage platforms' responses to user complaints in ways that make it easier for government to pressure platforms for more user content control (not always with the same concerns that individual users have), and governments may likewise press platforms to provide user data (because governments like to exercise the "third-party" doctrine to get access to data that users have "voluntarily" left behind on internet companies' and platform providers' services).

(b) Dyad 2: Governments' demands on platforms (content and data)

Government efforts to impose new moderation obligations on platforms, even in response to user complaints, may result in versions of the platforms that users value less, as well as more pressure on government to intervene further.

In the United States, internet platform companies (like many other entities, including ordinary blog-hosting servers and arguably bloggers themselves) will find that their First Amendment rights are buttressed and extended by Section 230 of the Communications Decency Act, which generally prohibits content-based liability for those who reproduce on the internet content that is originated by others. Although a full discussion of the breadth of, and the exceptions to, Section 230—which was enacted as part of the omnibus federal Telecommunications Act reform in 1996—is beyond the scope of this particular paper, it is important to underscore that Section 230 extends the scope of protection for "intermediaries" more broadly than First Amendment case law alone might have done, if we are to judge by relevant digital-platform cases prior to 1996. But the embryonic case law in those early years of the digital revolution seemed to be moving in a direction that would have resulted in at least some First Amendment protections for platforms consistent with principles that protect traditional bookstores from legal liability for the content of particular books. One of the earliest prominent cases concerning online computer services, Cubby v. CompuServe (1991), drew heavily on a 1959 Supreme Court case, Smith v. California, which established that bookstores and newsstands were properly understood to deserve First Amendment protections based on their importance to the distribution of First Amendment-protected content.

Section 230's broad, bright-line protections (taken together with the copyright-specific protections for internet platforms created by the Digital Millennium Copyright Act in 1998) are widely interpreted by legal analysts and commentators as having created the legal framework that gave rise to internet-company success stories like Google, Facebook, and Twitter. These companies, as well as a raft of smaller, successful enterprises like Wikipedia and Reddit, originated in the United States and were protected in their infancy by Section 230. Even critics of the platforms—and there are many—typically attribute the success of these enterprises to the scope of Section 230. So it's no great surprise to discover that many and perhaps most critics of these companies (who may be government actors or private individuals) have become critics of Section 230 and want to repeal or amend it.

In particular, government entities in the United States, both at the federal level and at the state level, have sought to impose greater obligations on internet platforms not merely to remove content that is purportedly illegal, but also to prevent that content from being broadcast by a platform in the first place. The notice-and-takedown model of the Digital Millennium Copyright Act of 1998, which lends itself to automated enforcement and remedies to a higher degree than non-copyright-related content complaints do, is frequently suggested by government stakeholders as a model for how platforms ought to respond to complaints about other types of purportedly illegal content, including user-generated content. That copyright enforcement, as distinct from enforcement of other communications-related crimes or private causes of action, is comparatively much simpler than most other remedies in communications law is typically passed over by those who are unsympathetic to today's social-media landscape.
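
To make the contrast concrete, here is a minimal, purely hypothetical sketch (the catalog, function names, and matching scheme below are invented for illustration, not taken from any actual platform or from the DMCA itself): a copyright complaint can often be resolved by matching an upload against a catalog of known works, while most other kinds of complaints turn on context that no lookup can capture.

    import hashlib

    # Hypothetical illustration only. Real systems use perceptual fingerprints
    # that survive re-encoding, but the structure is the same: compare an
    # upload against a catalog of works supplied by rightsholders.
    KNOWN_WORK_FINGERPRINTS = {
        hashlib.sha256(b"bytes of a registered work").hexdigest(),
    }

    def copyright_match(upload: bytes) -> bool:
        # A catalog lookup: cheap, automatable, and easy to scale.
        return hashlib.sha256(upload).hexdigest() in KNOWN_WORK_FINGERPRINTS

    def is_defamatory(text: str) -> bool:
        # No equivalent lookup exists: truth, opinion, intent, and context
        # all matter, which is why non-copyright remedies resist automation.
        raise NotImplementedError("requires contextual, often human, judgment")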

Although I'm focusing here primarily on U.S. government entities, this tendency is also evident among the governments of many other countries, including many countries that rank as "free" or "partly free" in Freedom House's annual world freedom report. It may reasonably be asserted that the impulse of governments to offload onto platforms the work of screening for illegal (or legal but disturbing) content is international. The European Union, for example, is actively exploring regulatory schemes that implicitly or explicitly impose content-policing norms on platform companies and that impose quick and large penalties if the platforms fail to comply. American platforms, which operate internationally, must abide by these systems at least with regard to their content delivery within EU jurisdictions, as well as (some European regulators have argued) anywhere else in the world.

Added to governments' impulse to impose content restrictions and policing obligations on platforms is governments' hunger for the data that platforms collect. Not every aspect of the data that platforms like Google and Facebook and Twitter collect on users is publicly known, nor have the algorithms (decision-making processes and criteria implemented by computers) that the platforms use to decide what content may need monitoring, or what content users might prefer, been generally published. The reasons some aspects of the platforms' algorithmic decision-making are not public may generally be reduced to two primary arguments. First, the platforms' particular choices about algorithmically selecting and serving content, based on user data, may reasonably be classed as trade secrets, so that if they were made utterly public a competitor could free-ride on the platforms' (former) trade secrets to develop competing products. Second, if platform algorithms are made wholly public, it becomes easier for anyone—ranging from commercial interests to mischievous hackers and state actors—to "game" content so that it is served to more users by the platform algorithms.
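
As a purely hypothetical sketch of what "algorithmically selecting and serving content, based on user data" can look like, and of why publishing the exact details would make gaming easier, consider a toy ranking function. The feature names and weights below are invented for illustration and are not drawn from any real platform.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        topic_affinity: float   # match with the user's inferred interests, 0..1
        predicted_dwell: float  # predicted seconds the user will spend on the item
        arousal: float          # predicted emotional intensity, 0..1

    def engagement_score(c: Candidate) -> float:
        # Invented weights. If these were public, anyone could craft content
        # to maximize the score (the "gaming" concern); they are also exactly
        # the kind of detail a platform would claim as a trade secret.
        return 0.5 * c.topic_affinity + 0.3 * (c.predicted_dwell / 60.0) + 0.2 * c.arousal

    candidates = [
        Candidate(topic_affinity=0.9, predicted_dwell=45.0, arousal=0.2),  # relevant, calm
        Candidate(topic_affinity=0.6, predicted_dwell=50.0, arousal=0.9),  # less relevant, provocative
    ]
    feed = sorted(candidates, key=engagement_score, reverse=True)

Even in this toy example the provocative item edges out the more relevant, calmer one, which is the dynamic behind the "radicalizer" critique mentioned above.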

Governments, recognizing that protections for platforms have made it easier for the platforms to survive and thrive, may wish to modify the protections they have granted, or to impose further content-moderation obligations on platforms as a condition of statutory protections. But even AI-assisted moderation measures will necessarily be either post-hoc (which means that lots of objectionable content will be public before the platform curates it) or pre-hoc (which means that platforms will become gatekeepers of public participation, shoehorning users into a traditional publishing model or an online-forum model as constrained by top editors as the early version of the joint Sears-IBM service Prodigy was).
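
The structural difference between those two options can be sketched in a few lines of hypothetical code; the one-line classifier here is only a stand-in for whatever automated or AI-assisted filter a platform might actually use.

    published: list[str] = []
    review_queue: list[str] = []

    def looks_objectionable(text: str) -> bool:
        # Stand-in for an automated (possibly AI-assisted) classifier.
        return "forbidden phrase" in text.lower()

    def post_hoc(text: str) -> None:
        # Post-hoc moderation: publish first, curate later, so objectionable
        # content is publicly visible until someone reviews and removes it.
        published.append(text)
        if looks_objectionable(text):
            review_queue.append(text)

    def pre_hoc(text: str) -> bool:
        # Pre-hoc moderation: the platform acts as a gatekeeper, and nothing
        # appears until it passes the filter (the Prodigy-style model).
        if looks_objectionable(text):
            return False
        published.append(text)
        return True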

(c) Dyad 3: People (and traditional press) versus government.

New, frequently market-dominant internet platforms for speakers create new government temptations and capabilities to (i) surveil online speech, (ii) leverage platforms to suppress dissident or unpopular speech or deplatform speakers, and/or (iii) employ or compel platforms to manipulate public opinion (or to regulate or suppress manipulation).

It's trivially demonstrable that some great percentage of complaints about censorship in open societies is grounded in individual speakers' or traditional publishers' complaints that government is acting to suppress certain kinds of speech. Frequently the speech in question is political speech, but sometimes it is speech of other kinds (e.g., allegedly defamatory, threatening, fraudulent, or obscene speech). This dyad is, for the most part, the primary subject matter of traditional First Amendment law. It is also a primary focus of international free-expression law where freedom of expression is understood to be guaranteed by national or international human-rights instruments (notably Article 19 of the International Covenant on Civil and Political Rights).

But this dyad has been distorted in the twenty-first century, in which troubling political speech, and other kinds of troubling public speech, are more often than not mediated by internet platforms. It is easier on some platforms, but by no means all platforms, for speakers to be anonymous or pseudonymous. Anonymous or pseudonymous speech is not universally regarded by governments as a boon to public discourse, and frequently governments will want to track or even prosecute certain kinds of speakers. Tracking such speakers was difficult (although not necessarily impossible) in the pre-internet era of unsigned postcards and ubiquitous public telephones. But internet platforms have created new opportunities to discover, track, and suppress speech as a result of the platforms' collection of user data for their own purposes.

Every successful internet platform that allows users to express themselves has been a target of government demands for disclosure of information about users. In addition, internet platforms are increasingly the target of government efforts to mandate assistance (including the building of more surveillance-supportive technologies) in criminal-law or national-security investigations. In most ways this is analogous to the 1994 passage of CALEA in the United States, which obligated telephone companies (that is, providers of voice telephony) to build technologies that facilitated wiretapping. But a major difference is that the internet platforms more often than not capture far more information about users than telephone companies traditionally had done. (This generalization to some extent oversimplifies the difference, given that there is frequently convergence between the suites of services that internet platforms and telephone companies—or cable companies—now offer their users.)

Governmental monitoring may suppress dissenting (or otherwise troubling) speech, but governments (and other political actors, such as political parties) may also use internet platforms to create or potentiate certain kinds of political speech in opposition to the interests of users. Siva Vaidhyanathan documents particular uses of Facebook advertising in the 2016 election that aimed to achieve political results, including not just votes for an approved candidate but also dissuading some voters from voting at all.

As Vaidhyanathan writes: "Custom Audiences is a powerful tool that was not available to President Barack Obama and Governor Mitt Romney when they ran for president in 2012. It was developed in 2014 to help Facebook reach the takeoff point in profits and revenue." Plus this: "Because Facebook develops advertising tools for firms that sell shoes and cosmetics and only later invites political campaigns to use them, 'they never worried about the worst-case abuse of this capability, unaccountable, unreviewable political ads,' said Professor David Carroll of the Parsons School of Design."

There are legitimate differences of opinion regarding the proper regime for regulation of political advertising, as well as regarding the extent to which regulation of political advertising can be implemented consistent with existing First Amendment precedent. It should be noted, however, that advertising of the sort that Vaidhyanathan discusses raises issues not only of campaign spending (although in 2016, at least, the spending on targeted Facebook political advertising of the "Custom Audiences" variety seems to have been comparatively small) but also of transparency and accountability. Advertising that's micro-targeted and ephemeral is arguably not accountable to the degree that an open society should require. There will be temptations for government actors to use mechanisms like "Custom Audiences" to suppress opponents' speech—and there also will be temptations for governments to limit or even abolish such micro-targeted instances of political speech.

What is most relevant here is that the government may act on temptations either to employ features like "Custom Audiences" or to suppress the use of those features by other political actors in non-transparent or less formal ways (e.g., through the "jawboning" that Jack Balkin describes in his "New School Speech Regulation" paper). Platforms—especially market-dominant platforms that, as a function of their success and dominance, may be particularly targeted on speech issues—may feel pressured to remove dissident speech in response to government "jawboning" or other threats of regulation. And, given the limitations of both automated and human-based filtering, a platform that feels compelled to respond to such governmental pressure is almost certain to generate results that are inconsistent and that give rise to further dissatisfaction, complaints, and suspicions on the part of users—not just the users subject to censorship or deplatforming, but also users who witness such actions and disapprove of them.

Considered both separately and together, each of the traditional "dyadic" models of how to regulate free speech tends to focus on two vertices of the free-speech triangle while overlooking the third vertex, whose stakeholders may intervene in, distort, exploit, or be exploited by the outcomes of conflicts between the other two stakeholder groups. What this suggests is that no "dyadic" conception of the free-speech ecosystem is complex and stable enough to protect freedom of expression or, for that matter, citizens' autonomy interests in privacy and self-determination. This leaves us with the question of whether it is possible to direct our law and policy in a direction that takes into account today's "triangular" free-speech ecosystem in ways that provide stable, durable, expansive protections of freedom of speech and other valid interests of all three stakeholder groups. That question is the subject of Part 3 of this series.

Mike Godwin (@sfmnemonic) is a distinguished senior fellow at the R Street Institute.


Filed Under: 1st amendment, free speech, jack balkin, social media


Reader Comments



  1. Mason Wheeler (profile), 29 Nov 2018 @ 12:30pm

    The reasons some aspects of the platforms' algorithmic decision-making may be generally reduced to two primary arguments. First, the platforms' particular choices about algorithmically selecting and serving content, based on user data, may reasonably classed as trade secrets

    There is nothing reasonable about trade secrets. There are only two reasons for a "trade secret" to exist:

    1. The knowledge is useful enough to give the company an edge in the market. For centuries now, we have recognized that any such knowledge is worth preserving, particularly because it's all too easy for it to be lost if something unexpectedly goes wrong. And for all of the promise of modern information technology, that is still a real problem; the only difference today is that it requires a server to abruptly die instead of a craftsman. This is the problem that patents were invented to solve. For all the problems that patents can cause, they're less bad than trade secrets, because requiring useful information to be published means that it can be preserved. Therefore, there is no legitimate reason for such information to be covered by trade secrets, both because it has a more valuable protection under patent law and because it legitimizes trade secrets, allowing for them to provide cover for the second type:
    2. The secret knowledge is irresponsible, malicious, or otherwise harmful to other individuals or to society in general. We see this all the time, when companies from chemical manufacturers to oil extractors to food and beverage producers to voting machine suppliers to social networks refuse to disclose vital details of how their products work when credible accusations are raised (often in court) that they are causing real harm. But because we recognize trade secret protections for the first type, all the company has to do is claim that their secrets of this type are the first type and all too often they are shielded from scrutiny. This is a thing that should never happen.

    Given that the first type of protection has no legitimacy, as the relevant societal goals would be better served by the patent system, and that the second type of protection is actively harmful to society, legal protection for trade secrets should be abolished altogether.


  2. Christenson, 29 Nov 2018 @ 1:10pm

    Don't like this model

    Dear Mr Godwin:
    I very much appreciate that the existing binary model of free speech is broken, but I am not at all sure that breaking it into just three simplified parts is the most helpful way to model.

    To appreciate the complexity, begin with the crucial importance of context to the meaning of much speech. Discussing a hypothetical threat is entirely different from making a threat, which is again different from carrying it out, and we haven't even covered reporting on a threat. I also think that scale is a crucially important distinction -- a small platform like Techdirt can do a reasonable job of moderation, but it is close to provably impossible on a really large one like Facebook.

    IMO, subject to verification, we seem to have arrived at the following types of entities:
    a) end-users, who fall on a spectrum from intelligent to nutjobs, many of whom seem to be made, not born.
    b) Infrastructure providers, such as ISPs, DNS's, web host companies, etc.
    c) Advertisers, who are trying to find their audience amongst a sea of "lies, damned lies, and audience metrics".
    d) Small websites, like Techdirt. Larger commercial websites, such as Autozone.com also qualify.
    e) Large Platforms, like Facebook, Instagram, Google, Amazon, Uber, youtube.
    f) Producers of traditionally copyrighted content.
    g) The gubmn't, which is by no means a monolith.

    There's some permeability between these. Large platforms become government-like. Micro-targeted advertising destroys transparency. Robots are everywhere and positive identification has become rather difficult as malware takes over all of our computers.


  3. Anonymous Coward, 29 Nov 2018 @ 1:29pm

    Re: You have a point, but...

    Here's the rub with your first point -- while patents do have the advantages you mention over trade secrets for trade secrets of the first type, they only can convey these advantages in the physical (can you drop the patented object, or the output of the patented process, on your foot?) sphere.

    Once you step outside the space of things that can be dropped on feet into the realm of bits, patents cease to be applicable, due to the inherently mathematical nature of software (most clearly shown by the Curry-Howard isomorphism) combined with the fundamental computability limitations of Turing machines (equality over TMs isn't computable). More practically speaking -- a patent on an algorithm forecloses that algorithm independent of application domain, while the noncomputability of TM equality means that disasters like the double patenting of LZW are unavoidable in the general case.

    Given that copyright is not the correct tool for protecting an algorithm, business rule-set, or design basis (vs. a specific implementation) either, and trademark is inapposite to this situation -- trade secret (or at the very least, business confidentiality precautions, even if not backed by trade secret law) seems to be the least bad solution to the issue of not leaving the algorithmic crown jewels out for anyone to grab and (ab)use.


  4. ECA (profile), 29 Nov 2018 @ 1:41pm

    Error..

    I would separate ISP's..
    they may have a section thats PART of the open internet, but being the final mile, they are BETWEEN the open internet and the customers..

    And Amazon, google, yahoo, Excite and MSN, probably Fight with the ISP's as much as the customers..


  5. Gary (profile), 29 Nov 2018 @ 1:54pm

    Or...

    Or to put it a different way, It Ain't Easy. It may not even be possible to simplify this in a meaningful way, but the trinary approach is certainly more nuanced.
    It will never be possible to make everyone happy, or to get everyone to agree on what is "best."


  6. Mason Wheeler (profile), 29 Nov 2018 @ 2:10pm

    Re: Re: You have a point, but...

    trade secret (or at the very least, business confidentiality precautions, even if not backed by trade secret law) seems to be the least bad solution to the issue of not leaving the algorithmic crown jewels out for anyone to grab and (ab)use.

    This statement only makes sense if you accept the premise that this "issue" is a problem in need of a solution. I don't accept that, for three reasons:

    1. Keeping algorithms secret is causing real, non-hypothetical problems today in the real world. (cf. Facebook.)
    2. The hypothetical problems that might be caused by these secret algorithms becoming public are outweighed by the security interest of Kerckhoffs's principle, i.e. that any legitimate security analysis must assume that all secret algorithms are already known to the adversary, and only the key remains confidential. Therefore, in matters of security, secrecy harms and hinders the good guys (who are unable to analyze secret works) far more than the bad guys.
    3. The hypothetical problems that might be caused by these secret algorithms becoming public, making the system trivial to copy, are highly overstated. As an experienced professional software developer, even without any access to secret algorithms, it would take me a few months tops to create and launch a website that does all the same basic functionality as Facebook or YouTube. That wouldn't make me a serious competitor, though, as the value of the site lies in its user base far more than its codebase. (This is why Reddit is likely the only Reddit-like site you're familiar with, despite them publishing their Reddit server code as open source. They understand this principle.)

    What compelling reason is there for keeping algorithms secret that outweighs these reasons not to?


  7. Anonymous Coward, 29 Nov 2018 @ 2:24pm

    Re: Re: Re: You have a point, but...

    Keeping algorithms secret is causing real, non-hypothetical problems today in the real world. (cf. Facebook.)

    Unfortunately, with things like machine learning, even opening up the algorithms may not give us enough information to solve these problems.


  8. Christenson, 29 Nov 2018 @ 2:29pm

    Re: Re: Re: Re: You have a point,

    I agree that algorithms, such as machine learning, that inherently keep secrets are themselves a problem. But we have no chance of teasing out their secrets if the machine learning algorithm itself is secret?


  9. Mason Wheeler (profile), 29 Nov 2018 @ 2:45pm

    Re: Re: Re: Re: You have a point, but...

    That's a good point. It's not always easy for AI researchers to determine why an algorithm reached the decision it did, because algorithms don't "think" the way we do.

    That would be a good research project for this field. When another human being makes a decision we find strange, we can ask them to explain themselves, and they lay out their line of reasoning for us. (Assuming they're feeling cooperative, of course.) Having an AI that's capable of doing the same thing would be a major step forward.


  10. Christenson, 29 Nov 2018 @ 4:04pm

    Re: Or...

    Thanks Gary! I agree, it's complicated, and it isn't easy, so there will be lots of disagreement as to "best"!

    Hopefully, you will help upgrade my proposal for "better" above.


  11. takitus (profile), 30 Nov 2018 @ 11:29am

    Ecosystem metaphors and unpleasant speech

    A second complaint from users may derive from data collection … it may affect the kind of content she encounters, which … may serve … to "radicalize" her, anger her, or otherwise disturb her.

    This is the complaint about data collection? Not the use of collected data (possibly from private communications) to profile speakers, and the dissemination of that data to domestic and foreign governments? The chilling effect created by the nowhere-to-hide paradigm of mass data collection is a major threat to speech and should be far more disturbing than chance exposure to unpleasant content.

    But this exaggerated emphasis on “bad speech” leads me to question the drift of “ecosystem” metaphors. When concerns about the emotional impact of speech are raised to the same level of importance as government censorship, the “ecosystem” language makes it far too easy to argue for the suppression of unpleasant speech—after all, if speech is an ecosystem, shouldn’t “harmful” and “viral” elements be excluded from our habitat?

    While I agree that the binary government ⇔ citizen model is too simple, we should be wary of biological metaphors that (among other things) suggest it’s reasonable to suppress upsetting speech. Our traditional, simplistic model nevertheless includes a commitment to the belief that, while we should all enjoy free speech, free speech is not always enjoyable, and that intellectual maturity is essential to living in a free society. Any “ecosystem” model that lacks such a commitment is, IMHO, doomed to be abused by the powerful and hypersensitive.


  12. Anonymous Coward, 30 Nov 2018 @ 12:39pm

    Re: Re: Re: Re: Re: You have a point,

    You need to know the algorithm, and we could mandate people be told which personal data of theirs is fed into the algorithm to reach a decision. E.g., we input "30-35 y/o female $1234/week income" and your loan was rejected. But if they input the same type of data from everyone on your friend list, they couldn't give you the details; if they used the data from millions of people (like credit agencies) they couldn't even tell you the names.


  13. Christenson, 30 Nov 2018 @ 6:42pm

    Re: Ecosystem metaphors and unpleasant speech

    The real issue of data collection has several parts:
    a) its pervasiveness. So there's no space to try things on for size without serious consequences.
    b) it "never forgets"...and it does not understand context. (There's no difference between studying something and being something, for example)
    c) it makes decisions invisibly and unaccountably with real world consequences. No plane ticket for you, proud boy/libtard/commie/terrorist/activist/whiner!


  14. Anonymous Coward, 1 Dec 2018 @ 1:46am

    Re: Ecosystem metaphors and unpleasant speech

    I suppose it depends on how deep you take the metaphor. Scavengers, parasites, viruses, and bacteria all have their place in ecosystems. Parasites help bring the ecosystem into a more stable equilibrium - reducing numbers and making them more susceptible to predators. Scavengers while often riddled with diseases help prevent the spread of them by consuming decaying biomass and putting it back into the ecosystem. Bacteria fill many roles - mostly around breaking things down. Viruses in addition to their destructive role also contribute genetic information - mostly to bacteria.


  15. nasch (profile), 1 Dec 2018 @ 2:04pm

    Re:

    Are you proposing that business methods and algorithms be patentable? Please no.


  16. nasch (profile), 1 Dec 2018 @ 2:07pm

    Proofreading?

    Good piece, but so many grammatical errors. This could really use a proofreader.


  17. Mike Godwin (profile), 1 Dec 2018 @ 3:51pm

    Re: Proofreading?

    If you can share the errors, I can get them fixed.


  18. nasch (profile), 1 Dec 2018 @ 10:13pm

    Re: Re: Proofreading?

    Sure!

    which, when driven by algorithms aimed increasing her engagement on the platform (missing "at")

    her participation in more or more commercial transactions (should be "more and more")

    as distinct from enforcement other communications-related crimes (missing "of")

    nor have the algorithms (decision-making processes and criteria implemented by computers) that the platforms use to decide what content may need monitoring, or what content users might prefer, being generally published. ("being" should be "been")

    The reasons some aspects of the platforms' algorithmic decision-making may be generally reduced to two primary arguments. (missing something, but not sure what)

    may reasonably classed as trade secrets (should be "may reasonably be classed" or similar)

    protections for platforms has made it easier (really nit picking here but should be "have", as "protections" is plural)

    sometimes it is speech of other kinds (e.g., allegedly defamatory, threatening, fraudulent, or obscene) of speech. ("of speech" is redundant)

    raises issues not only of campaign spending (although in 2016, at least, the spending on targeted Facebook political advertising of the "Custom Audiences" variety seems to have been comparatively small) as of transparency and accountability. ("as of" should be "but of" or something similar)

    Thanks, looking forward to reading part 3.


