The service provider would be the ISP or the company hosting the hardware.
Having spent 45+ years in the business of creating and providing computer services, I can assure you that the choices made by implementors concerning what is done in hardware and what is done in either firmware or software are much too arbitrary and anecdotal to be considered relevant in a discussion of speech rights. While most developers may be highly constrained in their practical range of choices, not all are so constrained. Having spent 11+ years working for a company that provided both hardware and software, Digital Equipment Corp, I can assure you that some developers do, in fact, have great freedom in deciding whether a function is performed by hard-, firm-, or software. Their choices do not elevate them to the role of deciders of what is and is not protected speech.
The distinction between “ISP” and others is also arbitrary and anecdotal if only because the largest providers of public forums often serve as their own ISPs, or own an interest in their ISPs. It is also the case that those who call themselves “ISPs” often provide services such as newsgroups, email, chat services, etc. that are functionally indistinguishable from those provided by “platforms.” There isn’t really much that distinguishes an ISP from a platform. In fact, if we had more “Protocols, not Platforms,” the only real difference between ISPs and platforms is that one would be more likely to provide support for open protocols while the other would probably rely more on closed systems. But even that distinction would be the result of arbitrary and anecdotal decision histories.
From the point of view of system design or architecture, the network communications functions that typify ISPs are largely irrelevant. There is little useful distinction between communication that occurs within a system and communication that occurs between systems. The choice to use remote procedure calls, of any form, rather than local procedure calls is largely arbitrary. Such decisions involve optimizing investment, addressing performance or capacity concerns, and the like; they don't reflect inherent attributes of a system at the architectural level. Remember, before we had networking, we often relied on single-machine timesharing systems that provided services largely indistinguishable, except in scale, from those we enjoy today. Thus, the choice to use networking should be considered too arbitrary and anecdotal to have bearing on discussions of fundamental rights.
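To make the point concrete, here is a minimal sketch (in Python, using the standard library's xmlrpc module purely for illustration) showing that a caller can be indifferent to whether a function executes locally or behind a network boundary. The port number and function name are arbitrary choices, not a claim about any real service.

```python
# The same function, invoked once as a local procedure call and once
# over RPC; the caller's view of it is identical either way.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def shout(text: str) -> str:
    """The function neither knows nor cares how it is invoked."""
    return text.upper()

# Local invocation: an ordinary procedure call.
assert shout("hello") == "HELLO"

# Remote invocation: the same function exposed across a network boundary.
server = SimpleXMLRPCServer(("localhost", 8399), logRequests=False)
server.register_function(shout)
threading.Thread(target=server.serve_forever, daemon=True).start()

proxy = ServerProxy("http://localhost:8399")
assert proxy.shout("hello") == "HELLO"  # identical semantics, different plumbing
```

Whether that boundary exists at all is exactly the kind of implementation decision the paragraph above calls arbitrary.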
Some claim that there is an essential difference between a service that merely passes messages in real time and one that stores messages. This is also an arbitrary distinction, since many "networking" systems, and even methods for in-machine messaging or procedure calling, repeatedly store messages which are in transit. Given the technologies we use today, storage facilitates the communication function, whether that function is to communicate immediately or with significant delays. It should also be recognized that a platform provider's decision as to how long data will be stored is largely arbitrary. Some services will only allow a message to be viewed once and will then delete it; others will allow repeated viewings of all messages ever sent; still others will provide repeated viewing of messages until they reach some age and are deleted. These are arbitrary decisions that shouldn't impact our view of the fundamental nature of the service being provided. The length of time that a message is stored is too arbitrary and anecdotal to have bearing on discussions of speech rights.
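A toy sketch, in Python, of how thin that distinction is: a "real-time" relay and an "archive" can be the same code, differing only in one retention parameter. The class and parameter names here are invented for illustration.

```python
# Even a "pass-through" relay stores messages while they are in transit;
# "messaging" vs. "storage" reduces to a single retention policy.
import time
from collections import deque

class Relay:
    def __init__(self, retention_seconds: float):
        self.retention_seconds = retention_seconds
        self._buffer = deque()  # holds (timestamp, message) tuples

    def send(self, message: str) -> None:
        self._buffer.append((time.time(), message))  # "in transit" means "stored"

    def read(self) -> list:
        cutoff = time.time() - self.retention_seconds
        return [m for (stamped, m) in self._buffer if stamped >= cutoff]

# The same service, parameterized two ways:
instant = Relay(retention_seconds=2.0)               # looks like "real-time" messaging
archive = Relay(retention_seconds=10 * 365 * 86400)  # looks like a "storage" service
```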
Those concerned with rights should focus on the fundamental, technology-neutral functions that are provided. The question should not be: “How is the system implemented?” but rather: “Is this system an essential or dominant forum for public discourse?” For most systems, and some would argue all that exist today, the answer to that latter question will be “No.” Nonetheless, we need to understand what we would do if we ever answered “Yes” to that question. If a particular service or platform does ever become either essential or dominant in supporting or providing public discourse, should we regulate it differently from other non-essential, non-dominant systems? If not, why not? And, if so, then how?
For what reason should the government be forced to post all its speech on every possible platform and through every possible protocol if it chooses to use Twitter for the same reason?
Imagine that there is a "White Christian Citizens Social Club (WCCSC)" whose membership includes the vast majority, but not all, of those who live in some town. I do not think that the government will have fulfilled its duty if it only provides notices of new mortgage or hiring programs to the WCCSC or if it provides them with notices earlier than it does to others. Even if the government rightly claimed that all notices provided to the WCCSC were also open to public discovery and inspection as filed in drawer 5 of cabinet 27 in the 2nd sub-basement of City Hall, I don't think that would offset the preferred access provided to the WCCSC.

I would be especially concerned if other social clubs used a common, open protocol for distributing news and announcements among themselves and with others. In this case, I think it is clear that it is reasonable to direct the government to make use of common, open protocols, if they exist, and thus avoid giving any avoidable preference or special access to the members of the WCCSC. In fact, I think it would make sense for the government to encourage the WCCSC to connect using that common protocol in order to reduce the complexity of government publication.

On the other hand, if there were a Block Association that represented a string of 5 houses and it had some complicated or unusual method for receiving messages (perhaps messages must be physically posted on a bulletin board in the middle of the block), I think it would be reasonable for the government to honestly claim that it simply didn't have the resources to address such unusual needs.
Now, you might believe that racially discriminatory clubs no longer exist. But, that doesn't matter. The point is that when the government speaks, it should take reasonable care to speak to all citizens and to do so without preference for any of them. Thus, if it speaks to closed groups, like those on Twitter, it should also seek out and speak to more open groups, like those federated using ActivityStreams protocols, Mastodon instances, or whatever.
I'm beginning to wonder whether we can ascribe good faith here.
I affirm, although you need not believe it, that I am arguing in good faith.
I spent a long career, over 45 years, involved in the creation of tools to enable online communication. I got into the business, not because of a love of computers, but as an act of politics. I saw a need to help develop society's infrastructure for public discourse and for facilitating the preservation of, and access to, the full record of human thought. So, I built tools to address the need. During my career, either for pay or on my own, I built some of the first email, groupware, discussion, and PubSub systems. I was one of the earliest to build hypertext systems, etc. But, throughout my career, the thing that has worried me the most was the potential for private actors to envelop what I felt should be public spaces into private spaces for private profit. What you see here is simply my expressing that concern a bit more vocally and publicly than I normally do. My normal approach would be to just go build something that addresses the problem, but, these days, I'm busy and frankly tired of writing code. If you wish to know more about my background, see: https://www.linkedin.com/in/bobwyman/
Yes, the Constitution says "...Congress shall make no law ... abridging the freedom of speech..." The relevant question is: "What is speech?"
I believe that "speech" is that which involves the expression of thoughts, feelings, etc. (i.e. speech has rhetorical intent.) The difference between us seems to be over whether or not we accept that the provision of a communications service, such as Twitter, involves expression or "speech" on the part of the service provider even though that service unquestionably facilitates the properly protected speech of others. I think providing a channel is not speech. Apparently, you think that doing so is speech. Putting aside that distinction, I suspect that you and I are both equal in our ardent support for the First Amendment.
Also, I don't accept the relevance of the various references to newspapers. For newspapers and for all forms of the "press," the mere act of selecting what they will and will not publish is rightly seen as one of the means by which they speak. The press inevitably has perspective and expresses that perspective not only explicitly, in what they choose to publish, but also implicitly, in what they choose not to publish. (In this case, the failure to speak is speech...)
And, by the way, I am not "alt-right." Not even close. I am a Democrat, always have been and expect that I always will be. I'm also frequently horrified by what folk on the Right say and I suspect that many of them have much less respect for the Constitution than they claim. However, I'm convinced that no matter how much I may disagree with what they say and no matter how much I wish they wouldn't, it is essential to defend their right to say it. As I've mentioned before, I think the proper response to bad speech is more speech. That is one of several reasons that I've argued in other comments that we should find a way to popularize the use of technical means, such as annotation systems, that would allow us to more effectively respond to at least online speech, whether or not the publisher of that speech provides facilities to do so.
Do you believe the government should have the legal right to compel any privately owned interactive web service into hosting legally protected speech that the owners/operators of said service don’t want to host?
I've answered this question several times. Once again, the answer is: Normally No. However, if a platform becomes an essential component of the nation's infrastructure for public discourse, the rules should change. As I've said, I agree now, and always have, that we should have Protocols, Not Platforms. Thus, my concern is with closed, unfederated services that do not use open protocols such as those that support Mastodon. When open protocols are used, it is the protocol, not any single platform, that is the essential infrastructural element. When protocols are the foundation for discourse, then anyone is free to select the service they use. If one service moderates excessively, users can switch to a different service that moderates differently or they can build their own service. This is as it should be. If discourse depends on protocols, not platforms, I don't think that the government would ever rightly interfere with activity on any particular platform.
I would even support a law requiring that if the government, or its agents, participates in a closed system, by posting announcements or whatever, it must also, at least simultaneously, post the same announcements using existing open protocols. This implies, of course, that if the government wants to use Twitter, it should be compelled to publish in such a way that Mastodon instances are given equal and non-discriminatory access to those announcements. The government, through its actions, should not be permitted to give a distinct advantage to a closed, private service if an open alternative exists.
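As a hedged sketch of what such an obligation might look like in code: neither function below is a real API; both are hypothetical stand-ins for whatever closed channel and open protocol an agency actually uses.

```python
# Hypothetical illustration of the "mirror to open protocols" rule.
def post_to_closed_service(text: str) -> None:
    """Stand-in for posting to a closed platform (e.g., Twitter). Hypothetical."""
    ...

def publish_via_open_protocol(text: str) -> None:
    """Stand-in for publishing over an open protocol (e.g., ActivityPub). Hypothetical."""
    ...

def government_announce(text: str) -> None:
    # The proposed rule: the open-protocol copy goes out no later than the
    # closed-platform copy, so no private service gets preferred access.
    publish_via_open_protocol(text)
    post_to_closed_service(text)
```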
If people didn't like how a particular social platform is moderating content, why are a majority of people still using it?
In a democratic system, I believe we should seek to protect the rights of minorities and not be guided only by the opinions or preferences of the majority. The exclusion of some, simply because most approve of that exclusion, is not a compelling case for exclusion.
the red herring thrown in about "optional moderation".
It's not a "red herring." I am trying to distinguish between what should be the subject of regulation and what should not.
If we had Protocols, Not Platforms, as proposed by Mike Masnick, any moderation would be optional and implemented at the edges of the network, since there would be no central component that could impose global moderation. Even if moderation was built into some implementations of protocol clients, users who disagreed with that moderation would be free either to switch to a new client or even to build their own. (Think about the way that spam filtering is done for email -- it's all at the edges, since there is no "central authority" able to inspect and filter all email traffic. Individual email services can filter what comes into or out from them, but no service can filter what flows between other services in the same email system.) I believe that when a platform's failure to support open protocols prevents us from enjoying the benefits inherent in "Protocols, Not Platforms," we should require some of those attributes of the platform itself.
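A minimal sketch, in Python, of the edge-moderation model described above: the filter runs in the user's client, is chosen by the user, and can be swapped or removed at will. The filter names and the crude keyword test are invented for illustration.

```python
# Moderation as a user-supplied plug-in at the edge, not a property of the network.
from typing import Callable, Iterable

Filter = Callable[[str], bool]  # returns True if the message should be shown

def no_filter(message: str) -> bool:
    return True

def family_friendly(message: str) -> bool:
    return "nsfw" not in message.lower()  # crude, purely illustrative test

class Client:
    """A protocol client; the network delivers everything, the edge decides."""
    def __init__(self, moderation: Filter = no_filter):
        self.moderation = moderation  # the user's choice, swappable at any time

    def render(self, feed: Iterable[str]) -> list:
        return [m for m in feed if self.moderation(m)]

feed = ["hello world", "NSFW: something lurid"]
assert Client(family_friendly).render(feed) == ["hello world"]
assert Client(no_filter).render(feed) == feed  # same network, different choice
```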
I am fully aware of the value of content moderation and often regret that better moderation isn't available. (I would really like to be free of all Facebook posts concerning sports or that are "quizzes," even when forwarded by folk I follow.) My objection is to the channel itself imposing moderation without my consent or input. I would not even object to Facebook, Twitter, etc. providing a moderation service to accompany their channel -- as long as I could opt out of it. Ideally, the channel services would provide others with access to the interfaces needed to build competitive moderation services. (Just as telephony operators are often required to provide others with access, at reasonable cost, to interfaces equivalent to those used by the telephony service when delivering value-added services. I'd like to see a generalization of this rule applied more broadly.)
If content moderation was optional and competitive, I think we'd see a great deal more useful innovation in moderation systems and much happier users.
The problem with non-optional moderation wired into the channel itself is that there is no way that any single operator can select "community standards" that accurately reflect the standards of all the different communities that use the channel. A simple example can be found in the practice of moderating pictures that show exposed female chests. While there are many communities that applaud such moderation, and there are even some that would require it by law, here in New York, it is completely legal for women to expose their chests in public. So, how can a service justify banning the portrayal of something that is legal in this state? Is Facebook a higher authority than our State's Legislature or our Courts? I don't think so. On the other hand, individual New Yorkers, who might be offended or disturbed by exposure to this legal behavior, should be free to choose moderation services that shield them from such unwelcome sights. It should be the individual's choice, not the channel's or the platform's.
Community standards change over time. If we had had the Internet 100 years ago, we might have seen services that chose to ensure that blacks were excluded from "mixing" with whites or that Jews or Muslims were excluded. In those days, a service provider might have reasonably said that failing to prevent such mixing would hurt the commercial prospects for their service. Many would have accepted that as a reasonable belief. (Even today, we remain burdened with people who would agree with such a claim and who regret that such selectivity isn't the norm...) Nonetheless, it should be clear that this once "reasonable" moderation and selection would have caused a great and enduring harm. The problem is that there may remain some kind of statement, or type of speaker, or whatever, that is broadly considered a proper subject of moderation today but that we will discover, at some future time, should not have been moderated. One way to avoid non-optional moderation that enforces implicit and often unrecognized bias would be to ensure that individuals, not channels, are the ones who select the moderation regime.
So, no, I don't object to moderation per se. Rather, I object to the providers of channels which have become important components of our nation's infrastructure for public discourse imposing their idea of appropriate discourse on the rest of us. I believe moderation should be considered a value-added service, and I'd like it to be a competitively provided service. In any case, if moderation is to be done, it should be done optionally and with the user's approval; it should not be required.
Repeatedly complaining that others are repeatedly claiming something is not very useful. It is apparent that you disagree with the idea that corporations should have different (often lesser) rights than people do. That is fine. You have a right to your opinion. But, others also have a right to maintain their disagreement with you. The issue here isn't the repetition; it is that we differ in how the law or Constitution should be interpreted.
Yes, the Supreme Court seems to agree with you that "Corporations are People," however, that remains a controversial opinion on which we should be free to differ -- even if, for the moment, that disagreement has little impact. Things may change. As we've often seen in the past, the Supreme Court may change its position once the full consequences of their past decisions are made clear. It is not guaranteed that the Supreme Court will forever maintain its current interpretation of the Constitution. As does everyone, they occasionally make mistakes and later correct them.
Personally, I find it very suspect that a corporation, other than one that qualifies as "Press," should be granted broad speech rights, particularly when those rights include an ability to restrict the ability of others to speak as they wish. My concern here is for non-press organizations that are in the business of making money by hosting or facilitating the speech of others rather than the business of broadcasting or publishing their own speech. Thus, while I would strongly oppose any suggestion that the New York Times should be forced to publish anything that anyone might want published there, I am much less concerned about restricting the ability of a purely facilitative "carrier" service (like Facebook, Twitter, Gmail, etc.) to restrict the legal speech of its users.
Under long-established law, we frequently impose on commercial entities an "obligation to serve" in a non-discriminatory manner. Operators of distribution systems (water, gas, electricity, telephony, etc.) are often required to provide service to all who request it -- whether or not they agree with the purposes for which their services will be used. (A telephone company must provide service to its union members, to those who oppose its policies in rate cases, and even to its competitors.) While such services are allowed to establish reasonable interconnection standards, intended only to preserve their capacity to serve, they can't otherwise restrict the provision of service for legal purposes.
I don't see how the distribution, for profit, of packets of data via social media sites, email systems, or any other interchange systems differs enough from the distribution of any other good to require that the distributor be granted a right to control the content of what it distributes.
If by "education" you mean training in critical thinking, I believe that while it would certainly be useful, it is insufficient. Something more is needed. We can't expect the average user to critically analyze every piece of content to which they are exposed. They need triggers or signals to indicate which content might warrant extra analysis. So, where do the signals come from? I suggest that the proper response to "bad" speech is more speech.
I'm intrigued by the potential that annotation systems might have as a technical means to improve the process of public discourse. If every user, as well as "fact checking" groups, were able to make easily discoverable comments on, or flag, at least public content, then users might be much better able to discover which content should be more carefully analyzed.
Imagine if web browsers supported an ability, using the W3C Web Annotation protocol, to create comments, ratings, etc. that users could choose to have displayed next to web pages. The browser might then offer an indication that "100 people approve, 200 disapprove" of the content on a web page or on some part of it. That notification that the content was controversial would be a useful signal for a user who had been educated in the methods of critical thinking and content analysis. That user might then inspect the specific annotations, perhaps filtered in useful ways (e.g. "only show annotations from people or organizations I trust"), or employ other methods to assess the apparently controversial content.
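For the curious, here is a sketch, as a Python dict, of roughly what one such comment looks like under the W3C Web Annotation Data Model (https://www.w3.org/TR/annotation-model/). The id and target URLs are placeholders; the field names follow the published model.

```python
# One annotation: a comment attached to a page, expressed per the
# W3C Web Annotation Data Model. The URLs below are placeholders.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "id": "http://example.org/annotations/1",
    "type": "Annotation",
    "body": {
        "type": "TextualBody",
        "value": "This statistic is disputed; see the linked correction.",
        "purpose": "commenting",
    },
    "target": "https://example.com/some-article",  # the page being annotated
}

# A browser or extension could fetch all annotations whose "target" matches
# the current page and summarize them ("100 approve, 200 disapprove").
```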
Annotation systems, at least for web resources, operate independently of those resources and thus impose no burdens on publishers. An annotation system like Hypothes.is or Dokieli can provide a great deal of commentary on a site with absolutely no impact on or cost for that site. In fact, many whose content is annotated don't even know it is happening.
It seems to me that the "problem" with online speech isn't that people are able to say stupid or bad things, but rather that today they can often do so in ways that afford no opportunity for response. But, at least for content identified by a URL, annotation systems can remove the ability to speak without response. I think this would be a good thing.
Whether or not annotations would be a useful technical measure, I must admit that no one has yet figured out how to make such systems popular enough to provide a useful counterweight. Nonetheless, I think it would be useful to create a dialog about how annotation might be used and popularized. From that discussion, we might see at least a partial technical solution emerge.
Why was the comment to which this is a reply "flagged by the community?" I can see nothing objectionable in it.
In any case, I think it is interesting to note that if we had "Protocols, not Platforms," then the discussion would be very different. There wouldn't be pressure on individual companies to censor content since no single company would control the entire discussion. If Twitter were simply one of several clients for a "Tweeting Protocol," I suspect that we'd see various client providers competing based on the qualities of the services they provided. Some would distinguish themselves by offering more effective or easier to use interfaces, others might offer unique moderation services. In fact, we might even see the development of a number of competing content moderation services which could be optionally selected by users. These moderation services would vary according to the focus of their moderation (sex, politics, use of language, etc.) or the techniques used (automated, manual, etc.). It would be more like the distribution of USENET Newsgroups where some ISPs block various newsgroups but any user who wants to access a blocked feed can find an alternative provider. Competing clients and services would also be able to support a great deal of variety in the provision of credibility signals, links to context, etc. We wouldn't have to rely on any single provider to pursue innovation in the way the discourse was supported.
If there was an open "Tweeting Protocol" that provided for content to be passed through intermediary nodes, as with store-and-forward email, I suspect that there would be many who would argue that those intermediary nodes should not be permitted to impose moderation on messages that flowed through them and might be temporarily stored on their equipment.
I would greatly prefer a world in which public discourse was distributed via open protocols supported by competitive services and interfaces. Unfortunately, supporting open protocols is often not in the commercial interest of incumbent large providers. (We saw, for instance, Facebook's refusal to support federated XMPP. Later, Google also dropped XMPP support.) We have protocols that could be used to provide what Facebook or Twitter provide, but it is exceptionally unlikely that anyone might successfully build an open system that provides an effective alternative to the existing walled services. This path-dependency should concern us all.
The mere popularity of the club as a place to socialize would not be compelling and should have no bearing on the government's regulation. However, if the government came to use that club to make announcements, if the President of the United States and members of Congress used it to make statements not as easily accessible elsewhere or to receive comments from there more readily than from other locations, etc., I might judge that the club had become more than a mere club and suggest that it had undergone a qualitative transformation into something more -- an essential realm of public discourse that we might consider to be one which should not be selective in allowing admission.
It isn't the popularity of the club that would be determinative. It is the role that the club plays in the public discourse of the nation, or of the community, that should be the focus of attention.
Yes, even truly vile speech is protected. As has been demonstrated in cases like that concerning the Skokie Nazi march, even anti-Semitic gutter trash have the right to spew their filth. I do not suggest that we should limit their rights -- as much as I might wish they didn't use them. The problem, of course, is that any attempt to constrain the anti-Semites is likely to eventually have the unintended consequence of limiting the speech of "good" people. Fortunately, most speech is protected.
Certainly the government should not be able to compel "any" privately owned service to host speech, however, the government does compel common carriers, such as telephony providers, to carry speech without moderation. Similarly, I think the government should be free to compel at least some web services to refrain from moderation of legal speech -- but only some, not "any."
If one establishes a purportedly neutral, agenda-free channel for communication, and if that channel is successful in becoming an important part of the infrastructure for public discourse, I believe that the government should be free to declare it a common carrier and constrain its ability to moderate legal speech. On the other hand, something like a web service for "conservative" or "liberal" discussion, or a service hosting topical discussions, such as "dog breeding," should not be compelled to carry content not fitting that service's charter.
My personal feeling is that services like Twitter, Facebook, and a very few others, have grown to the point where they are, in fact, essential elements of our nation's, if not the world's, infrastructure for public discourse. As such, they should be regulated in order to protect that public discourse from which they profit. These channels are distributors of general discourse, and, like many other distribution systems, they have many of the characteristics of "natural monopolies." Thus, they can be reasonably regulated in the same way that we regulate telephone companies, electric and gas distributors, or operators of our water and sewer supplies.
I think the real danger is not in the regulation of a few, exceptional services, but rather in attempts to write or enforce rules that apply to all providers of discourse-based services. The danger is in writing general laws to control exceptional circumstances. It is essential that we recognize that not all communications channels are significant enough to warrant regulation, and we should seek to regulate as little as possible even when regulation is justified. Also, we should recognize that even within the services owned by a single private entity, there should be distinctions drawn. For instance, rules that might apply reasonably to Twitter or to Facebook's public channels should not apply to a Facebook group that I host for discussions between fellow alumni of my high school. My "Alumni Group" provides for a limited, topical discussion; it is not a general channel for public discourse. As such, it should be protected from external control even though other Facebook services are, I think, more reasonably regulated.
So, No, I can't give you a Yes or No answer. The best I can do is say that it depends on the nature and power of the service. But, the mere fact that a service is privately owned is, I think, insufficient to determine if it should or should not be subject to regulation.
You wrote: "We can always have more companies; but there only is one government."
Well, any government that exists today is only the latest in a very long sequence of previous governments...
In any case, this often-cited government v. corporation distinction doesn't ring true for me. I think the reality is that at the time when the US Constitution was written, it simply wasn't conceivable that any non-governmental entity, other than perhaps a Church, would be able to accumulate sufficient power to have significant control over public discourse. But, today, some private entities at least match, if not exceed, the ability of government or churches to control public discourse.
Madison's first draft of what became the first amendment read, in part:
"The people shall not be deprived or abridged of their right to speak, to write, or to publish their sentiments; and the freedom of the press, as one of the great bulwarks of liberty, shall be inviolable."
In Madison's first draft, the focus was on protecting the "right to speak," rather than being limited to just who (e.g. the government) might be interfering with that right. My personal feeling is that once any entity, government or not, accumulates sufficient power to deprive or abridge our ability to engage in public discourse, then we should be concerned and seek to redress that power.
The issue here is with the possession and use of power over public discourse, not just with who might be wielding that power.
It is clearly wrong for the government, or some corporation, to have the power to judge and control the content of public discourse. But, it is also quite problematic to endure a system that allows lies and misinformation to be spread as easily as they are today. So, where is the middle ground?
What can be done to mitigate the problem sufficiently so that fools are not so easily tempted to encourage censorship?
Perhaps I give her too much credit, but I assume that Klobuchar is not an idiot and thus that she recognizes the danger in her proposal. Thus, I am tempted to see this bill as an act of desperation more than as a well-considered approach to a general problem. Klobuchar is not the only one getting desperate. The failure of the technical community to lead by coming up with concrete proposals to address the problem of misinformation and credibility within the constraints of our Constitution is something that I think we will long regret.
So, how many people who read these posts, or write their own, are actually part of the process of solving these issues? If you don't work at Facebook, Twitter, or Parler, are you at least involved in industry forums dedicated to crafting solutions? Are you a member of the W3C's CredWeb (https://credweb.org/) working group? If not, why not? If you're working with some other group, what is it, and how can others of us get involved in helping to define the protocols, features, or systems that might make it possible to mitigate this problem sufficiently so that we don't have to see desperate proposals like Klobuchar's being taken seriously?
App stores, as distributors, are natural monopolies, and are thus not subject to market pressure on either price or quality of service. As natural monopolies they should be regulated in the same way that we regulate distributors of electricity, gas, water, or telephone traffic.
Their revenues should be limited to the actual cost of service provided plus a reasonable return on investment. If they provide no service, as in the case of in-app purchases, they should collect no revenue.
As with telephone network providers, app stores' control over the content or function of applications should be limited to that which is necessary to maintain the integrity and function of devices (i.e. coding standards and protection against viruses or hacks). Also, they should not be able to bar or disadvantage apps that "compete" with their own apps.
Should community standards or local laws influence content moderation? If so, which ones?
In New York City, it has been legal for quite some time for women to appear topless in public. Given this, should content moderators, who might restrict depictions of breasts for readers in some areas, refrain from restricting such images when they are being displayed within the confines of New York City? If not, then do we accept the rule that content moderation must always impose the most restrictive interpretation of what is or is not appropriate or permissible?
Hardware/Software distinctions aren't relevant to speech rights
You wrote:
Having spent 45+ years in the business of creating and providing computer services, I can assure you that the choices made by implementors concerning what is done in hardware and what is done in either firmware or software are much too arbitrary and anecdotal to be considered relevant in a discussion of speech rights. While most developers may be highly constrained in their practical range of choices, not all are so constrained. Having spent 11+ years working for a company that provided both hardware and software, Digital Equipment Corp, I can assure you that some developers do, in fact, have great freedom in deciding whether a function is performed by hard-, firm-, or software. Their choices do not elevate them to the role of deciders of what is and is not protected speech.
The distinction between “ISP” and others is also arbitrary and anecdotal if only because the largest providers of public forums often serve as their own ISPs, or own an interest in their ISPs. It is also the case that those who call themselves “ISPs” often provide services such as newsgroups, email, chat services, etc. that are functionally indistinguishable from those provided by “platforms.” There isn’t really much that distinguishes an ISP from a platform. In fact, if we had more “Protocols, not Platforms,” the only real difference between ISPs and platforms is that one would be more likely to provide support for open protocols while the other would probably rely more on closed systems. But even that distinction would be the result of arbitrary and anecdotal decision histories.
From the point of view of system design or architecture, the provision of the network communications functions which typify those considered to be ISPs are largely irrelevant. There is little useful distinction between communication that occurs within a system and communication that occurs between systems. The choice to use remote procedure calls, of any form, rather than local procedure calls, is largely arbitrary. Such decisions involve choices about optimizing investment, addressing performance or capacity concerns, etc. They don’t reflect inherent attributes of a system at the architectural level. Remember, before we had networking, we often relied on single-machine timesharing systems that provided services largely indistinguishable from those we enjoy today – except in scale. Thus, the choice to use networking should be considered too arbitrary and anecdotal to have bearing on discussions of fundamental rights.
Some claim that there is some essential difference between a service that merely passes messages in real-time and one that stores messages. This is also an arbitrary distinction since many of the “networking” systems, and even methods for in-machine messaging or procedure calling, repeatedly store messages which are in transit. Given the technologies we use today, storage facilitates the communication function, whether that function is to communicate immediately or with significant delays. It should also be recognized that a platform provider’s decision as to how long data will be stored is largely arbitrary. Some services will only allow a message to be viewed once and will then delete the message, others will allow repeated viewings of all messages ever sent. Some services will provide repeated viewing of messages until they reach some age and are deleted. These are arbitrary decisions that shouldn’t impact our view of the fundamental nature of the service being provided. The length of time that a message is stored is too arbitrary and anecdotal to have bearing on discussions of speech rights.
Those concerned with rights should be focused on the fundamental, technology neutral function which are provided. The question should not be: “How is the system implemented?” but rather: “Is this system an essential or dominant forum for public discourse?” For most systems, and some would argue all that exist today, the answer to that latter question will be “No.” Nonetheless, we need to understand what we would do if we ever answered “Yes” to that question. If a particular service or platform does ever become either essential or dominant in supporting or providing public discourse, should we regulate it differently from other non-essential, non-dominant systems? If not why not? And, if so, then how?
/div>Re:
You asked:
Imagine that there is a "White Christian Citizens Social Club (WCCSC)" whose membership includes the vast majority, but not all, of those who live in some town. I do not think that the government will have fulfilled its duty if it only provides notices of new mortgage or hiring programs to the WCCSC or if it provides them with notices earlier than it does to others. Even if the government rightly claimed that all notices provided to the WCCSC were also open to public discovery and inspection as filed in drawer 5 of cabinet 27 in the 2nd sub-basement of City Hall, I don't think that would offset the preferred access provided to the WCCSC. I would be especially concerned if other social clubs used a common, open protocol for distributing news and announcements among themselves and with others. In this case, I think it is clear that it is reasonable to direct the government to make use of common, open protocols, if they exist, and thus avoid giving any unavoidable preference or special access to the members of the WCCSC. In fact, I think it would make sense for the government to encourage the WCCSC to connect using that common protocol in order to reduce the complexity of government publication. On the other hand, if there was a Block Association that represented a string of 5 houses and they had some complicated or unusual method for receiving messages (perhaps messages must be physically posted on a bulletin board in the middle of the block), I think it would be reasonable for the government to honestly claim that they simply didn't have the resources to address such unusual needs.
Now, you might believe that racially discriminatory clubs no longer exist. But, that doesn't matter. The point is that when the government speaks, it should take reasonable care to speak to all citizens and to do so without preference for any of them. Thus, if it speaks to closed groups, like those on Twitter, it should also seek out and speak to more open groups, like those federated using ActivityStreams protocols, Mastodon instances, or whatever.
/div>Re: Re:
You wrote:
I affirm, although you need not believe it, that I am arguing in good faith.
I spent a long career, over 45 years, involved in the creation of tools to enable online communication. I got into the business, not because of a love of computers, but as an act of politics. I saw a need to help develop society's infrastructure for public discourse and for facilitating the preservation and access to the full record of human thought. So, I built tools to address the need. During my career, either for pay on my own, I built some of the first email, groupware, discussion, or PubSub systems. I was one of the earliest to build hypertext systems, etc. But, throughout my career, the thing that has worried me the most was the potential for private actors to envelope what I felt should be public spaces into private spaces for private profit. What you see here is simply my expressing that concern a bit more vocally and publicly than I normally do. My normal approach would be to just go build something that addresses the problem, but, these days, I'm busy and frankly tired of writing code. If you wish to know more about my background, see: https://www.linkedin.com/in/bobwyman/
/div>Re: Re: Re: Re: Re: Re: Wow
Yes, the Constitution says "...congress shall make no law ... abridging the freedom of speech..." The relevant question is: "What is speech?"
I believe that "speech" is that which involves the expression of thoughts, feelings, etc. (i.e. speech has rhetorical intent.) The difference between us seems to be over whether or not we accept that the provision of a communications service, such as Twitter, involves expression or "speech" on the part of the service provider even though that service unquestionably facilitates the properly protected speech of others. I think providing a channel is not speech. Apparently, you think that doing so is speech. Putting aside that distinction, I suspect that you and I are both equal in our ardent support for the First Amendment.
Also, I don't accept the relevance of the various references to newspapers. For newspapers and for all forms of the "press," the mere act of selecting what they will and will not publish is rightly seen as one of the means by which they speak. The press inevitably has perspective and expresses that perspective not only explicitly, in what they choose to publish, but also implicitly, in what they choose not to publish. (In this case, the failure to speak is speech...)
And, by the way, I am not "alt-right." Not even close. I am a Democrat, always have been and expect that I always will be. I'm also frequently horrified by what folk on the Right say and I suspect that many of them have much less respect for the Constitution than they claim. However, I'm convinced that no matter how much I may disagree with what they say and no matter how much I wish they wouldn't, it is essential to defend their right to say it. As I've mentioned before, I think the proper response to bad speech is more speech. That is one of several reasons that I've argued in other comments that we should find a way to popularize the use of technical means, such as annotation systems, that would allow us to more effectively respond to at least online speech, whether or not the publisher of that speech provides facilities to do so.
/div>Mastodon does it right. (Protocols, Not Platforms)
You asked:
I've answered this question several times. Once again, the answer is: Normally No. However, if a platform becomes an essential component of the nation's infrastructure for public discourse, the rules should change. As I've said, I agree now, and always have, that we should have Protocols, Not Platforms. Thus, my concern is with closed, unfederated services that do not use open protocols such as those that support Mastodon. When open protocols are used, it is the protocol, not any single platform, that is the essential infrastructural element. When protocols are the foundation for discourse, then anyone is free to select the service they use. If one service moderates excessively, users can switch to a different service that moderates differently or they can build their own service. This is as it should be. If discourse depends on protocols, not platforms, I don't think that the government would ever rightly interfere with activity on any particular platform.
I would even support a law that required that if the government, or its agents, were to participate in a closed system, by posting announcements or whatever, that it should also be required to at least simultaneously post the same announcements using existing open protocols. This implies, of course, that if the government wants to use Twitter, they should be compelled to publish in such a way that Mastodon instances are given equal and non-discriminatory access to those announcements. The government, through its actions, should not be permitted to give a distinct advantage to a closed, private service if an open alternative exists.
/div>Re: Re: Moderation is okay -- if it is chosen by users (i.e. opt
You asked:
In a democratic system, I believe we should be seek to protect the rights of minorities and not be guided only by the opinions or preferences of the majority. The exclusion of some, simply because most approve of that exclusion, is not a compelling case for exclusion.
It's not a "red herring." I am trying to distinguish between what should be the subject of regulation and what should not.
If we had Protocols, Not Platforms, as proposed by Mike Masnick, any moderation would be optional and implemented at the edges of the network since there would be no central component that could impose global moderation. Even if moderation was built into some implementations of protocol clients, users who disagreed with that moderation would be free to either switch to a new client or even to build their own. (Think about the way that spam filtering is done for email -- its all at the edges since there is no "central authority" able to inspect and filter all email traffic. Individual email services can filter what comes into or out from them, but no service can filter what flows between other services in the same email system.) I believe that some of the attributes inherent to "Protocols, Not Platforms" are things that we should require of platforms whose failure to support open protocols prevents our enjoying their benefits.
/div>Moderation is okay -- if it is chosen by users (i.e. optional)
I am fully aware of the value of content moderation and often regret that better moderation isn't available. (I would really like to be free of all Facebook posts concerning sports or that are "quizzes," even when forwarded by folk I follow.) My objection is to the channel itself imposing moderation without my consent or input. I would not even object to Facebook, Twitter, etc. providing a moderation service to accompany their channel -- as long as I could opt out of it. Ideally, the channel services would others with access to the interfaces needed to build competitive moderation services. (Just as telephony operators are often required to provide others with access, at reasonable cost, to interfaces equivalent to those used by the telephony service when delivering value-added services. I'd like to see a generalization of this rule applied more broadly.)
If content moderation was optional and competitive, I think we'd see a great deal more useful innovation in moderation systems and much happier users.
The problem with non-optional moderation wired into the channel itself is that there is no way that any single operator can select "community standards" that accurately reflect the standards of all the different communities that use the channel. A simple example can be found in practice of moderating pictures that show exposed female chests. While there are many communities that applaud such moderation, and there are even some that would require it by law, here in New York, it is completely legal for women to expose their chests in public. So, how can a service justify banning the portrayal of something that is legal in this state? Is Facebook a higher authority than our State's Legislature or our Courts? I don't think so. On the other hand, individual New Yorkers, who might be offended or disturbed by exposure to this legal behavior, should be free to choose moderation services that shield them from such unwelcome sights. It should be the individual's choice, not the channels' or the platform's choice.
Community standards change over time. If we had had the Internet 100 years ago, we might have seen services that chose to ensure that blacks were excluded from "mixing" with whites or that Jews or Muslims were excluded. In those days, a service provider might have reasonably said that failing to prevent such mixing would hurt the commercial prospects for their service. Many would have accepted that as a reasonable belief. (Even today, we remain burdened with people who would agree with such a claim and who regret that such selectivity isn't the norm...) Nonetheless, it should be clear that this once "reasonable" moderation and selection would have caused a great and enduring harm. The problem here is that there may remain some kinds of statement, or type of speaker, or whatever, that is broadly considered a proper subject of moderation today but that we will discover, at some future time, should not have been moderated. One way to avoid non-optional moderation that enforces implicit and often unrecognized bias would be to ensure that individuals, not channels, are the ones who select the moderation regime.
So, no, I don't object to moderation per se. Rather, I object to the providers of channels which have become important components of our nations' infrastructure for public discourse imposing their idea of appropriate discourse on the rest of us. I believe moderation should be considered a value added service and I'd like it to be a competitively provided service. In any case, if moderation is to be done, it should done optionally and with the user's approval, it should not be required.
/div>Re: Re: Re: Re: Re: Re: Re: I have One Simple Question for you.
Repeatedly complaining that others are repeatedly claiming something is not very useful. It is apparent that you disagree with the idea that corporations should have different (often lesser) rights than people do. That is fine. You have a right to your opinion. But, others also have a right to maintain their disagreement with you. The issue here isn't the repetition, It is that we differ in how the law or Constitution should be interpreted.
Yes, the Supreme Court seems to agree with you that "Corporations are People," however, that remains a controversial opinion on which we should be free to differ -- even if, for the moment, that disagreement has little impact. Things may change. As we've often seen in the past, the Supreme Court may change its position once the full consequences of their past decisions are made clear. It is not guaranteed that the Supreme Court will forever maintain its current interpretation of the Constitution. As does everyone, they occasionally make mistakes and later correct them.
Personally, I find it very suspect that a corporation, other than one that qualifies as "Press," should be granted broad speech rights, particularly when those rights include an ability to restrict the ability of others to speak as they wish. My concern here is for non-press organizations who are in the business of making money by hosting or facilitating the speech of others rather the business of broadcasting or publishing their own speech. Thus, while I would strongly oppose any suggestion that the New York Times should be forced to publish anything that anyone might want published there, I am much less concerned about restricting the ability of a purely facilitative "carrier" service (like Facebook, Twitter, Gmail, etc.) to restrict the legal speech of its users.
Under long-established law, we frequently impose on commercial entities an "obligation to serve" in a non-discriminatory manner. Operators of distribution systems (water, gas, electricity, telephony, etc.) are often required to provide service to all who request it -- whether or not they agree with the purposes for which their services will be used. (A telephone company must provide service to its union members, to those who oppose its policies in rate cases, and even to its competitors.) While such services are allowed to establish reasonable interconnection standards, intended only to preserve their capacity to serve, they can't otherwise restrict the provision of service for legal purposes.
I don't see how the distribution, for profit, of packets of data via social media sites, email systems, or any other interchange systems, differs enough from the distribution of any other good to require that their distributor be granted a right to control the content of what it distributed.
/div>Re: Re: Protocols, Not Platforms. We must find a way to lead...
if by education you mean training in critical thinking, I believe that while it would be certainly be useful, it is insufficient. Something more is needed. We can't expect the average user to critically analyze every piece of content to which they are exposed. They need triggers or signals to indicate which content might warrant extra analysis. So, where do the signals come from? I suggest that the proper response to "bad" speech is more speech.
I'm intrigued by the potential that annotation systems might have as technical means to improve the process of public discourse. If every user, as well as "fact checking" groups, were able to make easily discoverable comments on or flag at least public content, then users might be much better able to discover which content should be more carefully analyzed.
Imagine if web browsers supported an ability, using the W3 Annotation protocol, to create comments, ratings, etc. that users could choose to have displayed next to web pages. The browser might then offer an indication that "100 people approve, 200 disapprove" of the content on a web page or on some part of it. That notification that the content was controversial would be a useful signal for a user who had been educated in the methods of critical thinking and content analysis. That user might then inspect the specific annotations, perhaps filtered in useful ways (e.g. "only show annotations from people or organizations I trust") or employ other methods to assess the apparently controversial content.
Annotation systems, at least for web resources, operate independently of those resources and thus impose no burdens on publishers. An annotation system like Hypothes.is or Dokieli can provide a great deal of commentary on a site with absolutely no impact on or cost for that site. In fact, many whose content is annotated don't even know it is happening.
It seems to me that the "problem" with online speech isn't that people are able to say stupid or bad things, but rather that today they can often do so in ways that afford no opportunity for response. But, at least for content identified by an URL, etc. annotations systems can remove ability to speak without response. I think this would be a good thing.
Whether or not annotations would be a useful technical measure, I must admit that no one has yet figured out how to make such systems popular enough to provide a useful counterweight. Nonetheless, I think it would be useful to create a dialog about how annotation might be used and popularized. From that discussion, we might see at least a partial technical solution emerge.
/div>Re: Protocols, Not Platforms. We must find a way to lead...
Why was the comment to which this is a reply "flagged by the community?" I can see nothing objectionable in it.
In any case, I think it is interesting to note that if we had "Protocols, not Platforms," then the discussion would be very different. There wouldn't be pressure on individual companies to censor content since no single company would control the entire discussion. If Twitter were simply one of several clients for a "Tweeting Protocol," I suspect that we'd see various client providers competing based on the qualities of the services they provided. Some would distinguish themselves by offering more effective or easier to use interfaces, others might offer unique moderation services. In fact, we might even see the development of a number of competing content moderation services which could be optionally selected by users. These moderation services would vary according to the focus of their moderation (sex, politics, use of language, etc.) or the techniques used (automated, manual, etc.). It would be more like the distribution of USENET Newsgroups where some ISPs block various newsgroups but any user who wants to access a blocked feed can find an alternative provider. Competing clients and services would also be able to support a great deal of variety in the provision of credibility signals, links to context, etc. We wouldn't have to rely on any single provider to pursue innovation in the way the discourse was supported.
If there were an open "Tweeting Protocol" that provided for content to be passed through intermediary nodes, as with store-and-forward email, I suspect that many would argue that those intermediary nodes should not be permitted to impose moderation on messages that flow through them and might be temporarily stored on their equipment.
I would greatly prefer a world in which public discourse was distributed via open protocols supported by competitive services and interfaces. Unfortunately, supporting open protocols is often not in the commercial interest of incumbent large providers. (We saw, for instance, Facebook's refusal to support federated XMPP; Google, too, later dropped XMPP federation.) We have protocols that could be used to provide what Facebook or Twitter provide, but it is exceptionally unlikely that anyone will successfully build an open system that becomes an effective alternative to the existing walled services. This path-dependency should concern us all.
Re: Re: Re: Re: Re: Not such a good idea now is it?
The mere popularity of the club as a place to socialize would not be compelling and should have no bearing on government regulation. However, if the government came to use that club to make announcements, and if the President of the United States and members of Congress used it to make statements not as easily accessible elsewhere, or to receive comments there more readily than from other venues, then I might judge that the club had become more than a mere club -- that it had undergone a qualitative transformation into an essential realm of public discourse, one that should not be selective in allowing admission.
It isn't the popularity of the club that would be determinative. It is the role that the club plays in the public discourse of the nation, or of the community, that should be the focus of attention.
Re: Re: Re: I have One Simple Question for you.
Yes, even truly vile speech is protected. As has been demonstrated in cases like that concerning the Skokie Nazi march, even anti-Semitic gutter trash have the right to spew their filth. I do not suggest that we should limit their rights -- as much as I might wish they didn't use them. The problem, of course, is that any attempt to constrain the anti-Semites is likely to eventually have the unintended consequence of limiting the speech of "good" people. Fortunately, most speech is protected.
Re: I have One Simple Question for you.
Certainly the government should not be able to compel "any" privately owned service to host speech; however, the government does compel common carriers, such as telephony providers, to carry speech without moderation. Similarly, I think the government should be free to compel at least some web services to refrain from moderating legal speech -- but only some, not "any."
If one establishes a purportedly neutral, agenda-free channel for communication, and if that channel is successful in becoming an important part of the infrastructure for public discourse, I believe that the government should be free to declare it a common carrier and constrain its ability to moderate legal speech. On the other hand, something like a web service for "conservative" or "liberal" discussion, or a service hosting topical discussions, such as "dog breeding," should not be compelled to carry content not fitting that service's charter.
My personal feeling is that services like Twitter, Facebook, and a very few others, have grown to the point where they are, in fact, essential elements of our nation's, if not the world's, infrastructure for public discourse. As such, they should be regulated in order to protect that public discourse from which they profit. These channels are distributors of general discourse, and, like many other distribution systems, they have many of the characteristics of "natural monopolies." Thus, they can be reasonably regulated in the same way that we regulate telephone companies, electric and gas distributors, or operators of our water and sewer supplies.
I think the real danger is not in the regulation of a few, exceptional services, but rather in attempts to write or enforce rules that apply to all providers of discourse-based services. The danger is in writing general laws to control exceptional circumstances. It is essential that we recognize that not all communications channels are significant enough to warrant regulation, and we should seek to regulate as little as possible even when regulation is justified. Also, we should recognize that even among the services owned by a single private entity, there are distinctions to be drawn. For instance, rules that might reasonably apply to Twitter or to Facebook's public channels should not apply to a Facebook group that I host for discussions among fellow alumni of my high school. My "Alumni Group" provides for a limited, topical discussion; it is not a general channel for public discourse. As such, it should be protected from external control even though other Facebook services are, I think, more reasonably regulated.
So, no, I can't give you a yes-or-no answer. The best I can do is say that it depends on the nature and power of the service. But the mere fact that a service is privately owned is, I think, insufficient to determine whether it should be subject to regulation.
Re: Re: Re: Not such a good idea now is it?
You wrote: "We can always have more companies; but there only is one government."
Well, any government that exists today is only the latest in a very long sequence of previous governments...
In any case, this often-cited government v. corporation distinction doesn't ring true for me. I think the reality is that at the time when the US Constitution was written, it simply wasn't conceivable that any non-governmental entity, other than perhaps a Church, would be able to accumulate sufficient power to have significant control over public discourse. But, today, some private entities at least match, if not exceed, the ability of government or churches to control public discourse.
Madison's first draft of what became the First Amendment read, in part: "The people shall not be deprived or abridged of their right to speak, to write, or to publish their sentiments..."
In that draft, the focus was on protecting the "right to speak" itself, rather than on naming just who (e.g. the government) might be forbidden from interfering with that right. My personal feeling is that once any entity, government or not, accumulates sufficient power to deprive or abridge our ability to engage in public discourse, we should be concerned and seek to redress that power.
The issue here is with the possession and use of power over public discourse, not just with who might be wielding that power.
Re: Not such a good idea now is it?
It is clearly wrong for the government, or some corporation, to have the power to judge and control the content of public discourse. But, it is also quite problematic to endure a system that allows lies and misinformation to be spread as easily as they are today. So, where is the middle ground?
What can be done to mitigate the problem sufficiently so that fools are not so easily tempted to encourage censorship?
Protocols, Not Platforms. We must find a way to lead...
Perhaps I give her too much credit, but I assume that Klobuchar is not an idiot and thus that she recognizes the danger in her proposal. I am therefore tempted to see this bill as an act of desperation more than as a well-considered approach to a general problem. And Klobuchar is not the only one getting desperate. The failure of the technical community to lead by offering concrete proposals that address the problem of misinformation and credibility within the constraints of our Constitution is something that I think we will long regret.
So, how many people who read these posts, or write their own, are actually part of the process of solving these issues? If you don't work at Facebook, Twitter, or Parler, are you at least involved in industry forums dedicated to crafting solutions? Are you a member of the W3C's Credible Web (CredWeb) Community Group (https://credweb.org/)? If not, why not? If you're working with some other group, what is it, and how can the rest of us get involved in helping to define the protocols, features, or systems that might mitigate this problem sufficiently that desperate proposals like Klobuchar's are no longer taken seriously?
bob wyman
App Stores should be regulated like the monopolies that they are.
App stores, as distributors, are natural monopolies, and are thus not subject to market pressure on either price or quality of service. As natural monopolies they should be regulated in the same way that we regulate distributors of electricity, gas, water, or telephone traffic.
Their revenues should be limited to the actual cost of service provided plus a reasonable return on investment. If they provide no service for a transaction, as is often the case with in-app purchases, they should collect no revenue from it.
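For readers unfamiliar with cost-of-service ratemaking, the arithmetic regulators apply to utilities is simple; here is a toy illustration with entirely invented figures:

    // Toy cost-of-service calculation, as used in utility rate regulation.
    // All figures are invented for illustration only.
    const operatingCost = 50_000_000;  // annual cost of providing the service
    const rateBase      = 200_000_000; // invested capital
    const allowedReturn = 0.08;        // regulator-approved rate of return

    // Revenue requirement = operating cost + allowed return on investment.
    const revenueCap = operatingCost + allowedReturn * rateBase; // $66,000,000

A purchase for which the store performs no service adds nothing to the recoverable cost and so, under this model, could support no charge.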
As with telephone network providers, app stores' control over the content or function of applications should be limited to what is necessary to maintain the integrity and function of devices (i.e., coding standards and protection against viruses or hacks). Also, they should not be able to bar or disadvantage apps that "compete" with their own apps.
bob wyman
Community Standards or Laws and Content Moderation?
Should community standards or local laws influence content moderation? If so, which ones?
In New York City, it has been legal for quite some time for women to appear topless in public. Given this, should content moderators, who might restrict depictions of breasts for readers in some areas, refrain from restricting such images when they are displayed within the confines of New York City? If not, do we accept the rule that content moderation must always impose the most restrictive interpretation of what is or is not appropriate or permissible?
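The design question can be put in code form. A sketch, with invented policy names, contrasting a jurisdiction-aware lookup with a most-restrictive default:

    // Hypothetical: jurisdiction-aware moderation vs. a most-restrictive default.
    type Policy = "allow" | "restrict";

    const topFreedomByRegion: Record<string, Policy> = {
      "US-NYC": "allow", // legal in New York City
      "US-default": "restrict",
    };

    function moderate(viewerRegion: string, mostRestrictive: boolean): Policy {
      if (mostRestrictive) return "restrict"; // one rule for everyone, everywhere
      return topFreedomByRegion[viewerRegion] ?? topFreedomByRegion["US-default"];
    }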
For NPR's commentary on New York City law, see: https://www.npr.org/sections/thetwo-way/2015/08/24/434315957/topless-in-new-york-the-legal-case-that-makes-going-top-free-legal-ish