Everything That's Wrong With Social Media Companies and Big Tech Platforms, Part 3
from the the-list-keeps-growing dept
I've written two installments in this series (part 1 is here and part 2 is here). And while I could probably turn itemizing complaints about social-media companies into a perpetual gig somewhere — because there's always going to be new material — I think it's best to list just a few more for now. After that, we ought to step back and weigh what reforms or other social responses we really need. The first six classes of complaints are detailed in Parts 1 and 2, so we begin here in Part 3 with Complaint Number 7.
(7) Social media are bad for us because they're so addictive to us that they add up to a kind of deliberate mind control.
As a source of that generalization we can do no better than to begin with Tristan Harris's July 28, 2017 TED talk, titled "How a handful of tech companies control billions of minds every day."
Harris, a former Google employee, left the company in 2015 to start a nonprofit organization called Time Well Spent. That effort has since been renamed the Center for Humane Technology (http://www.timewellspent.io now resolves to https://humanetech.com). Harris says his new effort — which also has the support of former Mozilla interface designer Aza Raskin and early Facebook funder Roger McNamee — represents a social movement aimed at making us more aware of the ways in which technology, including social media and other internet offerings as well as our personal devices, is continually designed and redesigned to make it more addictive.
Yes, there's that notion of addictiveness again — we looked in Part 2 at claims that smartphones are addictive and talked about how to address that problem. But regarding the "mind control" variation of this criticism, it's worth examining Harris's specific claims and arguments to see how they compare to other complaints about social media and big tech generally. In the TED talk, Harris begins with the observation that social-media notifications on your smart devices may lead you to have thoughts you otherwise wouldn't think:
"If you see a notification it schedules you to have thoughts that maybe you didn't intend to have. If you swipe over that notification, it schedules you into spending a little bit of time getting sucked into something that maybe you didn't intend to get sucked into."
But, as I've suggested earlier in this series, this practice of continually tweaking content to attract your attention isn't unique to internet content or to our digital devices. This is something every communications company has always done — it's why ratings services for traditional broadcast radio and TV exist. Market research, together with attempts to deploy that research and to persuade or manipulate audiences, has been at the heart of the advertising industry for far longer than the internet has existed, as Vance Packard's 1957 book THE HIDDEN PERSUADERS suggested decades ago.
One major theme of Packard's THE HIDDEN PERSUADERS is that advertisers came to rely less on consumer surveys (derisively labeled "nose-counting") and more on "motivational research" — often abbreviated by 1950s practitioners as "MR" — to look past what consumers say they want. Instead, the goal is to observe how consumers actually behave, and then gear advertising content to shape or leverage their unconscious desires. Packard's narratives in THE HIDDEN PERSUADERS are driven by revelations of the disturbing and even scandalous agendas of MR entrepreneurs and the advertising companies that hire them. Even so, Packard is careful, in the book's penultimate chapter, to address what he calls "the question of validity" — that is, the question of whether the "hidden persuaders'" strategies and tactics for manipulating consumers and voters are actually scientifically grounded. Quite properly, Packard acknowledges that the claims of the MR companies may have been oversold, or may have been adopted by companies that simply lacked any other strategy for figuring out how to reach and engage consumers.
In spite of Packard's scrupulous efforts to make sure that no claims of advertising's superpowers to sway our thinking are accepted uncritically, our culture nevertheless has accepted, at least provisionally, the idea that advertising (and its political cousin, propaganda) affects human beings at pre-rational levels. It is this acceptance of the idea that content somehow takes us over that Tristan Harris invokes consistently in his writings and presentations about how social media, the Facebook news feed, and internet advertising work on us.
Harris prefers to describe how these online phenomena affect us in deterministic ways:
"Now, if this is making you feel a little bit of outrage, notice that that thought just comes over you. Outrage is a really good way also of getting your attention. Because we don't choose outrage — it happens to us."
"The race for attention [is] the race to the bottom of the brainstem."
Nothing Harris says about the Facebook news feed would have seemed foreign to a Madison Avenue advertising executive in, say, 1957. (Vance Packard includes commercial advertising as well as political advertising as centerpieces of what he calls "the large-scale efforts being made, often with impressive success, to channel our unthinking habits, our purchasing decisions, and our thought processes by the use of insights gleaned from psychiatry and the social sciences.") Harris describes Facebook and other social media in ways that reflect time-honored criticisms of advertising and of mass media generally.
But remember that what Harris says about internet advertising or Facebook notifications or the Facebook news feed is true of all communications. It is the very nature of communications among human beings that they give us thoughts we would not otherwise have. It is the very nature of hearing things or reading things or watching things that we can't unhear them, or unread them, or unwatch them. This is not something uniquely terrible about internet services. Instead it is something inherent in language and art and all communications. (You can find a good working definition of "communications" in Article 19 of the United Nations' Universal Declaration of Human Rights, which states that individuals have the right "to seek, receive and impart information.") That some people study and attempt to perfect the effectiveness of internet offerings — advertising or Facebook content or anything else — is not proof that they're up to no good. (They arguably are exercising their human rights!) Similarly, the fact that writers and editors, including me, study how to make words stick in your brain is not an assault on your agency.
It should give us pause that so many complaints about Facebook, about social media generally, about internet information services, and about digital devices actively (if maybe also unconsciously) echo complaints that have been made about every new mass medium (or mass-media product). What's lacking in modern efforts to criticize social media in particular — especially when it comes to big questions like whether social media are damaging to democracy — is any effort by most critics to look at their own hypotheses skeptically, seeking falsification (which the philosopher Karl Popper rightly identified as a better test of a theory's robustness) rather than verification.
As for all the addictive harms that are caused by combining Facebook and Twitter and Instagram and other internet services with smartphones, isn't it worth asking critics whether they've considered turning notifications off for the social-media apps?
(8) Social media are bad for us because they get their money from advertising, and advertising — especially effective advertising — is inherently bad for us.
Harris's co-conspirator Roger McNamee, whose authority to make pronouncements on what Facebook and other services are doing wrong derives primarily from his having gotten richer from them, is blunter in his assessment of Facebook as a public-health menace:
"Relative to FB, the combination of an advertising model with 2.1 billion personalized Truman Shows on the ubiquitous smartphone is wildly more engaging than any previous platform ... and the ads have unprecedented effectiveness."
There's a lot to make fun of here: the presumption that 2.1 billion Facebook users are just creating "personalized Truman Shows," for example. Only someone who fancies himself part of an elite that's immune to what Harris calls "persuasion" would presume to draw that conclusion about the hoi polloi. But let me focus instead on the second part: the bit about the ads with "unprecedented effectiveness." Here the idea is, obviously, that advertising may be better for us when it's less effective.
Let's allow for a moment that maybe that claim is true! Even if that's so, advertising has played a central role in Western commerce for at least a couple of centuries, and in world commerce for at least a century, and the idea that we need to make advertising less effective is, I think, fairly clearly a criticism of capitalism generally. Now, capitalism may very well deserve that sort of criticism, but it seems like an odd critique coming from someone who has already profited immensely from that capitalism.
And it also seems odd that the critique is focused particularly on social media when, as the helpful example of THE HIDDEN PERSUADERS reminds us, we've been theoretically aware of the manipulations of advertising for all of this century and at least half of the previous one. If you're going to go after commercialism and capitalism and advertising, you need to go big: you can't just say that advertising suddenly became a threat to us because it's more clearly targeted to us based on our actual interests. (Arguably that's a feature rather than a bug.)
In responding to these criticisms, McNamee says "I have no interest in telling people how to live or what products to use." (I think the meat of his and Harris's criticisms suggests otherwise.) He explains his concerns this way:
"My focus is on two things: protecting the innocent (e.g., children) from technology that harms their emotion development and protecting democracy from interference. I do not believe that tech companies should have the right to undermine public health and democracy in the pursuit of profits."
As is so often the case with entrepreneurial moral panics, the issue ultimately devolves to "protecting the innocent" — some of whom surely are children, but the rest of whom are, well, the rest of us. In an earlier part of his exploration of these issues on the venerable online conferencing system The WELL, McNamee makes clear, in fact, that he really is talking about the rest of us (adults as well as children):
"Facebook has 2.1 billion Truman Shows ... each person lives in a bubble tuned to their emotions ... and FB pushes emotional buttons as needed. Once it identifies an issue that provokes your emotions, it works to get you into groups of like-minded people. Such filter bubbles intensify pre-existing beliefs, making them more rigid and extreme. In many cases, FB helps people get to a state where they are resistant to ideas that conflict with the pre-existing ones, even if the new ideas are demonstrably true."
These generalizations wouldn't need much editing to fit 20th-century criticisms of TV or advertising or comic books or 19th-century criticisms of dime novels or 17th-century criticisms of the theater. What's left unanswered is the question of why this new mass medium is going to doom us when none of the other ones managed to do it.
(9) Social media need to be reformed so they aren't trying to make us do anything or get anything out of us.
It's possible we ultimately may reach some consensus on how social media and big internet platforms generally need to be reformed. But it's important to look closely at each reform proposal to make sure we understand what we're asking for, and also that we're clear on what the reforms might take away from us. Once Harris's TED talk gets past the let-me-scare-you-about-Facebook phase, it gets better — Harris has a program for reform in mind. Specifically, he proposes what he calls "three radical changes to our society," which I will paraphrase and summarize here.
First, Harris says, "we need to acknowledge that we are persuadable." Here, unfortunately, he elides the distinction between being persuaded (which involves evaluation and crediting of arguments or points of view) and being influenced or manipulated (which may happen at an unconscious level). (In fairness, Vance Packard's THE HIDDEN PERSUADERS is guilty of the same elision.) But this first proposition isn't radical at all — even if we're sticks-in-the-mud, we normally believe we are persuadable. It may be harder to believe that we are unconsciously swayed by how social media interact with us, but I don't think it's exactly a radical leap. We can take it as a given, I think, that internet advertising and Facebook's and Google's algorithms try to influence us in various ways, and that they sometimes succeed. The next question then becomes whether this influence is necessarily pernicious, but Harris passes quickly over this question, assuming the answer is yes.
Second, Harris argues, we need new models and systems of accountability, guaranteeing accountability and transparency in the ways our internet services and digital devices try to influence us. Here there's very little to argue with. Transparency about user-experience design that makes us more self-aware is all to the good. So that doesn't seem like a particularly radical goal either.
It's in Harris's third proposal — "We need a design renaissance" — that you actually do find something radical. As Harris explains it, we need to redesign our interactions with services and devices so that we're never persuaded to do something that we may not initially want to do. He states, baldly, that "the only form of ethical persuasion that exists is when the goals of the persuader are aligned with the goals of the persuadee." This is a fascinating proposition that, so far as I know, is not particularly well-grounded in fact or in the history of rhetoric or in the history of ethics. It seems clear that sometimes it's necessary to persuade people of ideas that they may be predisposed not to believe, and that, in fact, they may be more comfortable not believing.
Given that fact, it follows that if we are worried about whether Facebook's algorithms lead to "filter bubbles," we should hesitate to call for (or design) a system built around the idea of never persuading anyone whose goals aren't already aligned with yours. Arguably, such a social-media platform might be more prone to filter bubbles rather than less so. One doesn't get the sense, reviewing Harris's presentations or other public writings and statements from allies of his like Roger McNamee, either that they've compared current internet communications with previous revolutions driven by new mass-communications platforms, or that they've analyzed their theories in light of the centuries of philosophical inquiry regarding human autonomy, agency, and ethics.
Moving past Harris's TED talk, we next must consider McNamee's recent suggestion that Facebook move from an advertising-supported model to a for-pay model. In a February 21 Washington Post op-ed, McNamee wrote the following:
"The indictments brought by special counsel Robert S. Mueller III against 13 individuals and three organizations accused of interfering with the U.S. election offer perhaps the most powerful evidence yet that Facebook and its Instagram subsidiary are harming public health and democracy. The best option for the company — and for democracy — is for Facebook to change its business model from one based on advertising to a subscription service."
In a nutshell, the idea here is that the incentives of advertisers, who want to compete for your attention, will necessarily skew how even the most well-meaning version of advertising-supported Facebook interacts with you, and not for the better. So the fix, he argues, is for Facebook to get rid of advertising altogether. "Facebook's advertising business model is hugely profitable," he writes, "but the incentives are perverse."
It's hard to escape the conclusion that McNamee believes either (a) advertising is inherently bad, or (b) advertising made more effective by automated internet platforms is particularly bad. Or both. And maybe advertising is, in fact, bad for us. (That's certainly a theme of Vance Packard's THE HIDDEN PERSUADERS, as well as of more recent work such as Tim Wu's 2016 book THE ATTENTION MERCHANTS.) But it's also hard to escape the conclusion that McNamee, troubled by Brexit and by President Trump's election, wants to kick the economic legs out from under Facebook's (and, incidentally, Google's and Bing's and Yahoo's) economic success. Algorithm-driven serving of ads is bad for you! It creates perverse incentives! And so on.
It's true, of course, that some advertising algorithms have created perverse incentives (so that Candidate Trump's provocative ads were seen as more "engaging" and therefore were sold cheaper — or, alternatively, more expensively — than Candidate Clinton's). I think the criticism of that particular algorithmic approach to pricing advertising is valid. But there are other ways to design algorithmic ad service, and it seems to me that the companies that have been subject to the criticisms are being responsive to them, even in the absence of regulation. This, I think, is the proper way to interpret Mark Zuckerberg's newfound reflection (and maybe contrition) over Facebook's previous approach to its users' experience, and his resolve — honoring without mentioning Tristan Harris's longstanding critique — that "[o]ne of our big focus areas for 2018 is making sure the time we all spend on Facebook is time well spent."
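To make that pricing dynamic concrete, here's a minimal sketch (in Python) of an engagement-weighted ad auction. Everything in it (the bid-times-engagement rank score, the campaign names, the numbers) is an illustrative assumption, not a description of Facebook's actual auction.

```python
# Illustrative only: a toy engagement-weighted ad auction. The rank
# score (bid x predicted engagement) and all numbers are assumptions,
# not any platform's real mechanism.

def rank_score(bid_dollars: float, engagement_rate: float) -> float:
    """What the toy auction actually compares when picking a winner."""
    return bid_dollars * engagement_rate

# Two hypothetical campaigns competing for the same impression.
provocative = {"name": "provocative ad", "bid": 1.00, "engagement": 0.08}
staid = {"name": "staid ad", "bid": 1.00, "engagement": 0.02}

for ad in (provocative, staid):
    ad["score"] = rank_score(ad["bid"], ad["engagement"])

winner, loser = sorted(
    (provocative, staid), key=lambda a: a["score"], reverse=True
)

# The minimum bid the less "engaging" ad would need to tie the winner.
bid_to_tie = winner["score"] / loser["engagement"]
print(f"{winner['name']} wins the impression at ${winner['bid']:.2f}")
print(f"{loser['name']} would need to bid ${bid_to_tie:.2f} to tie")
# Here the provocative ad buys the same impression at a quarter of the
# price, which is the "perverse incentive" in a nutshell.
```

Pricing by monetary bid alone, or capping the engagement multiplier, would be among the "other ways to design algorithmic ad service" mentioned above.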
Some Alternative Suggestions for Reform and/or Investigation
It's not too difficult, upon reflection, to wonder whether the problem of "information cocoons" or "filter bubbles" is really as terrible as some critics have maintained. If hyper-addictive filter bubbles have historically unprecedented power to overcome our free will, surely they would have this effect even on the most assertive, independently thinking, strong-minded individuals — like Tristan Harris or Roger McNamee. Even six-sigma-degree individualists might not escape! But the evidence that this is, in fact, the case is less than overwhelming. What seems more likely (especially in the United States and in the EU) is that people who are dismayed by the outcome of the Brexit referendum or the U.S. election are trying to find a Grand Unifying Theory to explain why things didn't work out the way they'd expected. And social media are new, and they seem to have been used by mischievous actors who want to skew political processes, so it follows (the reasoning goes) that the problem is rooted in technology generally or in social media or in smartphones in particular.
But nothing I write here should be taken as arguing that social media definitely aren't causing or magnifying harms. I can't claim to know for certain. And it may well be the case, in fact, that some large subset of human beings create "filter bubbles" for themselves regardless of what media technologies they're using. That's not a good thing, and it's certainly worth figuring out how to fix that problem if it's happening, but treating it as a phenomenon specific to social media perhaps focuses on a symptom of the human condition rather than a disease grounded in technology.
In this context, then, the question is, what's the fix? There are some good suggestions for short-term fixes, such as the platforms' adopting transparency measures regarding political ads. That's an idea worth exploring. Earlier in this series I've written about other ideas as well (e.g., using grayscale on our iPhones).
There are, of course, more general reforms that aren't specific to any particular platform. To start with, we certainly need to address more fundamental problems — meta-platform problems, if you will — of democratic politics, such as the need to teach critical thinking. We actually do know how to teach critical thinking — thanks to the ancient Greeks we've got a few thousand years of work done already on that project — but we've lacked the social will to teach it universally. It seems to me that this is the only way by which a cranky individualist minority that's not easily manipulated by social media, or by traditional media, can become the majority. Approaching all media (including radio, TV, newspapers, and other traditional media — not just internet media, or social media) with appropriate skepticism has to be part of any reform policy that will lead to lasting results.
It's easy, however, to believe that education — even the rigorous kind of education that includes both traditional critical-thinking skills and awareness of the techniques that may be used in swaying our opinions — will not be enough. One may reasonably believe that education can never be enough, or that, even when education is sufficient to change behavior (consider the education campaigns that reduced smoking or led to increased use of seatbelts), education all by itself simply takes too long. So, in addition to education reforms, there probably are more specific reforms — or at least a consensus as to best practices — that Facebook, other platforms, advertisers, government, and citizens ought to consider. (It seems likely that, to the extent private companies don't strongly embrace public-spirited best-practices reforms, governments will be willing to impose such reforms in the absence of self-policing.)
One of the major issues that deserve more study is the control and aggregation of user information by social-media platforms and search services. It's indisputable that online platforms have potentiated a major advance in market research — it's trivially easy nowadays for the platforms to aggregate data as to which ads are effective (e.g., by inspiring users to click through to the advertisers' websites).
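To illustrate just how trivial that aggregation can be, here is a minimal sketch in Python; the event log, ad IDs, and counts are all hypothetical.

```python
# A toy illustration of "trivially easy" ad-effectiveness measurement:
# per-ad click-through rates computed from a raw event log.
# The log entries and ad IDs are hypothetical.
from collections import Counter

events = [  # (ad_id, event_type)
    ("ad_A", "impression"), ("ad_A", "impression"), ("ad_A", "click"),
    ("ad_B", "impression"), ("ad_B", "impression"), ("ad_B", "impression"),
    ("ad_B", "click"), ("ad_B", "click"),
]

impressions = Counter(ad for ad, kind in events if kind == "impression")
clicks = Counter(ad for ad, kind in events if kind == "click")

for ad_id in sorted(impressions):
    ctr = clicks[ad_id] / impressions[ad_id]
    print(f"{ad_id}: {clicks[ad_id]}/{impressions[ad_id]} = {ctr:.0%} CTR")
```

At platform scale the same arithmetic simply runs over billions of logged events, which is what makes the opt-out question that follows so pointed.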
Surely we should be able to opt out, right? But there's an unsettled public-policy question about what opting out of Facebook means or could mean. In his testimony earlier this year at Senate and House hearings on Facebook, Mark Zuckerberg consistently stressed that individual users do have a high degree of control over the data (pictures, words, videos, and so on) that they've contributed to Facebook, and that users can choose to remove the data they've contributed. Recent updates to Facebook's privacy policy seem to underscore users' rights in this regard.
It seems clear that Facebook is committing itself at least to what I call Level 1 Privacy: you can erase your contributions from Facebook altogether and "disappear," at least when it comes to information you have personally contributed to the platform. But does it also mean that other people who've shared my stuff can no longer share it (in effect, allowing me to punch holes in other people's sharing of my stuff when I depart)?
If Level 1 Privacy relates to the information (text, pictures, video, etc.) that I've posted, that's not the end of the inquiry. There's also what I have called Level 2 Privacy, centering on what Facebook knows about me, or can infer from my having been on the service, even after I've gone. Facebook has had a proprietary interest in drawing inferences from how we interact with its service and using them to inform what content (including but not limited to ads) Facebook serves to us. That's Facebook's data, not mine, because Facebook generated it, not me. If I leave Facebook, surely Facebook retains some data about me based on my interactions on the platform. (We also know, in the aftermath of Zuckerberg's testimony before Congress, that Facebook manages to collect data about people who themselves are not users of the service.)
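Because the Level 1/Level 2 distinction is easy to gloss over, here is a minimal sketch of it in code; the record structure, field names, and deletion behavior are hypothetical assumptions for illustration, not a description of Facebook's actual systems.

```python
# A toy model of Level 1 vs. Level 2 data. Deleting what the user
# contributed (Level 1) leaves the platform-generated inferences
# (Level 2) untouched -- which is exactly the policy gap at issue.
# All structures and field names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class UserRecord:
    user_id: str
    contributed: list = field(default_factory=list)  # posts, photos (Level 1)
    inferred: dict = field(default_factory=dict)     # platform-made (Level 2)

def delete_contributions(record: UserRecord) -> None:
    """Level 1 erasure: remove only what the user personally posted."""
    record.contributed.clear()

alice = UserRecord("alice")
alice.contributed += ["vacation photo", "status update"]
alice.inferred.update({"interests": ["hiking"], "ad_segment": "outdoors"})

delete_contributions(alice)
print(alice.contributed)  # [] -- the user has "disappeared" (Level 1)
print(alice.inferred)     # inferences survive unless policy says otherwise
```

Level 3, below, asks what may be done with that surviving inferred data.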
And then there's Level 3 Privacy, which is the question of what Facebook can and should do with this inferential data that it has generated. Should Facebook share it with third parties? What about sharing it with governments? If I depart and leave a resulting hole in Facebook content, are there still ways to connect the dots so that not just Facebook itself, but also third-party actors, including governments, can draw reliable inferences about the now-absent me? In the United States, there arguably may be Fourth Amendment issues involved, as I've pointed out in a different context elsewhere. We may reasonably conclude that there should be limits on how such data can be used and on what inferences can be drawn. This is a public-policy discussion that needs to happen sooner rather than later.
Apart from privacy and personal-data concerns, we ought to consider what we really think about targeted advertising. If the criticism of targeted advertising, "motivational research," and the like historically has been that the ads are pushing us, then the criticism of internet advertising seems to be that internet-based ads are pulling us or even seducing us, based on what can be inferred about our inclinations and preferences. Here I think the immediate task has to be to assess whether the claims made by marketers and advertisers regarding the manipulative effects ads have on us are scientifically rigorous and testable. If the claims stand up to testing, then we have some hard public-policy questions we need to ask about whether and how advertising should be regulated. But if they aren't — if, in fact, our individual intuitions are correct that we retain freedom and autonomy even in the face of internet advertising and all the data that can be gathered about us — then we need to assert that freedom and autonomy and acknowledge that, just maybe, there's nothing categorically oppressive about being invited to engage in commercial transactions or urged to vote for a particular candidate.
Both the privacy questions and the advertising questions are big, complex questions that don't easily devolve to traditional privacy talk. If in fact we need to tackle these questions proactively, I think we must begin by defining what the problems are in ways that all of us (or at least most of us) agree on. Singling out Facebook reflects the kind of single-root-cause theory of what's wrong with our culture that may appeal to us as human beings — we all like straightforward storylines — but that doesn't mean it's correct. Other internet services harvest our data too. And non-internet companies have done so (albeit in more primitive ways) for generations. It is difficult to say they never should do so, and it's difficult to frame the contours of what best practices should be.
But if we're going to grapple with the question of regulating social-media platforms and other internet services, thinking seriously about what best practices should be, generally speaking, is the task that lies before us now. Offloading the public-policy questions to the platforms themselves — by calling on Facebook or Twitter or Google to censor antisocial content, for example — is the wrong approach, because it dodges the big questions that we need to answer. Plus, it would likely entrench today's well-moneyed internet incumbents.
Nobody elected Mark Zuckerberg or Jack Dorsey (or Tim Cook or Sundar Pichai) to do that for us. The theory of democracy is that we decide the public-policy questions ourselves, or we elect policymakers to do that for us. But that means we each have to do the heavy lifting of figuring out what kinds of reforms we think we want, and what kind of commitments we're willing to make to get the policies right.
Mike Godwin (mnemonic@gmail.com) is a Distinguished Senior Fellow at R Street Institute.
Filed Under: addictive, internet, persuasion, roger mcnamee, social media, time well spent, tristan harris, vance packard
Companies: facebook, google, twitter, youtube
Reader Comments
Idle thought: Alternative to “social media”
I picked this term up while hanging out on mastodon.social: “social interaction network”. Yes, it is a bit more unwieldy than “social media”. That said, it is a more accurate description of services such as Facebook and Twitter.
(Plus it abbreviates to “SIN”, so you can say you’re “SINing” when you’re browsing Twitter and be correct in multiple ways. :)
ProveAdvertisingHasRights
First, show me where in the Bill of Rights, the Constitution, or its amendments it states that 'advertising' takes on the persona of a person, or that advertising should have rights above and beyond said articles of our republic.
That's where I would start the discussion.
"the big questions that we need to answer" -- Are corporations "persons"?
Rid us of "corporate persons" first. What you worry about here is Techdirt-level ninnying.
Re:
I hear anti-psychotics are much less prone to inducing side effects these days.
Addictive... mind control
And for those who insist that they need to receive notifications, I'd bet that 90% (or more) of them can wait until their shift is done.
Re: Re: Re: Just Natural
The same could be said of any word or phrase that counts as, or is at least used as, a thought-terminating cliché. “SJW” is the prime example here, but SovCit lingo like “natural persons” and “common law” (as well as the “traveling”/“driving” distinction) counts as well.
I do think that the fundamental problem is with the person. Social media is just the latest fad. Filter bubbles are ultimately created by the person, while the social media merely "facilitate" them (it could be argued, though).
I'm interested in the question of whether targeted advertising, MR, and such are inherently bad. I personally don't think so. They're merely tools. It's how one uses these tools that matters.
As the article says, critical thinking needs to be taught, desperately IMO. Can these tools be used to leverage that teaching?
Re:
We are past all of this both technically and politically. Understanding what is being done to people is no longer the issue. Understanding the means of restitution is what remains.
The debate isn't real. It is just sandbagging. The discussion isn't real, it is just misdirection.
If you want to know how it works you read the manuals. Both the one written in 1789, and the ones written by the switch manufacturers. The feature sets from the two are mutually incompatible.
It isn't about "how" or "what" at this point. Now it is about picking a side and organizing for the relief of American civil rights.
Twitter constantly having interstitials come up asking you to re-enable push notifications is one issue.
Another is their periodic manipulation of the timeline to show content that the user may not want to see. It's obnoxious to see the same emotion-grabbing tweet from a few days ago five or six times while scrolling through my feed, but I have no way of changing Twitter's feed algorithm to show me tweets reverse chronologically or to *stop* showing me news about current events (as opposed to constant "Here's what you missed" or their attempts to be a news outlet with Twitter news).
I get that social media outlets want you to engage with them, but there's a point at which it feels like they're encouraging addiction and mental shortcuts rather than healthy engagement.
Mass-market technology often has unforeseen side-effects
Just as we had to "grow up" the car culture past things like cruising and street racing (those may still exist, here and there, but they are definitely fringe, these days), the same likely holds true for the internet culture. As in, we're quite a ways away from maturity there, due to the interactive nature of the thing. Cars changed little by little, and change was mostly limited to each model year. The internet responds in real time, by comparison.
One important point that the article seems to have missed is another alternative response to the interests of persuader and persuadee: that persuaders would do well to be less sneaky and/or underhanded in their persuasion. Many seem to have used advertising's reputation for being dishonest as license to be as dishonest as (legally) possible. Got a medical service to sell? Dress your sales performer in scrubs or a lab coat, hang a stethoscope over their neck, and as long as they sound like they're advocating for the product to PROTECT your HEALTH from SERIOUS RISK, you're good to go. That it's all a deception matters naught to them.
How that can be accomplished, I don't know, but one thing to keep in mind is that most mass-media outlets depend on advertising for income...
Criticizing the essay
What we see with the internet is the loss of barriers and gatekeepers to a mass audience. As such, it's time to return to general principles.
First, the larger and more powerful an entity is, the more limits need to be in place. The First Amendment says the government, a very powerful entity, should not abridge freedom of speech. When we get to companies advertising, there used to be limits on veracity. Think TV political ads with "paid for by" on them. We used to rank newspapers as reputable (New York Times) or not (National Enquirer). When we get to businesses, some are common carriers; that's the essence of net neutrality. We argue about whether businesses should be able to refuse service (see Red Hen and gay wedding cakes recently, and refusing service to non-whites in the past).
Advertising isn't all bad...now where do I find that cat brush I need???
The problem with the filter bubble is, well, I might just want to read people I *don't* like, or don't know I like, because they have new material and I'm bored with the interior of my bubble.
It also helps to see what *everyone* is seeing. TV had been such a unifying force on the language.
Hope these lenses help.
Re: Re: Re: Re: Re: Just Natural
Historically, TD community moderation is 90+ percent "good job" by my count, but I see nothing here worthy of censorship. In fact it seems counter to TD philosophy, which was fairly harsh on Citizens United, aka 'corporations are people, money is speech'... at least as best I can recall. Am I thinking of another site?
This seems like a terrible censorship decision... what am I missing?
Re:
I'm here reading, have been for more than a decade, and expect to be for years to come, due to the outstanding coverage and insightful views of legal issues that affect us all. I sorta agree with this guy, but I also recognize that it's not at all a fair assessment. TD is a decent news source, and how they achieve that is best left to their own good sense. Social grace and functional relationships are things we all must strive for, and I'm sure it's far more difficult for a journalist than any of us could imagine.
Censoring this sort of simple opinion just makes TD look bad; grow some thicker skin, for god's sake.
Re: Re:
This isn't censorship by the staff. This is the community of visitors seeing what this guy has to say, and clicking the red flag button to send a message that they don't think it's worthwhile to read it.
Additionally, you can click the thing to read what was said. It's not gone, just hidden behind a note that what you're about to click through to may not be worth your time.
Personally, I would have to agree with the community - the comment added nothing to the discussion, and wasn't worth reading.
Re: Re: Re: Re: Re: Re: Just Natural
This has nothing to do with the usage of social networks, adds no insight or wit to the discussion, and is part of a tired and reheated extended litany by the same tosser, who shows up every time social networks are mentioned, to fart out the words "natural person".
Not only does this fail to add to the discussion, it actively distracts from it, reduces the value of participation for any regular who's sick and tired of it, and risks misinforming any casual reader who doesn't know any better, requiring active intervention, whether by replying or hiding the comment, on the part of the community that is invested in keeping the comments section (admittedly, in their judgment) valuable.
Name-dropping the widely maligned Citizens United decision isn't helping his case either, because this isn't a CU hate blog, and even if it were, the article under discussion at the moment is about a different subject. OP might as well offer to sell cheap viagra for all that it has contributed to the discussion.
It is in light of those self-evident truths that we decide, unapologetically, and openly, that OP gets the hose again.
Re: Re: Addictive... mind control
Do you want to watch your server stand with their phone in one hand and your meal in the other, and delay your enjoyment of the burger?
Re: Re:
You know, it isn’t really censorship if you can still view the flagged comment with a simple click. It’s more of a warning, if anything. Oh, and, uh, you might want to reconsider sticking up for a trollish ad hominem attack that offers nothing to the ongoing debate about the issues discussed in the article.
Re: Re:
Says the guy who's spent two posts whining about downvotes.
Re: Duh
Seems like the same could be said about commenting on Techdirt, but here you (we) are.
People's thoughts and behavior, insofar as they are rational and/or consistent, can be predicted to a certain degree. But that doesn't make those thoughts any less their own.
To deny us authorship of our own thoughts, to suggest that thoughts and emotions like outrage are just things that "happen to us," is to deny the very existence of free will. Denying humanity itself. But we couldn't contemplate the existence of free will if we didn't have it. Such a line of thought proves itself wrong.
Re: Re: Re:
Your mind seems to focus on the former, without much consideration of the latter... I have the opposite issue: it doesn't come off as trollish at all to me, though I should recognize that aspect, or at least the perception of it; rather it seems an honest observation befitting nearly ANY modern news source. Not due to 'fake news', just human nature, social grace, and the hyper-consolidated/monopolized state of the world. It's difficult for people to see a truth when their success depends on not seeing it (true for all).
Is it fair in that sense? Certainly not; but I'd stop far short of calling it an attack. It's an observation outside the standard frames of reference. If things are ever to change, people expressing these sorts of viewpoints will necessarily be part of it. No one's going to start telling people the sky is blue until a significant number have removed the rose-colored glasses.
I'd prefer if TD were much more realistic about some of these mega-corps and the catastrophic damage they're doing and have done to society; but I'd be the first to admit I'd probably be a shit journalist who wouldn't last long. There's probably good reason for everything TD does, and the proof is in the pudding; exceedingly few sites have kept me reading for so long, and delivered so much quality content.
...also, agreed on the non-censorship technicality. I don't run scripts, so it's a bit more intensive to read hidden comments than just 'clicking': I have to remove the CSS style, and then I lose formatting... but anyway, you're right on that, and I'll try to be more accurate in future.
Re: Re: Re:
As I stated, I'm about 90% in agreement with the moderation decisions here; that's remarkably high. I'm fine with overtly dickish people having their posts hidden, even if they've got a few points worth considering; we should all try to be decent to each other, and failing that, try harder.
Re: Criticizing the essay
I wish you a good recovery from your coma.
Re:
https://en.wikipedia.org/wiki/Determinism
Not having 'free will' would in no way change our perception of having it, or the personal meaning of our choices; those things would all be part of the deterministic process: 'fate'. The important thing to realize is that it would take time travel to prove or disprove, and as such it's not much use beyond thought experiments...
I'm with you on the 'free will' side though; it seems ridiculous to even suggest that the world could be so consistent. These ideas are vestiges of Bronze Age mythology. The multiverse is a much more fun thought experiment.
Re: Re: Duh
Thanks for the belly laugh and humble inclination.
Re: Re: Re: Re:
The statement that started off this particular reply chain claims Techdirt does something that it does not (at least to my knowledge), then offers precisely no citations, details, or references to back up the claim. So yes, it is the first one.
If you want to make that argument, you will need to back it up with proof, or else you have no argument.
It is an ad hominem attack. Even Graham’s Hierarchy of Disagreement agrees with me.
No, it is an attack on Techdirt's credibility that does nothing to outline or prove why Techdirt has no credibility on this issue. It offers nothing of substance and tries only to undermine the author and their article by making a bullshit claim.
Then buy Techdirt and dictate its editorial direction yourself. Until then, feel free to criticize Techdirt, but do not expect the people in charge to tweak the site for your personal satisfaction.
And that argument aside, Techdirt can only report on so many stories in a day, and if the writers spent all day talking doom-and-gloom about Google and Twitter, they would lack the time to write about other major stories. (Besides, you can go elsewhere for doom-and-gloom talk about the tech giants.)
At least you get points for honesty.
Re: Criticizing the essay
He used the phrase "moral panics."
(I seem to recall he used "moral panic" in Part 1.)