The Internet Giant's Dilemma: Preventing Suicide Is Good; Invading People's Private Lives... Not So Much

from the you-make-the-call dept

We've talked a lot in the past about the impossibility of doing content moderation well at scale, but it's sometimes difficult for people to fathom just what we mean by "impossible," with them often assuming -- incorrectly -- that we're just saying it's difficult to do well. But it goes way beyond that. The point is that no matter what choices are made, it will lead to some seriously negative outcomes. And that includes doing no moderation at all. In short there are serious trade-offs to every single choice.

Probably without meaning to, the NY Times recently had a pretty good article somewhat exploring this issue in looking at what Facebook is trying to do to prevent suicides. We had actually touched on this subject a year ago, when there were reports that Facebook might stop trying to prevent suicides, as it had the potential to violate the GDPR.

However, as the NY Times article makes clear, Facebook really is in a damned if you do, damned if you don't position on this. As the Times points out, Facebook "ramped up" its efforts to prevent suicides after a few people streamed their suicides live on Facebook. Of course, what that underplays significantly is how much crap Facebook got because these suicides were appearing on its platform. Tabloids, like the Sun in the UK, ran entire lists of people who died while streaming on Facebook and demanded to know "what Mark Zuckerberg will do" to respond. When the NY Post wrote about one man whose suicide was streamed online... it also asked for a comment from Facebook (I'm curious if reporters ask Ford for a comment when someone commits suicide by leaving their car engine on in a garage?). Then there were the various studies, which the press used to suggest social media leads to suicides (even if that's not what the studies actually said). Or there were the articles that merely "asked the question" of whether or not social media "is to blame" for suicides. If every new study leads to reports asking if social media is to blame for suicides, and every story about a suicide streamed online demands comment from Facebook, the company is clearly put under pressure to "do something."

And that "do something" has been to hire a ton of people and point its AI chops at trying to spot people who are potentially suicidal, and then trying to do something about it. But, of course, as the NY Times piece notes, that decision is also fraught with all sorts of huge challenges:

But other mental health experts said Facebook’s calls to the police could also cause harm — such as unintentionally precipitating suicide, compelling nonsuicidal people to undergo psychiatric evaluations, or prompting arrests or shootings.

And, they said, it is unclear whether the company’s approach is accurate, effective or safe. Facebook said that, for privacy reasons, it did not track the outcomes of its calls to the police. And it has not disclosed exactly how its reviewers decide whether to call emergency responders. Facebook, critics said, has assumed the authority of a public health agency while protecting its process as if it were a corporate secret.

And... that's also true and also problematic. As with so many things, context is key. We've seen how in some cases, police respond to calls of possible suicidal ideation by showing up with guns drawn, or even helping the process along. And yet, how is Facebook supposed to know -- even if someone is suicidal -- whether or not it's appropriate to call the police in that particular circumstance (this would be helped a lot if the police didn't respond to so many things by shooting people, but... that's a tangent).
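To make the shape of that trade-off concrete, here is a deliberately toy sketch of the kind of triage pipeline described above, in Python. Facebook has not disclosed how its real system works, so every keyword, weight, and threshold below is invented purely for illustration. The point is only structural: the automation ends at a risk score and a routing decision, and the hard judgment call the NY Times piece worries about (whether to contact police) still lands on a human after this step.

```python
# Illustrative sketch only -- Facebook has not disclosed its actual system.
# All phrases, weights, and thresholds here are invented for the example.

RISK_PHRASES = {
    "goodbye everyone": 0.4,
    "can't go on": 0.5,
    "end it all": 0.6,
}

def risk_score(post: str) -> float:
    """Sum the weights of any risk phrases found in the post, capped at 1.0."""
    text = post.lower()
    score = sum(w for phrase, w in RISK_PHRASES.items() if phrase in text)
    return min(score, 1.0)

def triage(post: str) -> str:
    """Route a post: take no action, queue for a human reviewer, or escalate.

    Even "escalate" only means a trained human decides whether to contact
    emergency responders -- the contested judgment calls happen after this
    function returns, not inside it.
    """
    score = risk_score(post)
    if score >= 0.9:
        return "escalate"      # human reviewer considers calling responders
    if score >= 0.4:
        return "human_review"  # queued for a trained reviewer
    return "no_action"

if __name__ == "__main__":
    print(triage("posting my vacation photos"))       # no_action
    print(triage("goodbye everyone, I can't go on"))  # escalate
```

Even in this toy form, the dilemma is visible: the thresholds encode a choice between false negatives (missing someone at risk) and false positives (sending police to someone who isn't), and no setting avoids both.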

The concerns in the NY Times piece are perfectly on point. We should be concerned when a large company is suddenly thrust into the role of being a public health agency. But, at the same time, we should recognize that this is exactly what tons of people were demanding when they were blaming Facebook for any suicides that were announced/streamed on its platform. And, at the same time, if Facebook actually can help prevent a suicide, hopefully most people recognize that's a good thing.

The end result here is that there aren't any easy answers -- and there are massive (life-altering) trade-offs involved in each of these decisions or non-decisions. Facebook could continue to do nothing, and then lots of people (and reporters and politicians) would certainly scream about how it's enabling suicides and not caring about the lives of people at risk. Or, it can do what it is doing and try to spot suicidal ideation on its platform, and reach out to officials to try to get help to the right place... and receive criticism for taking on a public health role as a private company.

“While our efforts are not perfect, we have decided to err on the side of providing people who need help with resources as soon as possible,” Emily Cain, a Facebook spokeswoman, said in a statement.

The article also has details of a bunch of attempts by Facebook to alert police to suicide attempts streaming on its platform with fairly mixed results. Sometimes the police were able to prevent it, and in other cases, they arrived too late. Oh, and for what it's worth, the article does note in an aside that Facebook does not provide this service in the EU... thanks to the GDPR.

In the end, this really does demonstrate one aspect of the damned if you do, damned if you don't situation that Facebook and other platforms are put into on a wide range of issues. If users do something bad via your platform, people immediately want to blame the platform for it and demand "action." But deciding what kind of "action" to take then leads to all sorts of other questions and huge trade-offs, leading to more criticism (sometimes from the same people). This is why expecting any platform to magically "stop all bad stuff" is a fool's errand that will only create more problems. We should recognize that these are nearly impossible challenges. Yes, everyone should work to improve the overall results, but expecting perfection is silly because there is no perfection and every choice will have some negative consequences. Understanding what the trade-offs actually are and being able to discuss them openly without being shouted down would be helpful.


Filed Under: content moderation, dilemma, privacy, public health, social media, suicide, suicide prevention
Companies: facebook


Reader Comments



  • Stephen T. Stone (profile), 3 Jan 2019 @ 10:52am

    expecting perfection is silly because there is no perfection

    Tell that to the lawmakers who want “secure backdoored encryption”.


  • This comment has been flagged by the community.
    Henry F Choke, 3 Jan 2019 @ 10:54am

    Well, they're firm on silencing political opponents.

    You're as ever focused on anomalies, not the everyday problems of mega-corporation "platforms" illegally silencing people who are entirely within common law terms for expressing political views.

    This is another of your "gee it's tough to do right" arguments intended to support globalist corporations in their drive for controlling all speech, by exampling a very minor part of it that raises troubling emotions. It's sheer ploy.

    Facebook really is in a damned if you do, damned if you don't position

    I agree that Facebook should be damned. And broken up too. There's no "must" to allowing corporations have such overwhelming effect, amplifying suicides to the whole world, and we'll all better off if cut it down to size with anti-trust and steeply progressive income tax rates. -- And not allow tax havens as Google is today in the news for.

    • Stephen T. Stone (profile), 3 Jan 2019 @ 10:58am

      G.F.Y.

    • Nathan F (profile), 3 Jan 2019 @ 11:10am

      Re: Well, they're firm on silencing political opponents.

      You're as ever focused on anomalies, not the everyday problems of mega-corporation "platforms" illegally silencing people who are entirely within common law terms for expressing political views.

      If this was GovernmentBook, owned and operated by the US Government, then yes. It would be illegal to silence people for expressing their political views. Facebook however is a privately owned company and you may use their product only in a manner that they have written rules for. If you violate their rules they are perfectly within their rights to revoke your access, even if that rule has something to do with political views.

      Please remember that the First Amendment says Congress shall make no law, as in the government. It says nothing about a private corporation making up rules regarding it.

      • Anonymous Coward, 3 Jan 2019 @ 12:16pm

        Re: Re: Well, they're firm on silencing political opponents.

        You really should carefully study the Pruneyard decision before insisting that sites like FB are free to do as they please concerning speech by others. The law is much more nuanced.

        • Anonymous Coward, 3 Jan 2019 @ 12:24pm

          Re: Re: Re: Well, they're firm on silencing political opponents.

          But there is precedent on the flip side of the coin as well -- check out CBS v. DNC sometime....

        • Stephen T. Stone (profile), 3 Jan 2019 @ 1:49pm

          By that token, you should study the court decision that allows the government to tell Facebook what legally-protected speech it absolutely must host “or else”. I believe it was Facebook v. Michael ThisCaseDoesn’tExist.

      • Anonymous Coward, 3 Jan 2019 @ 6:45pm

        Re: Re: Well, they're firm on silencing political opponents.

        Great overview of how MasterCard Visa PayPal et al silence popular opposition to elitist policies. Techdirt of course has little interest in this elephant.

      • Anonymous Coward, 3 Jan 2019 @ 6:47pm

        Re: Re: Well, they're firm on silencing political opponents.

        Link to said overview
        https://youtu.be/hOzNj-nfKBE

      • Anonymous Coward, 4 Jan 2019 @ 2:51am

        Re: Re: Well, they're firm on silencing political opponents.

        ...but then the telephone companies can pull your service based on what you want to talk about since they are also a private corporation.

        The Left doesn't care about corporate rights any more than they care about free speech. It's all instrumental for them. Whatever hurts people on their enemies list is what their disposable principles are.

        • Anonymous Coward, 4 Jan 2019 @ 1:41pm

          Re: Re: Re: Well, they're firm on silencing political opponents.

          Replying three times to the same post makes you look crazier than you already are bro.

    • Anonymous Coward, 3 Jan 2019 @ 11:13am

      Re: Well, they're firm on silencing political opponents.

      cut it down to size with anti-trust and steeply progressive income tax rates

      There are smart people who make this argument clearly and convincingly, mostly with regards to corporations much larger and more immediately dangerous than Facebook or Google.

      Please stop making their job harder.

    • Gary (profile), 3 Jan 2019 @ 11:15am

      Re: Well, they're firm on TROLLS

      If only you were interested in the articles here.

      Tell us again why you don't have your own blog for *your* opinions on the "Real" topics we should be discussing? Seriously.

    • Matthew Cline (profile), 3 Jan 2019 @ 12:45pm

      Re: Well, they're firm on silencing political opponents.

      I agree that Facebook should be damned. And broken up too.

      Broken up how? Create some smaller companies, and randomly distribute the users amongst them?

      ... amplifying suicides to the whole world, ...

      Are you implying that anti-trust would have the effect of reducing the size of the audience of any individual user, and also that this would be a good thing rather than an unfortunate side effect?

      • Mason Wheeler (profile), 3 Jan 2019 @ 1:15pm

        Re: Re: Well, they're firm on silencing political opponents.

        Are you implying that anti-trust would have the effect of reducing the size of the audience of any individual user,

        That should be obvious, yes.

        and also that this would be a good thing rather than an unfortunate side effect?

        In most cases, yes. In the case of suicides, definitely. (Especially in the cases where not having an audience causes the person to not end up killing themselves in the first place!)

        • Matthew Cline (profile), 3 Jan 2019 @ 1:23pm

          Re: Re: Re: Well, they're firm on silencing political opponents.

          So, what, make laws that limit the membership size of social sites? If a person wants to gain an Internet audience larger than that, they'd have to create their own private site and grow its audience on their own?

          And if you put some limit on the size of social sites, would that apply to sites like Wikipedia?

    • Mike Masnick (profile), 3 Jan 2019 @ 3:54pm

      Re: Well, they're firm on silencing political opponents.

      Well, it's a new year, and so I'll try something different. Despite all evidence to the contrary, let's assume you're seriously this confused and I'll respond to your points.

      You're as ever focused on anomalies

      Can you explain to me what is an "anomaly" in a program that is regularly reporting possible suicide risks and is hiring thousands of people to monitor such? Doesn't sound like an anomaly.

      not the everyday problems of mega-corporation "platforms" illegally silencing people who are entirely within common law terms for expressing political views.

      This is not what "common law" means. Common law is the law as determined by the courts -- case law is another way of putting it. And case law... says the exact opposite of what you do (as does written law). It is not illegal for a platform to deny access to anyone (unless on a very narrow set of protected classes -- and "expressing political views" is not one of them.) If you have a cite to an actual ruling, we could discuss the specifics, but wild confused generalizations claiming it is illegal to remove content from a platform is not the law -- neither in regulations nor in "common law."

      Of course this has been explained to you dozens of times and you have yet to respond to the fact that your analysis is literally wrong.

      This is another of your "gee it's tough to do right" arguments

      No. As stated in the post that you clearly did not actually read, my argument is that idiots who say I'm saying "gee it's tough to do right" are the ones misrepresenting things. I'm not saying it's tough. I'm saying it's literally impossible.

      intended to support globalist corporations in their drive for controlling all speech

      If you don't want "globalist corporations to control all speech" you support CDA 230. Without it, those platforms would be responsible for policing all that speech. And yet, you don't seem to support CDA 230. As with your analysis of "common law," your legal analysis is not just faulty, it's backwards.

      I agree that Facebook should be damned. And broken up too.

      You are not alone in that viewpoint. But what no one who supports that position has done is presented a credible, reasonable plan for what that means. Personally, I'd like to see Facebook (and Google and anyone else) flop through competition from more distributed services. But a general claim of "break them up" is meaningless without a plan. Do you break them up by saying only certain people can connect with others? That takes away the network effects that people value. Do you break them up by separating out other parts of their business (Instagram/WhatsApp)? That might work, but... wouldn't solve any of the concerns people are raising.

      So if you have a serious plan that is anything but "BAH, FACEBOOK BAD, WE SMASH FACEBOOK" it is difficult to take you seriously.

      we'll all better off if cut it down to size with anti-trust and steeply progressive income tax rates

      That's one approach, though it's bizarre given that it comes from you, who regularly spouts Donald Trump talking points. He's, uh, not a supporter of steeply progressive income tax rates.

      And not allow tax havens as Google is today in the news for.

      Sure. I'm all for that as well. Won't have much of an impact though on the issue at hand. So, you have a good suggestion for a minor fix to a side problem that has nothing to do with the issue we're talking about in the post, and the rest of your comment is mostly filled with nonsense and bullshit.

      It's no wonder people keep telling you to stop trolling.

      • Anonymous Coward, 3 Jan 2019 @ 4:48pm

        Re: Re: Well, they're firm on silencing political opponents.

        Just as well out_of_the_blue's heroes at ICE are in the copyright enforcement business now, because he needs some ice for the sick burn he just got. Owww!

  • Anonymous Coward, 3 Jan 2019 @ 10:54am

    In short there are serious trade-offs to every single choice.

    ::Chidi dies again::

  • Uriel-238 (profile), 3 Jan 2019 @ 11:13am

    The problem with invading private lives...

    ...is that it's too tempting to use the information so gained in unethical ways, from telling companies that a private person might like their products to distributing their nudes (and affairs) through internet gossip channels.

    The original idea behind Google was to create a reservoir of private data that would never be looked at directly, but could be used for statistical analysis. Sadly, between state and market forces, they were tempted to break their own rules.

    If it were possible to create a system in which data invasion could be handled ethically, there are plenty of medical, social and state interests that would be facilitated by such information.

    The problem is that it may be as much of a moral trap as appointing someone dictator-for-life. It's a level of power very hard not to abuse.

  • Anonymous Coward, 3 Jan 2019 @ 11:15am

    Thin end of the wedge

    If it becomes acceptable that social media sites monitor for suicide, how long before they are expected to monitor for signs of criminal activity?

    • Gary (profile), 3 Jan 2019 @ 11:18am

      Re: Thin end of the wedge

      They already are - it's called Copyright. And Blue sincerely supports monitoring us to detect the slightest hint of copyright infraction - because it only hurts pirates, eh?

  • Mason Wheeler (profile), 3 Jan 2019 @ 11:24am

    Two thoughts come to mind amid all this:

    1. People are streaming suicides on Facebook because of its immense reach. If Facebook wasn't so enormous, capable of broadcasting to so many people, they almost certainly wouldn't do it. (After all, you never heard of people broadcasting suicides on the Internet before Facebook, now did you?) It's being done by people who want to do something shocking to get attention. (The fact that it's a suicide doesn't contradict this point, however irrational it may seem, as it's generally agreed upon that people don't take their own lives while in their right minds.)
    2. The very existence of this article proves that this tactic is working. They're getting lots of attention over it!

    In light of this, consider the start of the article:

    We've talked a lot in the past about the impossibility of doing content moderation well at scale, but it's sometimes difficult for people to fathom just what we mean by "impossible," with them often assuming -- incorrectly -- that we're just saying it's difficult to do well. But it goes way beyond that. The point is that no matter what choices are made, it will lead to some seriously negative outcomes. And that includes doing no moderation at all. In short there are serious trade-offs to every single choice.

    When the cause of the problem is Facebook's enormous scale, and the reason it's impossible for them to deal with the problem effectively is that very same scale, then the conclusion is obvious.

    At this point we've seen enough serious scale-related problems that it's worth taking a serious look at the notion that "too big to succeed is too big to exist."

    • Gary (profile), 3 Jan 2019 @ 11:40am

      Re:

      i can't see this as a compelling argument to shut down facebook.

      Around 40,000 people in the US died in cars last year. There are too many cars on the road to make them completely safe. But the cars are a direct cause of the deaths. If those cars weren't there, every single one of them would have lived.

      Facebook is not the immediate cause of death in suicides. Perhaps some of them wouldn't have killed themselves if they couldn't broadcast it live. But the vast majority of suicides are done privately. Why is the existence of Facebook a problem here? Should they disable cams?

      live.me has been used to livestream deaths. Should they be shut down as well? They certainly aren't a big company. Should they have to monitor their users' streams to prevent suicide?

      Are you saying that any service that is too big to monitor all live events should be banned? (And conversely, that all live events should be pre-monitored?)

      • Mason Wheeler (profile), 3 Jan 2019 @ 2:43pm

        Re: Re:

        But the cars are a direct cause of the deaths.

        No, generally speaking cars are not a direct cause of the deaths. Virtually every car on the road today is ridiculously safe; we're not living in the age of the Pinto anymore. In almost every case, the direct cause of the death was a human being doing something stupid, either driving recklessly, driving while intoxicated, or (in some rare cases) someone who was not driving who carelessly stepped out into the path of a moving vehicle that was close enough that the driver didn't have time to react.

        Are you saying that any service that is too big to [strawman strawman strawman]?

        No, I'm not recommending any specific policies. I'm saying that this is a principle that is worthy of serious consideration in light of past and current experience.

        • Anonymous Coward, 3 Jan 2019 @ 5:43pm

          Re: Re: Re:

          Doesn't "serious consideration" mean discussion about its specifics, metrics, methods, implications, related policies, etc.?

        • Ninja (profile), 4 Jan 2019 @ 4:55am

          Re: Re: Re:

          "In almost every case, the direct cause of the death was a human being doing something stupid,"

          In all cases of suicide streamed in FB it's a human with a psychological and/or psychiatric problem. You are contradicting yourself. Instead of blaming FB why don't we look at how well mental care is faring?

          And it's amusing how worried you are about Facebook when it's already showing clear signs of going Orkut.

          I do agree that we could make it *easier* for new entrants (ie: less taxes on them and regulating data relocating) and incentivize decentralized services. But regulate how big a service may get? That's a no-no.

          • Mason Wheeler (profile), 4 Jan 2019 @ 7:23am

            Re: Re: Re: Re:

            In all cases of suicide streamed in FB it's a human with a psychological and/or psychiatric problem.

            Yes. I acknowledged this. I also pointed out that in at least some of the cases, it's attention-seeking behavior that would not happen if there was not a way to get an audience through a giant social network.

            You are contradicting yourself.

            I'm not contradicting myself at all; you're not reading what I'm actually saying.

            Instead of blaming FB why don't we look at how well mental care is faring?

            Because for various societal reasons which are beyond the scope of this discussion, we've made it very easy for someone with severe psychological problems to not get treatment, so how much good would that actually do?

            • Leigh Beadon (profile), 4 Jan 2019 @ 9:59am

              Re: Re: Re: Re: Re:

              in at least some of the cases, it's attention-seeking behavior that would not happen if there was not a way to get an audience through a giant social network

              This is also true of some cases of suicide by jumping off a roof, which wouldn't happen if we didn't allow such tall buildings in such visible public places.

              • Wendy Cockcroft (profile), 7 Jan 2019 @ 7:18am

                Re: Re: Re: Re: Re: Re:

                People who seek attention while committing suicide are usually registering a protest. At heart, then, they don't want to die, they want the thing that makes them not want to be alive any more to go away. You may find that the attention-seeking starts well before the self-destruction. Early intervention would be the way forward because it's usually possible to intervene before it gets to the "You really don't care if I die right in front of you" stage.

    • btr1701 (profile), 3 Jan 2019 @ 11:44am

      Re:

      > When the cause of the problem is Facebook's enormous
      > scale, and the reason it's impossible for them to deal
      > with the problem effectively is that very same scale,
      > then the conclusion is obvious.


      > At this point we've seen enough serious scale-related
      > problems that it's worth taking a serious look at the
      > notion that "too big to succeed is too big to exist."

      So what's your solution? To say that private citizens lose their right to free expression the moment their voice becomes so loud everyone can hear it?

      Or, conversely, that your right to free speech only exists so long as your voice is so weak no one of consequence will hear it and it will affect nothing?

    • Anonymous Coward, 3 Jan 2019 @ 11:46am

      Re:

      At this point we've seen enough serious scale-related problems that it's worth taking a serious look at the notion that "too big to succeed is too big to exist."

      Have you ever stopped to think that the good side of the big social media sites is that they enable a strong unifying force across humanity, and have started the bumpy ride to a truly peaceful world?

      It is easy to see the minority of abusive uses made by a minority of people on these sites, as that is newsworthy, while ignoring the strong international communities built up around common interests, as those are not newsworthy. Making decisions based on what makes the news is usually a bad idea, as it means trying to control the majority because of the actions of a minority.

      • Mason Wheeler (profile), 3 Jan 2019 @ 11:55am

        Re: Re:

        Have you ever stopped to think that the good side of the big social media sites is that they enable a strong unifying force across humanity, and have started the bumpy ride to a truly peaceful world?

        I've thought about it. Then I've looked at the real world and seen that this is simply not the case. Every forum beyond a certain number of regular users (I'm not sure, but I suspect this number is somewhere around 150; look up the concept of the "monkeysphere" if you want to know why) seems to inevitably degenerate into a wretched hive of scum and trollery within a decade, despite the best intentions of any number of stakeholders to try to prevent it from happening.

        • Anonymous Coward, 3 Jan 2019 @ 12:07pm

          Re: Re: Re:

          Here is one counterexample: the Facebook Machinist group, which has 114,254 members as I post this; there are others.

          • Mason Wheeler (profile), 3 Jan 2019 @ 1:17pm

            Re: Re: Re: Re:

            1) I didn't say "members", I said "active users." The two figures are almost certainly very different from one another, probably by at least two orders of magnitude in this case. 2) How long have they been around?

            If you're going to try to refute what I said, please try to refute what I actually said instead of some strawman that sounds vaguely similar to it.

            • Anonymous Coward, 3 Jan 2019 @ 2:33pm

              Re: Re: Re: Re: Re:

              Well, two orders of magnitude would still mean over 1,000 active users, so about an order of magnitude above your arbitrary size, and I suspect a larger active membership than that because it is a self-help group for machinists.

              I also follow various YouTube channels, where videos can gain hundreds of comments from tens to hundreds of subscribers and still remain civil. Indeed, one of the "complaints" as a channel grows is that they cannot keep up with the comments, not that the comments section has become a cesspit full of trolls.

        • btr1701 (profile), 3 Jan 2019 @ 2:07pm

          Re: Re: Re:

          > seems to inevitably degenerate into a wretched hive of
          > scum and trollery within a decade

          Even if true, so what? If people want to shitpost to each other, let them have at it. What's it to you?

          The greater evil would be for the government to just step in with its heavy boot and stomp it out.


    • identicon
      Christenson, 3 Jan 2019 @ 11:51am

      Re: Too Big....

      Mike:
I love how the article expands on, and uses a different example of, a problem I presented earlier.

      We have sufficient power and information that choices and tradeoffs *have* to be made, and there will be negative consequences for *any* choice. This is true with basically *any* collective choice on a large scale, including the physical environment, where global warming might just kill us all.

      Mason:
      Too Big to succeed is too big to exist.
      Too big to succeed: It happens when the scale overwhelms the context.

      Too big to exist? It's the point of anti-monopoly law -- the prevention of the concentration of power into too few hands. It seems to be what Blue and all the Trolls are actually getting at with their "common law" complaints, and we use such concepts as common carriers to try to prevent such inevitably-abused concentrations. Facebook is powerful enough that it *ought* to be at least a common carrier, just as the network infrastructure *ought* to be a common carrier, aka Network Neutrality. Not that breaking up huge internet companies would be sufficient, mind you.


    • icon
      Wendy Cockcroft (profile), 7 Jan 2019 @ 7:15am

      Re:

      RE: streaming suicides

https://www.insideedition.com/headlines/25564-the-dark-disturbing-trend-of-teens-live-streaming-suicide-and-how-it-can-be-stopped

      People don't take their own lives for trivial reasons.

      Facebook's reach doesn't cause people to kill themselves.


  • identicon
    Anonymous Coward, 3 Jan 2019 @ 11:59am

    GDPR is easy

    Oh, and for what it's worth, the article does note in an aside that Facebook does not provide this service in the EU... thanks to the GDPR.

Facebook could easily offer this service in Europe. All it takes is to inform people about it and ask them to check the box if they are interested. Just like the other checkboxes for, say, advertisement tracking, shadow profile tracking, emotional manipulation, and voting suggestions.


  • identicon
    Anonymous Coward, 3 Jan 2019 @ 12:27pm

    Easy fix

    If a private company should not wield this kind of power, then the fix is easy. Nationalize Facebook, so that it's an actual arm of the government. Then we can treat it as a public health agency. What could possibly go wrong?


    • identicon
      Anonymous Coward, 3 Jan 2019 @ 1:31pm

      Re: Easy fix

      One could just find them a state actor to achieve the same goal.

      Would a nationalized AOL from 1997, Yahoo from 2001, or Myspace from 2005 still be in that position?

      Every time someone censors people, someone else is there to grab the audience. There's also USENET for those who want unfettered free speech.


  • icon
    ECA (profile), 3 Jan 2019 @ 12:50pm

Demanding perfection... is impossible

"Don't jump or I will shoot" just doesn't work.
Understand that the internet's social environment is like a million pennies dropped into a crowd of people all trying to catch them... how many will hit the floor?

This is as misguided as looking at all the server break-ins and wondering why they can't be prevented. Then you remember that being on the internet is like connecting to 6 billion people, all at once.

A person once asked me how long it would take to count to one million, writing by hand, and I said 3-5 years. He didn't quite believe me. I didn't see him for about a year; when he came back, he said he'd quit counting.


  • icon
    Jeffrey Nonken (profile), 3 Jan 2019 @ 1:00pm

    "'Impossible' means 'difficult'." - people for whom "literally" means "not literally"


  • identicon
    Anonymous Coward, 3 Jan 2019 @ 1:31pm

    Opting out

    NYT wrote "There is no way of opting out [of Facebook's suicide risk scoring system], short of not posting on, or deleting, your Facebook account."

    Do we know that either method would be effective? They could scan posts about you, even if you don't post anything or have an account. The only opt-out confirmed by FB is to be in the EU.


    • icon
      btr1701 (profile), 3 Jan 2019 @ 2:12pm

      Re: Opting out

      > "There is no way of opting out [of Facebook's suicide
      > risk scoring system], short of not posting on, or
      > deleting, your Facebook account."

      I wonder how many people have trolled it just to fuck with the system, making it think they're a suicide risk just so they can put on a show of challenging the cops and asserting their rights when they show up?

      Kinda like what those "Photography is not a Crime" trolls do.


      • icon
        Gwiz (profile), 4 Jan 2019 @ 8:03am

        Re: Re: Opting out

        Kinda like what those "Photography is not a Crime" trolls do.

I find it interesting, knowing that you are law enforcement and have a good working knowledge of Constitutional law, that you refer to people exercising their rights as "trolls". Perhaps that mindset among LEOs is the actual cause of the friction, and not so much the lawful actions of citizens.

         

Most First Amendment auditors use the same tactics that law enforcement uses in auto theft stings. They set up a situation and wait for a LEO to CHOOSE to violate their rights. Just like in a bait car sting, the perpetrator must choose to violate the law or it's considered entrapment. Is it really all that different just because it's a citizen catching a LEO doing something illegal?


  • identicon
    Anonymous Coward, 3 Jan 2019 @ 3:53pm

    Hey Mike,

    Do you know if your partners at Facebook ever looked into how many of the people who were enrolled in this psychological manipulation study without consent ended up killing themselves as a result? Across that sample size, the number is bound to be >0.

https://www.forbes.com/sites/gregorymcneal/2014/06/30/controversy-over-facebook-emotional-manipulation-study-grows-as-timeline-becomes-more-clear/


    • icon
      Mike Masnick (profile), 3 Jan 2019 @ 5:29pm

      Re:

      Do you know if your partners at Facebook

      I'm confused by your reference to Facebook as a "partner." What do you mean by that? We have no relationship with Facebook and never have. I know that you regularly accuse us of being a shill for Facebook, but like all such accusations, it is based on figments of your imagination.

      As for the rest of your comment, huh?


      • identicon
        Anonymous Coward, 3 Jan 2019 @ 5:51pm

        Re: Re:

I don't think you're a shill for Facebook, but you definitely do have a serious blind spot when it comes to them. From your repeated refusal years ago to admit that they did anything wrong with their fraudulent accounting practices in their IPO, to more modern posts where you insist, against all reason and evidence to the contrary, that they aren't actually malicious, but simply "basically good people who ended up in over their heads in problems that got too big too quickly," you have always shown a clear bias towards the best possible interpretation of Facebook's behavior, no matter how little justification there has been for that interpretation.


        • identicon
          Anonymous Coward, 3 Jan 2019 @ 7:47pm

          Re: Re: Re:

          Pretty sure you said the same thing about Keith Lipscomb...


          • identicon
            Anonymous Coward, 3 Jan 2019 @ 7:48pm

            Re: Re: Re: Re:

            Who?


            • identicon
              Anonymous Coward, 4 Jan 2019 @ 1:46am

              Re: Re: Re: Re: Re:

              Malibu Media's defender of copyright. The one who helped them harass old ladies for downloading illegally filmed pornography and topped the records for most copyright suits filed in 2018. Who the company later fired. Him, right? The shining knight of copyright? Who Colette Pelissier said could do no wrong? Fantastic representative of copyright enforcement, isn't he?


              • identicon
                Anonymous Coward, 4 Jan 2019 @ 3:37am

                Re: Re: Re: Re: Re: Re:

                Umm... OK. What's that got to do with Mike's Facebook fixation? Anything at all? Or is this just ad-hom for its own sake?


        • icon
          Mike Masnick (profile), 5 Jan 2019 @ 12:42am

          Re: Re: Re:

From your repeated refusal years ago to admit that they did anything wrong with their fraudulent accounting practices in their IPO, to more modern posts where you insist, against all reason and evidence to the contrary, that they aren't actually malicious, but simply "basically good people who ended up in over their heads in problems that got too big too quickly," you have always shown a clear bias towards the best possible interpretation of Facebook's behavior, no matter how little justification there has been for that interpretation.

          If you go around believing that people who work at companies are simply out to get you in the most malicious way... um... you might be the one who has an issue, not me.

The incentives at Facebook are screwed up -- on that we agree. The management at Facebook is screwed up and bad. But to argue that they are malicious, without any evidence, is utter nonsense, and it is simply not true.

          They're not. They're just bad at their jobs and in way over their heads. It's a real problem and something to be concerned about, but the solutions to the problem "this thing is too big and they're making bad decisions" are very, very different than the solution to "those are evil assholes out to get everyone." You're going to fuck up a lot of important shit if you insist the latter is true when it is not.

          You won't be happy with what you're pushing for. It'll be much worse for everyone in the end.


      • identicon
        Anonymous Coward, 4 Jan 2019 @ 12:18pm

        Re: Re:

        > As for the rest of your comment, huh?

        Facebook conducted a psychological experiment on ~700,000 people without consent. The goal of the experiment was to see if it was possible to manipulate the emotional states of those users by intentionally modifying their feeds to show more positive/negative posts.

        Across that number of people, chances of at least a few being in a state of "teetering on the edge of suicide" are high. Facebook most likely knows whether or not they pushed some of those over the edge with their experiment.


        • icon
          Mike Masnick (profile), 5 Jan 2019 @ 12:44am

          Re: Re: Re:

          I'm aware of that situation.

          I am not at all aware of what point you think you're making with it in regards to this story. It appears you think there's some sort of "gotcha" in that story, but it seems entirely unrelated to the point being made here (and also was discussed to death at the time).

          So unless you care to share something relevant, I will assume you are trolling and move on.


  • identicon
    Anonymous Coward, 3 Jan 2019 @ 5:14pm

In order for Facebook to develop a reliable algorithm for predicting suicide, it would need a historical model training dataset. The dataset would need to contain all available predictors of suicide (possibly things like mentions of "suicide" and associated terms, indicators of prior mental health problems, social isolation, social stigma, emotional distress, etc.) as well as a flag for whether each person did in fact go on to do it. If Facebook is not collecting information on user suicides, then it cannot even begin to develop a demonstrably effective predictive model. Even people assessing user content and making judgements can't be evaluated for accuracy without outcome data.
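The commenter's point (no outcome labels, no trainable or evaluable model) can be sketched in a few lines. This toy example, with invented feature names and made-up data, shows that both the training step and any accuracy check hinge entirely on the per-person outcome flag:

```python
# Toy illustration (pure Python): why outcome labels are required.
# Feature names and records here are hypothetical; real predictors
# and outcomes would have to come from health records, as noted above.

def train_risk_rates(records):
    """records: list of (features: set[str], outcome: bool).
    Returns the observed outcome rate for each feature."""
    counts = {}
    for features, outcome in records:
        for f in features:
            pos, total = counts.get(f, (0, 0))
            counts[f] = (pos + int(outcome), total + 1)
    return {f: pos / total for f, (pos, total) in counts.items()}

def score(features, rates):
    """Average the known per-feature rates; unknown features score 0."""
    if not features:
        return 0.0
    return sum(rates.get(f, 0.0) for f in features) / len(features)

# Strip the `outcome` flag from each record and neither training nor
# evaluation is possible -- exactly the commenter's point.
history = [
    ({"isolation", "distress"}, True),
    ({"distress"}, False),
    ({"isolation"}, False),
    (set(), False),
]
rates = train_risk_rates(history)
print(score({"isolation", "distress"}, rates))  # 0.5
```

The scoring scheme itself is deliberately naive; the structural requirement (labeled outcomes for training and for measuring accuracy) is the same for any serious model.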

    For Facebook to do this in a serious way it would need to collaborate with governments to get hold of actual health records and suicide outcome data. It would also need to collaborate with experts, epidemiologists etc, and coordinate with governments in setting up and evaluating the "intervention" strategies.

    But which governments would trust Facebook with this information? And which governments would be interested in such a project anyway? Health care is not even a citizen right in many countries.

Moreover, it is not really Facebook's business to be doing this. The fact that it is considered possible that they could develop a reliable, automated suicide prediction system is symptomatic of the fact that they are already collecting, and are free to use however they want, far more information than they should be. But that is another discussion...

    Enforcing anti-bullying and harassment rules, and illegal content rules, maybe, but psychographic profiling of users to solve social problems, no. How could that possibly work?


    • identicon
      Christenson, 4 Jan 2019 @ 10:28am

      Re: How could Enforcing anything possibly work?

      In the fevered imagination of Facebook, it works thus:
      Indicators of impending suicide are well known and documented, and a subject of scientific study.

      Accounts with indicators can be detected and flagged.

      In reality:
Computers are really bad at context. Suppose I take someone's post with true indications they are about to kill themselves, and re-post it, along with saying: "Watch out for this!" or "Danger Will Robinson!" or "One more example". Now how does the computer figure out that it isn't *me* who has the problem?

      And what if I joke sarcastically as follows: "time to kill myself; I've lost all the good comment contests at Techdirt!". I have seen plenty of broken *human* sarcasm and joke meters in Techdirt's comments, some involving famous regulars like Stephen T Stone.

And how many links to and quotes from bad behavior, such as harassment, have we seen here on Techdirt? Same problem: context.
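The context problem described above is easy to demonstrate: a naive substring matcher (a hypothetical sketch, not anyone's actual system) flags the re-post and the sarcastic joke just as readily as the genuine cry for help.

```python
# Toy keyword flagger illustrating the context problem.
# The phrase list is a made-up example, not a real detection list.
TRIGGER_PHRASES = ("kill myself", "end it all")

def flag(post: str) -> bool:
    """Flag a post if it contains any trigger phrase (case-insensitive)."""
    text = post.lower()
    return any(phrase in text for phrase in TRIGGER_PHRASES)

original = "I'm going to end it all tonight."
repost = 'Watch out for this! Someone just posted: "I\'m going to end it all tonight."'
sarcasm = ("time to kill myself; I've lost all the good "
           "comment contests at Techdirt!")

# All three trip the filter, though only the first author is at risk.
print([flag(p) for p in (original, repost, sarcasm)])  # [True, True, True]
```

Distinguishing the three requires exactly the contextual judgement (quotation, sarcasm, intent) that keyword matching cannot supply.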


  • identicon
    Anonymous Cowherd, 5 Jan 2019 @ 10:58am

    Do-Somethingism is bad

Just because a lot of people demand something be done doesn't mean that something is good.


    • icon
      Uriel-238 (profile), 5 Jan 2019 @ 11:42am

      "something must be done"

Just because a lot of people demand something be done doesn't mean that something is good.

Case in point: FOSTA/SESTA, which has pushed sex workers back out onto the streets, killed support for trafficked persons, and not slowed down pimping even a jot.


  • icon
    Wendy Cockcroft (profile), 7 Jan 2019 @ 6:14am

    Power and responsibility

I'm a big believer that with great power comes great responsibility, and I think we can all agree that there have been many examples of power being exercised without any responsibility being taken, with horrible results.

    (I'm curious if reporters ask Ford for a comment when someone commits suicide by leaving their car engine on in a garage?)

How hard is it to build a sensor into the car's dashboard that reads the level of carbon monoxide in the car's interior and, if the engine is still running when the safety threshold is exceeded, triggers the engine to switch off? If you can fit a satnav, you can fit a carbon monoxide sensor.
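For what it's worth, the interlock logic being proposed is trivial; the hard parts are the sensor hardware and choosing a threshold. A sketch, with an invented cutoff value (real exposure limits are a matter for safety standards, not this comment):

```python
# Hypothetical CO interlock logic. CO_SHUTOFF_PPM is an invented
# illustration value, not a real safety standard.
CO_SHUTOFF_PPM = 200.0

def should_cut_engine(co_ppm: float, engine_running: bool) -> bool:
    """Cut the engine only if it is running and cabin CO exceeds the threshold."""
    return engine_running and co_ppm >= CO_SHUTOFF_PPM

print(should_cut_engine(350.0, True))   # True: running engine, dangerous cabin CO
print(should_cut_engine(350.0, False))  # False: engine already off, nothing to cut
```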

    RE: Facebook suicides

I'd recommend a pop-up article, triggered by the keywords "suicide" and "kill myself" (and any others that might fit the bill), that provides professional advice on how to distract or delay a suicide attempt, along with information on support services in the suicidal person's area. This would appear on the screens of everyone viewing the feed, and viewers would also be able to alert the local police by pressing an "Alert the police?" call-to-action button. The pop-up could be minimised if it's not necessary. Of course, this relies on viewers caring enough to want to stop the suicide, but it's better than nothing. Thoughts?


    • identicon
      nae such, 17 Jan 2019 @ 8:04am

      Re: Power and responsibility

      regarding car companies including a sensor, i would want to know the numbers of suicides by the car in the garage and compare cost benefit to the public. i wouldn't expect the company to care on this. corporate responsibility is not well known. i suspect the only way to get anything done would be through public action. then we would have to trust our regulators to make a law that was actually worth something. the cost and compare part doesn't sound difficult(^^).

i don't think adding useful info to a feed would be bad, though it could possibly depress and shame the original poster. the "alert the police" button, i think, would find some happy troll abusers though. as the article states, in the states that could be lethal. even in other, saner parts of the globe, if it were abused enough it would likely start to be ignored.

      i suspect corporate responsibility is the key. certainly without incentives many companies could care less. in many instances it appears that the penalty for incompetence, negligence, or outright intentional lapses are not great enough to have an effect. fines that are cheaper to pay than the original offense. oversight that has no teeth. of course that leads down the rabbit hole into political sins.


