Unintended Consequences Of EU's New Internet Privacy Rules: Facebook Won't Use AI To Catch Suicidal Users

from the beware-the-innovations-you-kill dept

We've written a few times about the GDPR -- the EU's General Data Protection Regulation -- which was approved two years ago and is set to take effect on May 25th of this year. There's a lot in it that is good to see -- in large part improving transparency around what companies do with all your data, and giving end users more control over that data. Indeed, we're curious to see how the inevitable lawsuits play out, and whether they will lead companies to be more considerate in how they handle data.

However, we've also noted, repeatedly, our concerns about the wider impact of the GDPR, which appears to go way too far in some areas, with decisions that may have made sense in a vacuum but that could have massive unintended consequences. We've already discussed how the GDPR's codification of the "Right to be Forgotten" is likely to lead to mass censorship in the EU (and possibly around the globe). That fear remains.

But it's also becoming clear that some potentially useful innovation may not be able to work under the GDPR. A recent NY Times article detailing how various big tech companies are preparing for the GDPR has a throwaway paragraph in the middle that highlights an example of this potential overreach. Specifically, Facebook is using AI to try to detect whether someone is planning to harm themselves... but it won't launch that feature in the EU out of fear that it would breach the GDPR as it pertains to "medical" information. Really.

Last November, for instance, the company unveiled a program that uses artificial intelligence to monitor Facebook users for signs of self-harm. But it did not open the program to users in Europe, where the company would have had to ask people for permission to access sensitive health data, including about their mental state.

Now... you can argue that this is actually a good thing. Maybe we don't want a company like Facebook delving into our mental states. You can probably make a strong case for that. But... there's also something to the idea of preventing someone who may harm or kill themselves from doing so. And that's something that feels like it was not considered much by the drafters of the GDPR. How do you balance these kinds of questions, where there are certain innovations that most people probably want, and which could be incredibly helpful (indeed, potentially saving lives), but which don't fit with how the GDPR is designed to "protect" data privacy? Is data protection in this context more important than the life of someone who is suicidal? These are not easy calls, but it's not clear at all that the drafters of the GDPR even took these tradeoff questions into consideration -- and that should worry those of us who are excited about potential innovations to improve our lives, and who worry about what may never see the light of day because of these rules.

That's not to say that companies should be free to do whatever they want. There are, obviously, LOTS of reasons to be concerned and worried about just how much data some large companies are collecting on everyone. But it frequently feels like people are acting as if any data collection is bad and needs to be blocked or stopped, without taking the time to recognize just what kinds of innovations we may lose.



Filed Under: ai, data protection, eu, gdpr, innovation, privacy, self-harm, suicide, tradeoffs
Companies: facebook


Reader Comments



  1. Anonymous Coward, 30 Jan 2018 @ 9:46am

    "Open the program"?

    But it did not open the program to users in Europe

    Is the implication that this is something people have to sign up for? I didn't see anything about that in Facebook's message.


  2. Anonymous Coward, 30 Jan 2018 @ 9:54am

    My attitude

    If you're innovative enough, the GDPR is a hurdle, not a roadblock.

    Specifically in this case, I imagine it can work offline. But I'm sure to have detractors, to which I can only say I can't really argue this one way or the other.


  3. Jordan Chandler, 30 Jan 2018 @ 10:08am

    Responsibility

    It's outside their scope of responsibility.


  4. Anonymous Coward, 30 Jan 2018 @ 10:17am

    "That's not to say that companies should be free to do whatever they want." -- OMG! Mr Corporatism Uber Alles agrees with me! My work here is almost done...

    There he is, obliquely recognizing the rights of "natural" persons under common law, which can't be repeated too often, becoming a rare notion. -- Those terms seem to annoy some ACs (but clearly fanboys) here.

    But you're moaning about a sheerly notional non-loss -- because it hasn't been done, or even been possible, except in the last few years out of the 5,000 or so since the Babylonians invented civilization -- while the vastly larger up-side for 99.9% of persons is that Facebook won't be monitoring for those who aren't slavish nebbishes to report them to gov't for "re-education".


  5. Anonymous Coward, 30 Jan 2018 @ 10:21am

    Re: My attitude

    >>> "I can only say I can't really argue this one way or the other." -- Sheesh. -- Would you say you MIGHT be able to reach a decision within a year? Or do you need help before taking a tentative position on whether that's possible?


  6. Dan, 30 Jan 2018 @ 10:29am

    "Save the children"

    At first I thought, yeah, bad consequence. But as I read the article, I kept hearing the "save the children" mantra. So I changed my mind. If Facebook is the only thing that catches a suicidal person, I doubt there is much that should be done. I mean, what is Facebook going to do? Perform an automated suicide swat on someone's house?


  7. Anonymous Coward, 30 Jan 2018 @ 10:42am

    Re: Re: My attitude

    I've got a position.

    It's just that I don't necessarily know how to argue it. So I guess it's a little tentative until I find out about some mathematical/computer science proof.


  8. discordian_eris (profile), 30 Jan 2018 @ 10:43am

    Not Facebook's Problem

    Facebook has zero reason or responsibility to try and prevent suicide, either in the US or in the EU. In the EU, articles 3 and 4 of the Charter of Fundamental Rights make it crystal clear.

    Article 3 - Right to integrity of the person
    1. Everyone has the right to respect for his or her physical and mental integrity.
    2. In the fields of medicine and biology, the following must be respected in particular:
    - the free and informed consent of the person concerned, according to the procedures laid down by law,
    - the prohibition of eugenic practices, in particular those aiming at the selection of persons,
    - the prohibition on making the human body and its parts as such a source of financial gain

    Article 4 - Prohibition of torture and inhuman or degrading treatment or punishment No one shall be subjected to torture or to inhuman or degrading treatment or punishment.

    Suicide is a personal decision and the state has no business interfering with it. Both the US and EU make it clear that informed consent is required for any and all medical procedures and interventions. Forcing people to take medications and/or imprisoning them in psych hospitals is a gross violation of human rights. It actually increases the risk of suicidal behaviours. There are no anti-depressants that are safe for anyone under the age of 25, and all SSRIs increase both suicidal ideation and suicide attempts. Since that is the main way that suicidal people are treated, it is counter-productive and harmful. Neither Facebook nor the state has the right to try and force anyone to be 'treated' for having suicidal thoughts. Just because a person is 'broken' doesn't mean they have no rights. And it sure as hell isn't the state's responsibility to force someone to live who chooses not to. Facebook needs to stick to serving ads and stay the fuck out of people's business.


  9. discordian_eris (profile), 30 Jan 2018 @ 10:49am

    Re: "Save the children"

    Yes, that is exactly what they will do. The cops will show up and force an involuntary commitment on anyone they feel is a risk to themselves. Then they will have the joy of being forced to take drugs that actually increase the risk of suicide, especially in anyone under the age of 25. This is why ALL SSRIs carry a black box warning about the risk to anyone under 25.


  10. Stan (profile), 30 Jan 2018 @ 10:52am

    A New GDPR Right

    Thanx, GDPR, for codifying the "Right to Self-Termination Without Busybodies Asking You to Reconsider."


  11. Anonymous Coward, 30 Jan 2018 @ 11:12am

    I knew we could count on Masnick to try to spin EU privacy protections as "evil" in some way.

    Poor, poor little Facebook/Google. How ever will they survive?


  12. JarHead, 30 Jan 2018 @ 11:17am

    Now... you can argue that this is actually a good thing. Maybe we don't want a company like Facebook delving into our mental states.

    I'm of this school of thought. In this particular instance, I'd say the GDPR works as intended, and this is not an unintended consequence. I'm hoping it's one the GDPR drafters intended.

    You can probably make a strong case for that. But... there's also something to the idea of preventing someone who may harm or kill themselves from doing so.

    Legalization for busybodies to shove their moral compass onto others? Thanks, but no.

    Everybody has the right to self-destruct, limited only by that same right of others and the well-being of others. Meaning: you want to commit suicide? Fine, go ahead, as long as you don't injure anyone else. Do it with a knife, then go ahead. Kill yourself with a bomb, then we have a problem.


  13. DannyB (profile), 30 Jan 2018 @ 11:38am

    Re: "Save the children"

    If Facebook is the only thing that catches a suicidal person, perhaps the problem is that they are suicidal precisely because they use Facebook!

    You know how, every time you read any article about Facebook, it seems creepy and makes the hairs on the back of your neck stand up? That's 200 million years of evolution telling you to RUN, DON'T WALK, but RUN away NOW. Except on the Internet.


  14. Rick O'Shea (profile), 30 Jan 2018 @ 11:39am

    unintended consequences is right...

    I can visualize the gun lobby slavering over Facebook Ads Manager questions like:


    Select individuals with:
    ☐ suicidal tendencies
    ☐ low self esteem
    ☐ actualization anxiety


    They may ostensibly be collecting the information to prevent self-harm, but the real question is how widely that information will spread beyond the Chinese walls of the organization. I, for one, wouldn't trust Facebook not to capitalize on such information. Such is the nature of corporate America.


  15. JoeCool (profile), 30 Jan 2018 @ 11:40am

    Re: Responsibility

    Maybe, but if they don't catch it, they'll get the blame. People will call them callous and uncaring. It's one of those damned-if-you-do, damned-if-you-don't situations.


  16. Anonymous Coward, 30 Jan 2018 @ 11:48am

    I've had friends in the US report uncomfortable incidents where they simply mentioned certain keywords related to self-harm, resulting in FB calling the cops on them, effectively swatting them for saying the wrong combination of flagged words in a post.


  17. Anonymous Coward, 30 Jan 2018 @ 11:50am

    Not sure even Facebook has thought this through. Let's see how this can go horribly wrong:

    Ex. 1> Mistaken identity, or arriving at the wrong address. That never happens, right? (http://www.post-gazette.com/local/region/2018/01/03/Meadville-federal-lawsuit-wrong-man-Eugene-Wright-police-injected-drugs-Meadville-Medical-Center/stories/201801030163)

    Ex. 2> Innocent bystanders are never hurt. (http://www.miamiherald.com/news/local/crime/article90905442.html)

    Ex. 3> Cops and the good guys are always on the same page, never a "misunderstanding" between them. (http://www.kansas.com/news/local/crime/article192244734.html)

    I must be missing something, but I'm sure we won't mind a few "broken eggs" for technological progress.


  18. orbitalinsertion (profile), 30 Jan 2018 @ 12:19pm

    Their AI should not be reading my shit, period.

    That's where the problem starts.

    Let FB do a suicide watch? Are you insane? And quite frankly, if they can "do" (for various values of "do") that, then they can: Catch all the bad guys, identify exactly who is dangerous and who is not, identify exactly what is "bad" speech in every jurisdiction, identify exactly what is fake news, etc.

    I'm sorry, but fsck people's "AIs" and their data farming. Don't call it "innovation". It's about as real, useful, and good as the whole fake-ass financial sector, or advertising and marketing.


  19. Anonymous Coward, 30 Jan 2018 @ 12:24pm

    Re: A New GDPR Right

    From what I have read of the rules, most of it is relatively benign and light-touch, to an extent codifying logical solutions to the most egregiously sloppy treatment of data.

    While there are some real backbone challenges in implementing the "right to be forgotten" and the "right to access", the real fear seems to be users actually using those rights!

    Facebook's problem is more correctly tied to the question: to what extent has any user signed up for Facebook Healthcare? And how about patient-doctor confidentiality?

    Research is all well and good, and none of the GDPR actually prevents it. But when breaking with fundamental principles like professional confidentiality (whether legal, medical or otherwise), you had better do it through proper channels and in a worthwhile pursuit. Very few want Equifax-like leaks of such data.


  20. Anonymous Coward, 30 Jan 2018 @ 12:50pm

    Re:

    And we knew we could count on you to shit the thread. Honestly, you'd hold your breath until you passed out if Mike said oxygen was kinda nice to have around.


  21. Anonymous Coward, 30 Jan 2018 @ 1:35pm

    Re:

    "Their AI should not be reading my shit, period."

    "Let FB do a suicide watch? Are you insane?"

    Hear, hear.

    Can you imagine how abusable it is? What happens if FB contacts law enforcement without checking to see if someone is actually in danger? What happens if it turns into some FB equivalent of "swatting"? Depending on how FB's AI handles this, you'll see either hacks of people's accounts or fake accounts set up to do this sort of thing and tie up law enforcement.


  22. Anonymous Coward, 30 Jan 2018 @ 2:38pm

    Re: Re: Responsibility

    There are two reasons why Facebook would be trying to stop people from killing themselves:

    1.) To keep their engagement statistics up.

    2.) Because they realized the psychological experiments they performed on hundreds of thousands of people without consent led some of their users to suicide.


  23. Anonymous Coward, 30 Jan 2018 @ 4:04pm

    The summary makes two basic assumptions:

    1: That Facebook's 'self-harm detection' actually works.
    2: That it will not be abused by Facebook itself, or by third parties.

    On the first point, we have no idea how this prevention system works. We don't know its parameters, what data it collects, how it uses that data, or how it determines "self harm". We don't know its success rate or how it could be abused. It's a black box that automatically decides a case for intervention in someone's life without their consent.

    If you think that Facebook won't be adding "possible mental health issues" to its vast treasure chest of personal data about their users, you're goddamned naive. That's a good enough reason to prevent it. We have no idea how this data might be abused in the future.

    We can assume it will be sold to advertisers, sure, but what about health insurers or employers? What about using it for 'nudge' psychology which we know Facebook has experimented with in the past?

    Social media as a whole is making a big PR push at the minute, because the damage and abuses it can cause are slowly bubbling to the surface, notwithstanding the massive privacy invasions and reckless profiteering. We can expect to see more of this sort of "all watched over by machines of loving grace" stuff in the future. I suspect a lot of it is just fluff.


  24. Rekrul, 30 Jan 2018 @ 4:46pm

    AI shouldn't be predicting anything. That's like arresting someone because an AI predicted that they were going to commit a crime.

    AI has come a long way, but it's still a long way from being reliable. AI can't even reliably tell spam from non-spam in your email, but people want to trust it to reliably predict when a person is thinking of harming themselves?

    What if they make a post about a movie that includes suicide? What if they post a piece of fiction that includes suicide? What if they simply post the wrong words?


  25. Anonymous Coward, 30 Jan 2018 @ 6:01pm

    Re:

    What a surprise, the one in favor of allowing people to demand that their criminal history be wiped off the Internet can't resist showing off his e-boner to everyone else...


  26. Anonymous Coward, 30 Jan 2018 @ 6:32pm (flagged by the community)

    TD is becoming a bad source.

    "Maybe we don't want a company like Facebook delving into our mental states. You can probably make a strong case for that. But..."

    This would be the point where you should have stopped, fleshed out that strong case, and at least used it as a proper counterbalance. Frankly, that strong case alone would be much less negligent reporting than this.

    AI digging through user data will ruin far more lives than suicide- I'd argue strongly that it will lead to far MORE suicides long term... it's an uncomfortable area to argue given recent events (Logan Paul); which is exactly why the NYT has chosen that context to frame this manipulative planted story you've lapped up and regurgitated with gusto. What- you're against Facebook AI mining user data? You must be pro-suicide... Nuance-challenged people desperately trying to justify their FB addictions will love this... The NY Times, hosted by Amazon, both major advertisers on Facebook- it's really not hard to see the incentives here; no stupid conspiracies necessary.

    I can't help but think- does that pay well? Or is it just an exercise in maintaining corporate value by not pissing off potential advertisers or M&A teams? Or are you already hopelessly tangled in the very webs of dark knowledge, parallel construction, extortion and neo-slavery that big data + AI seeks to vastly expand and automate? Maybe that last one's over the top- maybe you just had a rushed, crappy day... but there's an odor I sense here and it doesn't smell right at all; maybe if I point it out you can clean it up. Hope springs eternal.

    You've reported on both the Snowden leaks and the Vault stuff- and then at some point- radio silence on many important topics- like someone had your gonads in a vice and their hand on the crank. Prime example= Intel IME (the ring -3 hardware backdoor that's not a backdoor- because intent, I guess...) got hacked; major news on every respectable tech site- and TD's busy pushing this oblivious (like you forgot the very same stories you reported on already) propaganda narrative about FBI/Apple and phone encryption= subtly leading people to the very incorrect conclusion that their cellphones are secure, while advertising for Apple, and giving the FBI the perfect storm of public commentary against backdooring encryption- EXACTLY what they need to push for increased access by established means that have NOTHING to do with encryption (why break the lock when you can just take the key)- ffs, NIST was already caught the last time they backdoored encryption with Dual_EC_DRBG... And the ULTIMATE f'ing IRONY is that the worst-case scenario you warn about in these sleight-of-hand propaganda phone/encryption pieces was literally playing out in real time with the IME hack! A universal backdoor- on the loose in the wild- and that's not even the first time that's happened....

    Boingboing covered it- f'n Boingboing has somehow become a better site for uncovering 'tech dirt' than Techdirt.

    I used to love this site, I guess I still do on some level or I wouldn't bother typing all this- but you guys need to get your shit together- Stop being afraid of coming off as paranoid and just report the damn facts people need to know... or talk to your handlers and make it clear how much collateral damage they're causing- your reputation is being ruined by this transparent and largely ineffective bullshit- doesn't matter how many shills line up in the comments. 2018- people are waking up slowly but surely. Fake news is everywhere- not trumpkins 'fake news'- more like MIT prof Noam Chomsky's work. No one is immune to human nature.


  27. Anonymous Coward, 31 Jan 2018 @ 12:52am

    Re: "Save the children"

    Exactly. Facebook contacts "emergency services", which is the police in most places. The police are not trained to handle these things and treat at-risk people like dangerous criminals who must be suppressed. They go into the situation assuming they are potentially facing an armed threat, because FB has no details to pass on.

    FB isn't responsible for helping suicidal people, but I applaud the effort because I think it is driven by a genuine desire to do good. I think they need to re-evaluate the best way to help those at risk, including more anonymity for anyone identified as needing help.

    I have suffered from depression since around age 12; I have attempted suicide twice and been through a few bouts of cutting. Proper mental healthcare, as in attentive medication management and therapy, is incredibly helpful, but sometimes difficult to obtain. But even with proper treatment, situational problems can seem insurmountable; this is when most of us need people to reach out and actively support us. If it's left up to us to do the reaching out (the well-intentioned 'call me if you need to talk' is basically useless), we end up doing self-destructive things to signal the call for help, which might go completely unseen. I would have appreciated getting phone calls, or being contacted online, from anyone who was concerned about my well-being; it wouldn't have to be someone I knew. Just as long as it was someone who recognized something was wrong and reached out to me to talk. I know how terribly lonely suicidal people feel; even if surrounded by a loving family, they don't always know what is going on inside your head and can misinterpret your attitude to mean you need space when it's the exact opposite.

    I'm rooting for FB to get this right.


  28. PaulT (profile), 31 Jan 2018 @ 12:54am

    Re: Re: Re: Responsibility

    3) Because they're big and people will attack them for anything even tangentially related to their platform and

    4) Even the grandstanding politicians who use them as a scapegoat for everything wrong with society will struggle to use "they're trying to stop teenagers killing themselves" as effective ammunition.

    People are currently trying to attack social media platforms for everything from people being gullible enough to base their votes on outlandishly ridiculous fictions masquerading as news, to people stupid enough to literally eat poisonous chemicals because someone else dared them to. You don't have to come up with any additional conspiracy theories to explain why FB would think that being visibly active in preventing teen suicide might be a good idea.


  29. Anonymous Coward, 31 Jan 2018 @ 1:12am

    Re:

    but people want to trust it to reliably predict when a person is thinking of harming themselves?

    That is a people problem, not an AI problem. The computer output in cases like this should be treated as an indication that needs further investigation, rather than a reliable prediction.
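    In code, that separation might look like the minimal sketch below. Everything in it is an assumption for illustration -- the toy keyword scorer, the threshold, the review queue -- not anything Facebook has described about its actual system:

    # Human-in-the-loop triage: the model's score is a lead for review,
    # never an automatic trigger. All names and values are hypothetical.

    RISK_KEYWORDS = {"hopeless", "end it all", "goodbye forever"}  # toy stand-in
    REVIEW_THRESHOLD = 0.8  # assumed; would be tuned to reviewer capacity

    def self_harm_score(text: str) -> float:
        """Toy stand-in for a trained classifier; returns a score in [0, 1]."""
        hits = sum(1 for kw in RISK_KEYWORDS if kw in text.lower())
        return min(1.0, hits / 2)

    def triage(author_id: str, text: str, review_queue: list) -> None:
        score = self_harm_score(text)
        if score >= REVIEW_THRESHOLD:
            # A trained human decides what, if anything, happens next;
            # the model alone never contacts anyone.
            review_queue.append((author_id, score))

    queue: list = []
    triage("user123", "goodbye forever, everything feels hopeless", queue)
    print(queue)  # [('user123', 1.0)]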


  30. PaulT (profile), 31 Jan 2018 @ 1:48am

    Re:

    "That's like arresting someone because an AI predicted that they were going to commit a crime."

    No, it's really not. There are 2 things involved here. One is the AI prediction. The other is the action taken based on the prediction. The problem in your Minority Report example is that the person is arrested before they committed the crime. There's a wealth of other actions that can be taken based on the prediction that are not problematic in any way. If the reaction was simply to prioritise resources to enable police to catch the guy in the act, the AI prediction would not be a problem in any way.

    "What if they make a post about a movie that includes suicide? What if they post a piece of fiction that includes suicide? What if they simply post the wrong words?"

    I would hope that the AI is simply flagging the account up for investigation by a human rather than taking action directly. But, given that, surely an AI flagging such things is better than waiting around and hoping that one of the person's "friends" reports them instead?

    Again, the prediction is not a problem, it's the action taken based on that information. If someone loses his rent money on a prediction about a horse race that turned out to be wrong, it's the action of betting the whole of his rent that's the problem, not that the guy he spoke to tried to make a prediction on the outcome of the race.


  31. Anonymous Coward, 31 Jan 2018 @ 6:30am

    Vital interest

    The GDPR does allow processing of personal data if it's in the vital interest of the data subject or another person. And it allows for processing of special categories of personal data, including health data, if it's in the vital interest of the data subject or another person and where the data subject is physically or legally incapable of giving consent.

    So clearly the drafters took the interest of saving a person's life into consideration. But as is clear from the comments here, not everyone appreciates a company like Facebook monitoring what they say to determine their health status. So I don't see what's wrong with asking people for their consent before doing so.

    It shouldn't be hard. Facebook could just add a "life saver" setting, or whatever, that you can turn on or off, with the information that, if you turn it on, whatever you write on Facebook and/or in messages will be monitored for signs that you may hurt yourself, and what Facebook might do to help you if it detects such signs. At least it would be a conscious decision people make on whether or not they want to give Facebook that kind of power.
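    As a rough sketch, that consent gate could be as simple as the following. The setting name "life_saver_enabled" and both functions are made up for illustration -- this is one way it could look, not Facebook's actual settings API:

    def may_scan_for_self_harm(user_settings: dict) -> bool:
        # GDPR-style explicit consent: sensitive, health-related inference
        # runs only if the user deliberately switched it on; default is off.
        return user_settings.get("life_saver_enabled", False)

    def handle_post(user_settings: dict, text: str) -> None:
        if not may_scan_for_self_harm(user_settings):
            return  # no consent: the post is never analyzed for risk signals
        print(f"scanning {len(text)} chars for self-harm signals...")

    handle_post({}, "some post")                            # silently skipped
    handle_post({"life_saver_enabled": True}, "some post")  # gets scanned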


  32. Anonymous Cowherd, 31 Jan 2018 @ 9:14am

    Freedom of self-determination

    People who actually want corporate "innovations" to "improve" their lives are free to give permission for them to do that.

    Others would rather they mind their own business.


  33. Anonymous Coward, 31 Jan 2018 @ 9:59am

    The GDPR does allow processing ... vital interest

    There's a loophole so big you could drive a genocide through it.

    What would Iran, Egypt, NK...etc...etc...etc find to be in their vital interest?

    It is a grotesque error to conflate the intent of collection, or the collector, with the scope of potential use.


  34. Rekrul, 9 Feb 2018 @ 6:19pm

    Re: Re:

    No, it's really not. There are 2 things involved here. One is the AI prediction. The other is the action taken based on the prediction. The problem in your Minority Report example is that the person is arrested before they committed the crime. There's a wealth of other actions that can be taken based on the prediction that are not problematic in any way. If the reaction was simply to prioritise resources to enable police to catch the guy in the act, the AI prediction would not be a problem in any way.

    Facebook's page on this mentions "first responders" and "wellness checks". So in other words, they send police or doctors to check up on the person. I'm too lazy to search right now, but haven't there been stories right here on Techdirt of "wellness checks" going horribly wrong? I know you can find news reports of such things on YouTube.

    And how exactly do these wellness checks work in such cases? Is a simple denial of suicidal thoughts enough to satisfy the police, or does the person also have to submit to a psych evaluation? In other words, are they considered guilty until proven innocent?

    I would hope that the AI is simply flagging the account up for investigation by a human rather than taking action directly. But, given that, surely an AI flagging such things is better than waiting around and hoping that one of the person's "friends" reports them instead?

    I'd agree with you, but...

    How many times have you seen people go overboard and report jokes or completely innocent things just because they're afraid of "missing something" and decide to err on the side of caution? Having an AI flag even more posts for them to look at increases the pool of material for them to misinterpret. Preventing suicide is a noble cause, but given the history of people freaking out over jokes and other harmless stuff, what assurance is there that a perfectly happy, well-adjusted person won't have their life turned upside-down by someone who misinterpreted a joke or sarcastic remark and labeled them possibly suicidal? In an ideal world, they'd be checked on, declared OK, and that would be the end of it. However, it's not an ideal world, and an accusation of suicidal thoughts could lead to very real consequences, such as family and friends forever being overly critical of everything they say, gossiping behind their backs, etc.

    It's the same problem as with keyword flagging in the intelligence community. This very site has argued that collecting and going through everything leads to a needle in a haystack scenario. Wouldn't using AI to flag every post that might be suspicious lead to the same outcome?
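    A quick base-rate calculation shows why. Every number below is an assumption for the sake of the arithmetic -- Facebook has published no accuracy figures -- but the shape of the result holds for any rare-event classifier:

    posts = 1_000_000
    base_rate = 0.001            # assume 1 in 1,000 posts signals genuine risk
    sensitivity = 0.90           # assume the model catches 90% of true cases
    false_positive_rate = 0.02   # assume 2% of harmless posts get flagged

    true_flags = posts * base_rate * sensitivity                  # 900
    false_flags = posts * (1 - base_rate) * false_positive_rate   # 19,980
    precision = true_flags / (true_flags + false_flags)

    print(f"{true_flags + false_flags:,.0f} flags, {precision:.1%} genuine")
    # -> 20,880 flags, 4.3% genuine: roughly 22 false alarms for every
    #    real case, even with a decent classifier. Mostly hay, few needles.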


  35. PaulT (profile), 12 Feb 2018 @ 12:51am

    Re: Re: Re:

    "Facebook's page on this mentions "first responders" and "wellness checks"."

    Yes they do. But that has nothing to do with the scenario you were discussing. In fact, it's the opposite type of scenario.

    Person suspected of being a criminal = you wait until they have committed the crime before you react. Person suspected of being suicidal = you really want to intervene before they do kill themselves. They are extraordinarily different things, which is perhaps why you're confusing yourself by conflating them.

    "haven't there been stories right here on Techdirt of "wellness checks" going horribly wrong?"

    Yes, and the answer to that is "stop giving police military hardware and people on the force itching to use it at any given opportunity" and/or "train officers in how to de-escalate situations without using one of their toys", not "never tell authorities that someone may be in danger".

    "And how exactly do these wellness checks work in such cases?"

    I don't know. We don't even know whether human interaction is involved or if they just send automated messages, and we don't know how reports are dealt with from then on. But that's a procedural issue, unrelated to whether or not Facebook should be providing their leads.

    "How many times have you seen people go overboard and report jokes or completely innocent things just because they're afraid of "missing something" and decide to err on the side of caution? "

    How many times have you seen a devastating suicide (or worse - some people don't only want to take themselves out), only for people to then realise all the warning signs they wish they had acted upon that could easily have saved lives?

    I get what you're saying, but Facebook are doing the right thing by flagging something, even if it doesn't guarantee accuracy or success. Their other option - do absolutely nothing - only encourages people to blame them for the full tragedy later on.


