|
Post by robeiae on Nov 30, 2017 9:37:25 GMT -5
money.cnn.com/2017/11/27/technology/facebook-ai-suicide-prevention/index.html
This strikes me as something based mostly on good intentions (there are some concerns about potential liability in all of this, I think) that is nonetheless horribly, horribly wrong-headed and inappropriate. Will there be a new "terms of service" agreement for users, so they can consent to having "AI" monitor their posts and evaluate their mental state?
|
|
|
Post by Vince524 on Dec 1, 2017 12:50:27 GMT -5
I'm pretty sure they already monitor FB in some ways, because I'll see ads for writer-, KISS-, STAR TREK/WARS-, superhero-, and bacon-related things in my news feed. It also boosts posts from people I interact with more than ones from people I don't.
Having said that, what does flagging a post mean? Is a bot going to email you and ask, "Are you okay?" Are they going to suspend an account? How does that help?
What is it looking for? If a person actually puts something on their timeline like "I'm going to kill myself," I'd wager more than 75% of those are sarcasm, like "I'm going to kill my husband for using the last of the toilet paper." It's hyperbole.
I'd expect people who are really in a bad place aren't saying those words until it's too late. They're probably posting things about how they can't get out of bed, or another day that sucked and how horrible their life is, etc.
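To put the worry above in concrete terms: a naive keyword-matching approach would flag the hyperbole and miss the indirect warning signs. This is purely an illustrative sketch of that failure mode; the keyword list and function are my own invention, not anything Facebook has described about its actual system.

```python
# Hypothetical sketch of naive keyword-based flagging (NOT Facebook's method):
# it fires on hyperbole (false positive) and misses indirect signals (false negative).

KEYWORDS = ["kill myself", "going to kill"]  # illustrative list, invented for this example

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any listed keyword (case-insensitive)."""
    text = post.lower()
    return any(kw in text for kw in KEYWORDS)

# Hyperbole trips the filter (false positive):
print(naive_flag("I'm going to kill my husband for using the last of the toilet paper"))  # True

# An indirect but arguably more genuine signal is missed (false negative):
print(naive_flag("Another day that sucked. I can't even get out of bed anymore."))  # False
```

Both outcomes are exactly the cases the posts above describe: sarcasm gets flagged, while the people "really in a bad place" never use the trigger phrases at all.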
I think the idea of helping to prevent suicide is great. Like you said, good intentions. But how will this work? What if their formula is wrong? Will something happen that becomes a public record? And what if it's right?
A person who has thoughts of suicide isn't helped by being outed for those thoughts against their will.
|
|
|
Post by Optimus on Dec 1, 2017 23:33:15 GMT -5
Agreed. I also don't think it will work anyway. The internet turned Microsoft's AI into a racist in record time. No telling what it'll do to FB's "AI-based suicide prevention efforts."
|
|