Post by robeiae on Dec 16, 2018 11:17:13 GMT -5
Or even "Is there a line?"
From November, an article about how FB uses "AI" to monitor people's accounts to protect them from self-harm: www.npr.org/2018/11/17/668408122/facebook-increasingly-reliant-on-a-i-to-predict-suicide-risk
How does this work? Well:

"In the very first month when we started it, we had about 100 imminent-response cases," which resulted in Facebook contacting local emergency responders to check on someone. But that rate quickly increased.

"To just give you a sense of how well the technology is working and rapidly improving ... in the last year we've had 3,500 reports," she says. That means AI monitoring is causing Facebook to contact emergency responders an average of about 10 times a day to check on someone — and that doesn't include Europe, where the system hasn't been deployed. (That number also doesn't include wellness checks that originate from people who report suspected suicidal behavior online.)

Davis says the AI works by monitoring not just what a person writes online, but also how his or her friends respond. For instance, if someone starts streaming a live video, the AI might pick up on the tone of people's replies.
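Purely as a thought experiment (Facebook hasn't published how any of this works, so every phrase list, weight, and threshold below is something I made up), the "tone of people's replies" signal might amount to something as crude as this:

```python
# Toy sketch only: scores a post by combining its own text with the tone
# of friends' replies, roughly as the NPR piece describes. All phrase
# lists, weights, and the threshold are invented for illustration.

CONCERNING_PHRASES = ["don't want to be here", "can't go on", "goodbye"]
WORRIED_REPLY_CUES = ["are you ok", "please call", "dm me", "thinking of you"]

def phrase_hits(text, phrases):
    """Count how many cue phrases appear in the text (case-insensitive)."""
    lowered = text.lower()
    return sum(1 for p in phrases if p in lowered)

def risk_score(post, replies):
    """Blend the post's own signal with the tone of its replies."""
    post_signal = phrase_hits(post, CONCERNING_PHRASES)
    reply_signal = sum(phrase_hits(r, WORRIED_REPLY_CUES) for r in replies)
    # Weight the author's own words more heavily than the audience reaction.
    return 2.0 * post_signal + 0.5 * reply_signal

ESCALATION_THRESHOLD = 2.5  # invented cutoff: above this, escalate

post = "i really don't want to be here anymore"
replies = ["are you ok??", "please call me", "dm me right now"]
score = risk_score(post, replies)
print(f"score={score:.1f}, escalate={score >= ESCALATION_THRESHOLD}")
```

The real system is presumably far more sophisticated than keyword counting, but the point stands: some threshold somewhere decides when a post becomes a police visit.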
But:

But Facebook's Davis says releasing too many details about how the AI works might be counterproductive.

"That information could allow people to play games with the system," Davis says. "So I think what we are very focused on is working very closely with people who are experts in mental health, people who are experts in suicide prevention to ensure that we do this in a responsible, ethical, sensitive and thoughtful way."
And:

[Doctor and research fellow Mason Marks] isn't sure this AI system is the right solution, in part because Facebook has refused to share key data, such as the AI's accuracy rate. How many of those 3,500 "wellness checks" turned out to be actual emergencies? The company isn't saying.

He says scrutiny of the system is especially important because this "black box of algorithms," as he calls it, has the power to trigger a visit from the police.
For sure, my first question when I saw that big number of reports was "how many of these actually meant something, i.e., concerned an actual emergency that needed attention?" Because if there wasn't an actual emergency, aren't we looking at seriously intrusive behavior from Facebook, which isn't even a government?
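To put made-up numbers on it: if only, say, 350 of those 3,500 escalations were genuine emergencies, the system's precision would be 350 / 3,500 = 10%, which would mean nine out of every ten police visits landed on someone who didn't need one. Facebook hasn't released the real figure, so we can't know which way it cuts.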
More recently, there's the Pete Davidson saga that "culminated" on Instagram yesterday: www.cnn.com/2018/12/15/entertainment/pete-davidson-wellness-check/index.html

New York police said they performed a "wellness check" on "Saturday Night Live" cast member Pete Davidson after he posted an alarming note on Instagram on Saturday in which he said, "I really don't want to be on this earth anymore."

[snip]

Earlier Saturday, Davidson posted this message on Instagram: "i really don't want to be on this earth anymore. i'm doing my best to stay here for you but i actually don't know how much longer i can last. all i've ever tried to do was help people. just remember i told you so."

He followed the words with a heart emoji. The Instagram account was later deleted.
I don't know Davidson, of course, but my initial take on that sort of post is that it's simple vaguebooking, especially given the last line. And I'm guessing that not every instance of vaguebooking on Twitter and Instagram (I don't know about FB, given its protocols) leads to a wellness check. Nor should it, imo.
And yet...we don't want to not help when someone needs help, right? Still, I think there needs to be some sort of line in this regard, but I'll admit to not knowing where to draw it.
Thoughts?