Post by robeiae on Apr 7, 2017 17:57:40 GMT -5
Post by Christine on Apr 7, 2017 18:33:32 GMT -5
Post by Optimus on Apr 8, 2017 17:04:07 GMT -5
"Red Flag of Bullshit #1": There is no credible evidence anywhere that facial features are significantly correlated with criminal behavior. The entire premise of this supposed AI neural network is pure fantasy.

"Red Flag of Bullshit #2": This "study" was not published in a peer-reviewed journal. It was posted on arXiv.org (pronounced "archive"), a web archive hosted by Cornell where researchers can post "e-print" articles for people to review. Many of the papers are good, but since there is no peer review, there are also several loony turds on there. I would lump this turd of a study into the small but growing pile of bullshit on arXiv.

"Red Flag of Bullshit #3": The actual study itself. They took a bunch of mugshots and compared them to a bunch of non-mugshots. As almost anyone knows, mugshots tend to be pretty ugly. So this algorithm does nothing more than compare ugly facial features to relatively attractive facial features. They've basically invented an "ugly detector," nothing more.

Partly true. Barring any program bugs, an algorithm will execute whatever commands it is programmed to execute. That does not mean the algorithm isn't based on a total bullshit premise (see above). If an algorithm is bullshit to begin with, its results - while consistent - will also be bullshit.

This is nothing more than a fancy version of a lie detector. Both detect things other than what they are purported to detect (lie detectors actually detect stress, not lies; this algorithm actually detects "ugly," not criminality), and both get applied in non-evidence-based bullshit ways. Lie detectors have been abused by idiots and are now (thankfully) discredited. I imagine the same would happen to the algorithm from this study, if the day comes that some gullible fool ever takes it seriously.
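The "ugly detector" point is really a claim about confounded training data. A toy sketch of that failure mode (entirely synthetic data, no connection to the actual study): if the two photo sets differ systematically in some artifact like lighting, a classifier can score near-perfectly by learning that artifact while learning nothing about criminality.

```python
import random

random.seed(0)

# Synthetic data: "harshness" stands in for lighting/expression artifacts
# that correlate with the photo's SOURCE (mugshot vs. ID photo),
# not with the person in it.
def make_photo(is_mugshot):
    harshness = random.gauss(0.8 if is_mugshot else 0.2, 0.05)
    return {"harshness": harshness, "label": is_mugshot}

data = [make_photo(i % 2 == 0) for i in range(1000)]

# A one-feature "criminality detector": threshold on the lighting artifact.
def predict(photo):
    return photo["harshness"] > 0.5

accuracy = sum(predict(p) == p["label"] for p in data) / len(data)
print(f"accuracy: {accuracy:.2f}")  # near-perfect, with zero insight into criminality
```

The headline accuracy is real, but it measures the dataset's sampling artifact: garbage premise in, confident-looking garbage out.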
Post by robeiae on Apr 8, 2017 17:38:44 GMT -5
Lol. You know, I was originally going to call this thread "A thread for Don and Opty." Then start the first post with something like this: ...so Don can say "See! Told you so," and Opty can have fun ripping apart the study. Now I just have to wait for Don...

That said, here's the ProPublica piece: www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

This stuff is being used and it is a problem, imo.
Post by Optimus on Apr 8, 2017 17:58:07 GMT -5
I was only referring to the specific algorithm from that study, but yes, the use of such things is problematic (like a modern-day lie detector), and the example in your article supports what I said earlier: "Barring any program bugs, an algorithm will execute whatever commands it is programmed to execute. That does not mean that the algorithm isn't based on a total bullshit premise (see above). If an algorithm is bullshit to begin with, its results - while consistent - will also be bullshit."
Post by Don on Apr 8, 2017 18:39:20 GMT -5
See! Told you so.
I was waiting for Opty to debunk the "science" first.
Regardless of how this study was actually financed, it's a great argument for the separation of science and state. Most of Congress would vote to fund a project like this in a heartbeat if they thought it would work, regardless of where the study was published.
And what better argument for one version of "science" than a GIGO algorithm that can be "proven" to be "objective"... as long as the underlying bias of the arguments embedded in the algorithm aren't closely examined. They could implement procedures based on this bullshit... as Opty pointed out, see also: polygraph.
Oh, and a couple other see alsos: Lysenkoism and eugenics.
Post by robeiae on Apr 12, 2017 15:50:45 GMT -5
Post by ben on Apr 12, 2017 21:05:11 GMT -5
While we're bashing every word in the article, let me knock "algorithm" out of the park. I've noticed it more and more abused in the popular press in recent years to mean any task that a computer might be made to do, or something similarly vague. Usually what's being referred to is more likely a heuristic.
Now I'm reading that MIT Technology Review article and cringing at how the word is being used there...
Oh, whatever it was Chaucer said about how forms of speech change. "Back in my day, people weren't electrocuted until they were convicted of a capital crime. Nowadays, anyone who gets an electrical shock and lives says they were electrocuted."
Post by Optimus on Apr 13, 2017 1:56:10 GMT -5
I'm not really seeing where the MIT article blatantly misused the word "algorithm." Granted, I only kind of skimmed. What did I miss?
Post by Don on Apr 13, 2017 4:06:02 GMT -5
Oversimplification:
A cookbook is full of algorithms. A chef creates heuristically.
Deep learning technology is about chefs, not fry cooks.
That article refers to heuristic techniques as algorithms.
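The cookbook analogy can be made concrete. A minimal, illustrative contrast (both examples are generic textbook ones, not anything from the article): an algorithm follows fixed steps with a guaranteed correct result, while a heuristic is a rule of thumb that usually does well but promises nothing.

```python
# Algorithm: fixed steps, provably correct result every time (the cookbook recipe).
def binary_search(sorted_items, target):
    lo, hi = 0, len(sorted_items)
    while lo < hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo if lo < len(sorted_items) and sorted_items[lo] == target else -1

# Heuristic: greedy nearest-neighbor for a traveling-salesman tour.
# Usually decent, sometimes bad, never guaranteed optimal (the chef improvising).
def greedy_tour(points):
    tour, rest = [points[0]], set(points[1:])
    while rest:
        last = tour[-1]
        nearest = min(rest, key=lambda p: (p[0] - last[0]) ** 2 + (p[1] - last[1]) ** 2)
        tour.append(nearest)
        rest.remove(nearest)
    return tour

print(binary_search([1, 3, 5, 7], 5))         # 2
print(greedy_tour([(0, 0), (5, 5), (1, 0)]))  # [(0, 0), (1, 0), (5, 5)]
```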
Post by Don on Apr 14, 2017 4:12:15 GMT -5
More problems with the idea that AI is somehow automatically objective. The last thing we need is some supposedly objective "super-brain" justifying The Bell Curve. The wizard of AI, in the final analysis, has not escaped the man behind the curtain. Until "AI" transcends the man behind the curtain, we should probably, at least mentally, keep the quote marks wrapped tightly around the concept, to remind us of the biases of the guy wiggling the levers.
Post by Optimus on Apr 14, 2017 15:33:44 GMT -5
Post by Don on Apr 14, 2017 16:44:02 GMT -5
Don't get me wrong. I think heuristic-based AI has phenomenal potential in purely objective arenas. In the languages and the arts, the path's a lot steeper, IMO. And the psychology? of pre-crime detection goes even beyond that, into the realm of magic.
Take architecture as an example. Want an energy-efficient, cost-effective design, with amazing traffic flow, whisper-quiet ventilation, and gratifying use of natural light? AI will be doing that before long. Want the Sistine Chapel? Good luck with that.
Post by robeiae on Apr 14, 2017 16:46:02 GMT -5
Tangent: Is it just me, or is that one of the worst articles ever written? There are too many errors in it to even count. And the writer is a "news editor"? Screw it, someone just end me now.
Post by ben on Apr 16, 2017 12:32:09 GMT -5
Don pretty much nailed it, but here's an interesting video. At a higher level, you can say it's an algorithm that simulates neurons in an artificial neural network, but the training of the weights is where the "magic" is that makes it do something useful. The way you train it is an "algorithm" of iterating over inputs and checking the outputs against desired outputs, but that doesn't explain why each neuron ends up with the weight value it does. For most things that are called algorithms, you have good knowledge of how they work. It's like training animals: you can do it successfully for some tasks, but still no one knows exactly how the brain/mind learns. A fun video: www.youtube.com/watch?v=u9FPqkuoEJ8
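That training loop - iterate inputs, check outputs against desired outputs, nudge weights - can be sketched with a single artificial neuron. This is a toy example (one sigmoid unit learning OR by gradient descent, nothing to do with the study's network):

```python
import math
import random

random.seed(1)

# One neuron with two inputs, learning the OR function.
# The training PROCEDURE is a textbook algorithm, but the final weight
# values are just whatever the iteration settled on - nothing in them
# "explains" the behavior the way a hand-written rule would.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [random.uniform(-1, 1) for _ in range(2)]
b = random.uniform(-1, 1)

def neuron(x):
    # Weighted sum squashed through a sigmoid to a 0..1 output.
    return 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))

for _ in range(5000):                  # iterate over the inputs...
    for x, target in data:
        err = neuron(x) - target       # ...check output against desired output...
        w[0] -= 0.5 * err * x[0]       # ...and nudge the weights toward it
        w[1] -= 0.5 * err * x[1]
        b -= 0.5 * err

print([round(neuron(x)) for x, _ in data])  # [0, 1, 1, 1]
print(w, b)  # the opaque part: why exactly THESE numbers?
```

The loop reliably produces a neuron that computes OR, yet the learned numbers themselves answer no "why" questions - which is ben's point about the weights being where the unexplained "magic" lives.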