Post by evileeyore on Feb 8, 2020 19:19:39 GMT
From @conceptualjames (James Lindsay) Twitter feed via ThreadReaderApp:
Critical theories (like Social Justice) and the paperclip maximizer problem, a thread:
In philosophy regarding AI, there's what's known as the "paperclip maximizer problem." It's that a low-level AI that maximizes the number of paperclips in the environment/universe is a danger.
The point of the paperclip-maximizer thought experiment is that even a really dumb/low-level AI that is programmed only to do one simple thing, maximize paperclips, could cause an apocalypse by diverting ever more resources to its one task without knowing to stop.
Critical theories like Critical Social Justice are actually the paperclip maximizer in social philosophy and activism form. Because they are imbued with critical methods at their heart, all they do is critique ("ruthlessly," per Marx), and they divert all resources to the task.
Critical methods like Critical Social Justice only really know how to do one thing: complain for "change" and "justice." The people involved are ultimately just tools of the "justice"-maximizing meme. They can say they want more justice, but critical methods mean it's never found.
So, Critical Social Justice can propose a solution to something, like an MLK essay for all students or more representation or whatever, and then it will problematize the solution because that's literally all it does: problematize systems. It doesn't know how to build or stop.
Eventually, it will divert more and more resources to achieving "justice" through problematizing everything, including its own solutions (to wit: "people of color" as an idea is now being problematized as an invention of whiteness that allows whites the privilege of ignoring race).
In the paperclip-maximizer problem, philosophers speculate that a just-barely sufficiently smart AI will, in the effort to maximize paperclips, learn to resist all attempts to stop it or turn it off (as they're against its one goal). Amazing how well that maps onto Theory.
In my reading of Theory, which is getting pretty broad now, one of the most heavily Theorized general concepts is the idea that attempts to criticize or stop Theory are themselves part of the problem Theory is trying to solve. It really is a just-smart-enough paperclip optimizer.