Giving this a brief look, and responding in part to this and in part to my previous impressions of such worldviews...
They don’t mean “AI is destroying the world”, they mean “tech bros and greedy capitalists are destroying the world, and AI is their current fig leaf. AI is impotent, just autocomplete garbage that will never accomplish anything impressive or meaningful.”
This mindset is saying, “Why are these crazy techies trying to spin this science-fiction story? This could never happen, and would be horrible if it did.”
I want a term for the aspect of this viewpoint which is purely reactive, deliberately anti-forward-looking. Anti-extrapolation? Tech-progress denying?
A viewpoint that is allergic to the question, “What might happen next?”
This viewpoint is heavily entangled with bad takes on economic policy as well, as a result of the same failure to extrapolate.
Also tends to be correlated with anger at existing systems without wanting to engage in architecting better alternatives. Again, because designing a better system requires lots of prediction and extrapolation. How would it work if we designed a feedback mechanism like this vs. that? Well, we have to run mental simulations, look for edge cases to test against, and mathematically explore how the dynamics evolve.
A vibes-based worldview that shrinks from analyzing gears. This phenomenon is not particularly correlated with any political stance; some subset of every political party will have many such people in it.
Can such people be fired up to take useful actions on behalf of the future? Probably. I don’t think the answer is as simple as changing terminology or carefully modelling their current viewpoints and bridging the inferential divides. If the conceptual bridge you build for them is built of gears, they will be extremely reluctant to cross it.
While looking for more gears-based leftist takes on AI, I found this piece by Daniel Morley, published in the magazine of the Trotskyist “Revolutionary Communist International”. While it contains some fundamental misunderstandings (I personally cringed at the conflation of consciousness and intelligence), it shows the writer has done a surprising amount of technical due diligence (it briefly touches on overfitting and adversarial robustness). While its thesis boils down to “AI will be bad under capitalism (because technological unemployment and monopolies) but amazing under communism (because AI can help us automate the economy), so let us overthrow capitalism faster”, it at least has a thesis derived from coherent principles and a degree of technical understanding. Also, it cited references and made quite tasteful use of Stable Diffusion for illustrations, which was nice.
Anyways I guess my somewhat actionable point here is that the non-postmodernist Marxists seem to be at least somewhat thinking (as opposed to angry-vibing) about AI.
From today’s perspective, Marx is just another old white cishet tech bro. (something something swims left)
I never expected that one day I would miss the old-style Marxists, but God forgive me, I do. We disagreed on many things, but at least we were able to have an intelligent debate.
Political positions are inherently high-dimensional, and the direction called “leftward” is constantly being re-oriented according to where the set of people and institutions considered “the left” seem to be moving.
I don’t think the answer is as simple as changing terminology or carefully modelling their current viewpoints and bridging the inferential divides.
Indeed, and I think that fact is precisely the message I want communicators to grasp: I have very little reach, but I have significant experience talking to people like this, and I want to transfer some of the knowledge from that experience to people who can use it better.
The thing I’ve found most useful is being able to express that significant parts of their viewpoint are reasonable. E.g., one framing I’ve tried is “AI isn’t just stealing our work, it’s also stealing our competence”. It hasn’t stuck, though. I also find it helpful to point out that yes, climate change is an accurate (if somewhat understated) picture of what doom looks like.
I do think “allergies” are a good way to think about it, though. They’re not unable to consider what might happen if AI keeps going as it is; they’re part of a culture that is trying to apply antibodies to AI. And those antibodies include active-inference wishcasting like “AI is useless”. They know it’s not completely useless, but the antibody requires them not to acknowledge that in order for it to bind. And their criticisms aren’t wrong, just incomplete: the problems they raise with AI are typically real problems, but not high-impact ones so much as ones they think will reduce the marketability of AI.
Zvi has an expansion on the vibes-based vs gears-based thinking model that I have found useful for thinking about politics: his take on Simulacra levels.