People who dislike AI, and who therefore could be taking risks from AI seriously, are instead having reactions like this: https://blue.mackuba.eu/skythread/?author=brooklynmarie.bsky.social&post=3lcywmwr7b22i Why? If we soberly evaluate what this person has said about AI, and just, like, think about why they would say such a thing—well, what do they seem to mean? They typically say “AI is destroying the world” (someone said that in the comments), but then roll their eyes at the idea that AI is powerful. They say the issue is water consumption—why would someone repeat that idea? Under what framework is that a sensible combination of things to say? What consensus are they trying to build? What about the article are they responding to?
I think there are straightforward answers to these questions, answers that are reasonable and good on behalf of the people who say these things, but which are not as effective by their own standards as they could be, and which miss upcoming concerns. I could say more about what I think, but I’d rather post this as leading questions, because I think the reading of this person’s posts that you’d need to do to get from those questions to my opinions will build more of the model I want to convey than stating it directly would.
But I think the fact that articles like this get reactions like this is an indication that orgs like Anthropic or PauseAI are not engaging seriously with detractors, and trying seriously to do so seems to me like a good idea. It’s not my top priority ask for Anthropic, but it’s not very far down the virtual list.
But it’s just one of many reactions of this category I’ve seen that seem to me to indicate that people engaging with a rationalist-type negative attitude towards their observations of AI are not communicating successfully with people who have an ordinary-person-type negative attitude towards what they’ve seen of AI. I suspect that at least a large part of the issue is that rationalists have built up antibodies to a certain kind of attitude and auto-ignore it, despite what I perceive to be its popularity, and as a result don’t build intuitive models about how to communicate with such a person.
Giving this a brief look, and responding in part to this and in part to my previous impressions of such worldviews...
They don’t mean “AI is destroying the world”, they mean “tech bros and greedy capitalists are destroying the world, and AI is their current fig leaf. AI is impotent, just autocomplete garbage that will never accomplish anything impressive or meaningful.”
This mindset is saying, “Why are these crazy techies trying to spin this science-fiction story? This could never happen, and would be horrible if it did.”
I want a term for the aspect of this viewpoint which is purely reactive, deliberately anti-forward-looking. Anti-extrapolation? Tech-progress denying?
A viewpoint that is allergic to the question, “What might happen next?”
This viewpoint is heavily entangled with bad takes on economic policies as well, as a result of failure to extrapolate.
Also tends to be correlated with anger at existing systems without wanting to engage in architecting better alternatives. Again, because designing a better system requires lots of prediction and extrapolation. How would it work if we designed a feedback mechanism like this vs. that? Well, we have to run mental simulations, look for edge cases to test against, and mathematically explore the evolution of the dynamics.
A vibes-based worldview that shrinks from analyzing gears. This phenomenon is not particularly correlated with any political stance; some subset of every political party will have many such people in it.
Can such people be fired up to take useful actions on behalf of the future? Probably. I don’t think the answer is as simple as changing terminology or carefully modelling their current viewpoints and bridging the inferential divides. If the conceptual bridge you build for them is built of gears, they will be extremely reluctant to cross it.
While looking for more gears-based leftist takes on AI, I found this piece by Daniel Morley, published in the magazine of the Trotskyist “Revolutionary Communist International”. While it contains some fundamental misunderstandings (I personally cringed at the conflation of consciousness and intelligence), it shows the writer has done a surprising amount of technical due diligence (it briefly touches on overfitting and adversarial robustness). While its thesis boils down to “AI will be bad under capitalism (because technological unemployment and monopolies) but amazing under communism (because AI can help us automate the economy), so let us overthrow capitalism faster”, it at least has a thesis derived from coherent principles and a degree of technical understanding. It also cited references and made quite tasteful use of Stable Diffusion for illustrations, which was nice.
Anyways I guess my somewhat actionable point here is that the non-postmodernist Marxists seem to be at least somewhat thinking (as opposed to angry-vibing) about AI.
From today’s perspective, Marx is just another old white cishet tech bro. (something something swims left)
I never expected that one day I would miss the old-style Marxists, but God forgive me, I do. We disagreed on many things, but at least we were able to have an intelligent debate.
Political positions are inherently high-dimensional, and “leftward” is constantly being rotated according to wherever the set of people and institutions considered to be “the left” seems to be moving.
I don’t think the answer is as simple as changing terminology or carefully modelling their current viewpoints and bridging the inferential divides.
Indeed, and I think the fact that this is the case is the message I want communicators to grasp: I have very little reach, but I have significant experience talking to people like this, and I want to transfer some of the knowledge from that experience to people who can use it better.
The thing I’ve found most useful is to be able to express that significant parts of their viewpoint are reasonable. Eg, one thing I’ve tried is “AI isn’t just stealing our work, it’s also stealing our competence”. Hasn’t stuck, though. I find it helpful to point out that yes, climate change sure is a (somewhat understated) accurate description of what doom looks like.
I do think “allergies” are a good way to think about it, though. They’re not unable to consider what might happen if AI keeps going as it is, they’re part of a culture that is trying to apply antibodies to AI. And those antibodies include active inference wishcasting like “AI is useless”. They know it’s not completely useless, but the antibody requires them to not acknowledge that in order for its effect to bind; and their criticisms aren’t wrong, just incomplete—the problems they raise with AI are typically real problems, but not high impact ones so much as ones they think will reduce the marketability of AI.
Zvi has an expansion on the vibes-based vs gears-based thinking model that I have found useful for thinking about politics: his take on Simulacra levels.