I am very conflicted about this post.
On the one hand, it deeply resonates with my own observations. Many of my friends from the community seem to be stuck in an addictive loop of proclaiming the end of the world every time a new model comes out. I think it’s even more dangerous because it becomes a social activity: “I am more worried than you about the end of the world, because I am smarter/more agentic than you, and I am better at recognizing the risk this represents for our tribe” gets implicitly tossed around in a cycle where members keep trying to one-up each other. This only ends when their claims get so absurd as to say the world will end next month, but even that threshold of absurdity seems to keep eroding over time.
Like someone else said here in the comments, if I were reading about this issue in a book about some unrelated doomsday cult, I would immediately dismiss them as a bunch of lunatics. “How many doomsday cults have existed in history? Even if yours is based on at least some solid theoretical foundations, what happened to the previous thousands of doomsday cults that also thought they were, and were wrong?”
On the other hand, I have to admit that the arguments in your post are a bit weak. They allow you to prove too much. To any objection, you could say, “Well, see, you are only objecting to this because you have been thinking about AI risk for too long, and thus you are not able to reason about the issue properly.” Even though I personally think you might be right, I cannot use this argument in good faith to help anyone else, and most likely they would just see through it.
So yes. Conflicted.
In any case, I think some introspection in the community would be ideal. Many members will say, “I have nothing to do with this, I’m a purely technical person, yada yada,” and it might be true for them! But is it true in general? Is thinking about AI risk causing harm to some members of the community and inducing cult-like behaviors? If so, I don’t think this is something we should turn a blind eye to, if only because such a situation would in itself be detrimental to AI risk research.
[Your arguments] allow you to prove too much. To any objection, you could say “Well, see, you are only objecting to this because you have been thinking about AI risk for too long, and thus you are not able to reason about the issue properly”.
Um. That’s a thing I suppose someone could do with some variation of these frames, sure. That’s not a move I’m at all interested in though. I really would prefer no one does this. It warps the point into something untrue and unkind.
I’m much more interested in something like:
There’s this specific internal system design a person can fall into.
It’s a pretty loud feature of the general rationalist cluster.
If you (a general reader, not you mkualqulera per se) are subject to this pattern and you want out, here’s a way out.
Also, people who are in such a pattern but don’t want out (or are too stuck in it to see they’re in it) are in fact making the real thing harder to solve. So noticing and getting out of this pattern really is a priority if you care about the real thing.
Now, if someone freaks out at me for pointing this out and makes some bizarre assumptions about what I’m saying (like, say, that I’m claiming there’s no AI problem or that I’m saying any action to deal with it is delusional), at that point I consider it way more likely that they’re “drunk”, and I’m much more likely to ignore what they have to say. Their ravings and condemnation land for me like a raging alcoholic who’s super pissed I implied they have a problem with an addiction.
But none of this is about me winning arguments with people. It’s about pointing out a mechanism for those who want to see it.
And for those for whom it doesn’t apply, or to whom it does but they’re determined not to look? Well, cool, good on them! Truly.
(Also, I like the kind of conflict you’re wrestling with. I don’t want to try to argue you out of that. I just wanted to clarify this part a bit.)
I have been following this group for ten years. It is just another doomsday cult.