If this post had just said “I think some people may feel strongly about AI x-risk for reasons that ultimately come down to some sort of emotional/physical pain whose origins have nothing to do with AI; here is why I think this, and here are some things you can do that might help find out whether you’re one of them and to address the underlying problem if so”, then I would consider it very valuable and deserving of attention and upvotes and whatnot. I think it’s very plausible that this sort of thing is driving at least some AI-terror. I think it’s very plausible that a lot of people on LW (and elsewhere) would benefit from paying more attention to their bodies.
… But that’s not what this post does. It says you have to be “living in a[...] illusion” to be terrified by apocalyptic prospects. It says that if you are “feeling stressed” about AI risks then you are “hallucinating”. It says that “what LW is actually about” is not actual AI risk and what to do about it but is, by implication, this alleged “game” of which Eliezer Yudkowsky is the “gamesmaster” and which works by engaging everyone’s fight-or-flight reactions to induce terror. It says that, for reasons beyond my understanding, it is impossible to make actual progress on whatever real AI risk problems there might be while in this stressed-because-of-underlying-issues state of mind. It says that “the reason” (italics mine) AI looks like a big threat is that the people to whom it seems like a big threat are “projecting [their] inner hell onto the external world”. And it doesn’t offer the slightest shred of evidence for any of this; we are just supposed to, I dunno, feel in our bodies that Valentine is telling us the truth, or something like that.
I don’t think this is good epistemics. Maybe there is actually really good evidence that the mechanism Valentine describes here is something like the only way that stress ever arises in human beings. (I wouldn’t be hugely surprised to find that it’s true for the stronger case of terror, and I could fairly easily be convinced that anyone experiencing terror over something that isn’t an immediate physical threat is responding suboptimally to their situation. Valentine is claiming a lot more than that, though.) But in that case I want to see the really good evidence, and while I haven’t gathered any actual statistics on how often people claiming controversial things with great confidence but unwilling to offer good evidence for them turn out to be right and/or helpful, I’m pretty sure that many of them don’t. Even more so when they also suggest that attempts to argue with them about their claims are some sort of deflection (or, worse, attempts to keep this destructive “game” going) that doesn’t merit engaging with.
Full disclosure #1: I do not myself feel the strong emotional reaction to AI risk that many people here do. I do not profess to know whether (as Valentine might suggest) this indicates that I am less screwed up psychologically than people who feel that strong emotional reaction, or whether (as Eliezer Yudkowsky might suggest) it indicates that I don’t understand the issues as fully as they do. I suspect that actually it’s neither of those (though either might happen to be true[1]) but just that different people get more or less emotionally involved in things in ways that don’t necessarily correlate neatly with their degree of psychological screwage or intellectual appreciation of the things in question.
[1] For that matter, the opposite of either might be true, in principle. I might be psychologically screwed up in ways that cut me off from strong emotions I would otherwise feel. Or I might have more insight into AI risk than the people who feel more strongly, insight that helps me see why it’s not so worrying or why being scared doesn’t help with it. I think these are both less likely than their opposites, for what it’s worth.
Full disclosure #2: Valentine’s commenting guidelines discourage commenting unless you “feel the truth of [that Valentine and you are exploring the truth together] in your body” and require “reverent respect”. I honestly do not know, and don’t know how I could tell with confidence, whether Valentine and I are exploring the truth together; at any rate, I do not have the skill (if that’s what it is) of telling what someone else is doing by feeling things in my body. I hope I treat everyone with respect; I don’t think I treat anyone with reverence, nor do I wish to. If any of that is unacceptable to Valentine, so be it.
Clarification for the avoidance of doubt: I don’t have strong opinions on just what probability we should assign to (e.g.) the bulk of the human race being killed-or-worse as a result of the actions of an AI system within the next century, nor on what psychological response is healthiest for any given probability. The criticisms above are not (at least, not consciously) some sort of disguise for an underlying complaint that Valentine is trying to downplay an important issue, nor for anger that he is revealing that an emperor I admire has no clothes. My complaint is exactly what I say it is: I think this sort of bulveristic “I know you’re only saying this because of your psychological problems, which I shall now proceed to reveal to you; it would be a total waste of time to engage with your actual opinions because they are merely expressions of psychological damage, and providing evidence for my claims is beneath me”[2] game is not only rude (which Valentine admits, and I agree that it is sometimes helpful or even necessary to be rude) but usually harmful and very much not the sort of thing I want to see more of on Less Wrong.
[2] I do not claim that Valentine is saying exactly those things. But that is very much the general vibe.
(Also somewhat relevant, though not especially to any of what I’ve written above, and dropped here without further comment: “Existential Angst Factory”.)
Whew! That’s a lot. I’m not going to try to answer all of that.
In short: I think you’re correctly following LW norms. You’re right that I wasn’t careful about tone, and by the norms here it’s good to note that.
And also, that wasn’t what this piece was about.
I intended it as an invitation. Not as a set of claims to evaluate.
If you look where I’m pointing, and you recognize some of yourself in it (which it sounds like you don’t!), then the suggestions I gesture toward (like Irene Lyon, and maybe loosening the mental grip on the doomy thoughts) might seem worth exploring.
I have no intention of putting an argument together, with evidence and statistics and the like, validating the mechanisms I’m talking about. That would actually go in the opposite direction of making an audible invitation.
But! I think your contribution is good. It’s maybe a little more indignant than necessary. But it’s… mmm… fitting, I’ll say.
I’ll leave it at that.
I think it would be more graceful of you to simply admit that there may be more than one reason for people to be in terror of the end of the world, and likewise to qualify your other claims to certainty and universality.
That’s the main point of what gjm wrote. I’m sympathetic to the view you’re trying to communicate, Valentine; but you used words that claim that what you say is absolute, immutable truth, and that’s the worst mind-killer of all. Everything you wrote just above seems to me to be just equivocation trying to deny that technical yet critical point.
I understand that you think that’s just a quibble, but it really, really isn’t. Claiming privileged access to absolute truth on LessWrong is like using the N-word in a speech to the NAACP. It would do no harm to what you wanted to say to use phrases like “many people” or even “most people” instead of the implicit “all people”, and it would eliminate a lot of pushback.
(I see that this comment has received a lot of downvotes. None of them is from me.)