Personally, I sometimes have the opposite metacognitive concern: that I’m not freaking out enough about AI risk. The argument goes: if I don’t have a strong emotional response, doesn’t that mean I’m lying to myself about believing that AI risk is real? I even did a few exercises in which I tried to visualize either the doom itself or some symbolic representation of the doom, in order to see whether it triggers emotion or, conversely, exposes some self-deception, something that rings fake. The mental state that it triggered was interesting, more like a feeling of calm, meditative sadness than panic. Ultimately, I think you’re right when you say that if something doesn’t threaten me on the timescale of minutes, it shouldn’t send me into fight-or-flight. And, it doesn’t.
I also tentatively agree that it feels like there’s something unhealthy in the panicky response to Yudkowsky’s recent proclamation of doom, and that it might lead to muddled thinking. For example, it seems like everyone around here is becoming convinced of shorter and shorter timelines, without sufficient evidence IMO. But, I don’t know whether your diagnosis is correct. Most of the discourse about AI risk around here is not producing any real progress on the problem. But, occasionally it does. And I’m not sure whether the root of the problem is psychological/memetic (as you claim) or just that it’s a difficult problem that only a few can meaningfully contribute to.
…if I don’t have a strong emotional response, doesn’t it mean I’m lying to myself about believing that AI risk is real?
Just to be clear, I’m not talking about strong emotional responses per se. I’m talking about the body freaking out — which often produces strong emotions.
I’m way less concerned about heart-wrenching grief than I am about nervousness, for instance.
Most of the discourse about AI risk around here is not producing any real progress on the problem. But, occasionally it does. And I’m not sure whether the root of the problem is psychological/memetic (as you claim) or just that it’s a difficult problem that only a few can meaningfully contribute to.
That’s fair.
Though I do think the immense difficulty with coordination around AI risk stuff totally is a memetic thing, and that AI risk is a hard enough problem that tackling it directly while shrugging at the memetic problem amounts to pushing on the door at its hinges.
Just to be clear, I’m not talking about strong emotional responses per se. I’m talking about the body freaking out — which often produces strong emotions.
There are a few different psychological theories about how emotions get produced, and about how much other physical reactions influence, or are influenced by, that process.
So… this isn’t a particularly useful distinction, and I didn’t see much of it in-depth in the post proper.
If this wasn’t a useful distinction for you, then why comment on it? To tell me not to have made it at all?
Good point, just something I noticed, but now that you mention it, it’s not very useful. EDIT: wait, no, I was commenting on it to point out that you don’t seem to have made the distinction yourself in the post proper.
It’s easier to be more composed about a problem, when you think you have the kernel of a solution. I mean, aren’t you the founder of the Infra-Bayesian school of thought?