More broadly, TurnTrout, I’ve noticed you using this whole “look, if something positive happened, LW would totally rip on it! But if something is presented negatively, everyone loves it!” line of reasoning a few times (e.g., I think this logic came up in your comment about Evan’s recent paper). And I see you taking up something of a “the people with high P(doom) just have bad epistemics” flag in some of your comments.
A few thoughts (written quickly, prioritizing speed over precision):
I think that epistemics are hard & there are surely several cases in which people are biased toward high P(doom). Examples: Yudkowsky was one of the first thinkers/writers about AI, some people might have emotional dispositions that lead them toward anxious/negative interpretations in general, some people find it “cool” to think they’re one of the few people who are able to accurately identify the world is ending, etc.
I also think that there are plenty of factors biasing epistemics in the “hopeful” direction. Examples: The AI labs have tons of money and status (& employ large fractions of the community’s talent), some people might have emotional dispositions that lead them toward overly optimistic/rosy interpretations in general, some people might find it psychologically difficult to accept premises that lead them to think the world is ending, etc.
My impression (which could be false) is that you seem to be exclusively or disproportionately critical of poor arguments when they come from the “high P(doom)” side.
I also think there’s an important distinction between “I personally think this argument is wrong” and “look, here’s an example of propaganda + poor community epistemics.” In general, I suspect community epistemics are better when people tend to respond directly to object-level points and have a relatively high bar for saying “not only do I think you’re wrong, but also here are some ways in which you and your allies have poor epistemics.” (IDK though; insofar as you actually believe that’s what’s happening, it seems good to say it aloud, and I think there’s a version of this that goes too far and polices speech counterproductively. But I do think that statements like “community epistemics have been compromised by groupthink and fear” are pretty unproductive and could be met with statements like “community epistemics have been compromised by powerful billion-dollar companies that have clear financial incentives to make people overly optimistic about the trajectory of AI progress.”)
I am quite worried about tribal dynamics reducing people’s ability to engage in productive truth-seeking discussions. I think you’ve pointed out how some of the stylistic/tonal things from the “high P(doom)//alignment hard” side have historically made discourse harder, and I agree with several of your critiques. More recently, though, I think the “low P(doom)//alignment not hard” side seems to be falling into similar traps (e.g., attacking strawmen of those they disagree with, engaging in some sort of “ha, the other side is not only wrong but also just dumb/unreasonable/epistemically corrupted” vibe that predictably makes people defensive & makes discourse harder).
My guess is that it’s relatively epistemically corrupting and problematic to spend a lot of time engaging with weak arguments.
I think it’s tempting to make the mistake of thinking that debunking a specific (bad) argument is the same as debunking a conclusion. But these are actually extremely different operations: one requires understanding a specific argument, while the other requires a level-headed investigation of the overall situation. Separately, there are often good intuitions underlying bad arguments, and recovering those intuitions is an important part of truth-seeking.
I think my concerns here probably apply to a wide variety of people thinking about AI x-risk. I worry about this for myself.
Thanks for this, I really appreciate this comment (though my perspective is different on many points).
My impression (which could be false) is that you seem to be exclusively or disproportionately critical of poor arguments when they come from the “high P(doom)” side.
It’s true that I spend more effort critiquing bad doom arguments. I would like to note that when, e.g., I read Quintin, I’m generally either in agreement or neutral. I bet there are a lot of cases where you would think “that’s a poor argument” and I’d say “hm, I don’t think Akash is getting the point (and it’d be good if someone could give a better explanation).”
However, it’s definitely not true that I never critique optimistic arguments which I consider poor. For example, I don’t get why Quintin (apparently) thinks that spectral bias is a reason for optimism, and I’ve said as much on one of his posts. I’ve said something like “I don’t know why you seem to think you can use this mathematical inductive bias to make high-level intuitive claims about what gets learned. This seems to fall into the same trap that ‘simplicity’ theorizing does.” I probably criticize or express skepticism of certain optimistic arguments at least twice a week, though not always on public channels. And I’ve also pushed back on people being unfair, mean, or mocking of “doomers” on private channels.
I do think that statements like “community epistemics have been compromised by groupthink and fear” are pretty unproductive and could be met with statements like “community epistemics have been compromised by powerful billion-dollar companies that have clear financial incentives to make people overly optimistic about the trajectory of AI progress.”
I think both statements are true to varying degrees (the former more than the latter in the cases I’m considering). They’re true and people should say them. The fact that I work at a lab absolutely affects my epistemics (though I think the effect is currently small). People should totally consider the effect which labs are having on discourse.
have a relatively high bar for saying “not only do I think you’re wrong, but also here are some ways in which you and your allies have poor epistemics.”
I do consider myself to have a high bar for this, and the bar keeps getting passed, so I say something. EDIT: Though I don’t mean for my comments to imply that “someone and their allies” have bad epistemics. Ideally I’d like to communicate “hey, something weird is in the air, guys, can’t you sense it too?” However, I think I’m often more annoyed than that, and so I don’t communicate it the way I’d like.
See also “Other people are wrong” vs “I am right”, reversed stupidity is not intelligence, and the cowpox of doubt.