I think what I was really trying to get at in my original comment was that that particular argument seems aimed at people who already think it would be robustly good to slow down dangerous technologies. But the people who would most benefit from this post are those who do not already think this; for them it doesn’t help much and might actively hurt.
This is kind of a strange comment to me. The argument, and indeed the whole post, is clearly written for people in the ecosystem (“my impression is that for people worried about extinction risk from artificial intelligence, strategies under the heading ‘actively slow down AI progress’ have historically been dismissed and ignored”), for whom differential technological progress is a pretty common concept and is relied upon in lots of arguments. It’s pretty clear that this post is written to point out an undervalued position to those people.
Sometimes I feel like people in the AI x-risk ecosystem who interface with policy and DC replace their epistemologies with a copy of the epistemology they find in various parts of the policy-control machine in DC, in order to better predict them and perform the correct signals, asking themselves what people in DC would think rather than what they themselves would think. I don’t know why you think this post was aimed at those people, or why you point out that the post is making false inferences about its audience, when the post is pretty clear that its primary audience is the people directly in the ecosystem (“The conversation near me over the years has felt a bit like this”).
I just do not think that the post is written for people who think “slowing down AI capabilities is robustly good.” If people thought that, then why do they need this post? Surely they don’t need somebody to tell them to think about it?
So it seems to me like the best audience for this post would be those who currently think something else (including people at some AI companies and people involved in policy, some of whom are reading this post), for example that the robustly good thing is for their chosen group to be ahead, so that they can execute whatever strategy they believe only they can carry out correctly.
The people I’ve met who don’t want to think about slowing down AI capabilities just don’t seem to think that slowing down AI progress would be robustly good, because that wouldn’t be a consistent view! They often seem to hold the view that nothing is robustly good, or that some other thing (“get more power”) is robustly good. Such people won’t really be swayed by the robust-priors argument, or may even be swayed in the other direction.
I see. You’re not saying “staffers of the US government broadly won’t find this argument persuasive”, you’re saying “there are some people in the AI x-risk ecosystem who don’t think slowing down is robustly good, and won’t find this particular argument persuasive”.
I have less of a disagreement with that sentence.
I’ll add that:

1. I think most of the arguments in the post are relevant to those people, and Katja only says that these moods are “playing a role”, which does not mean that everyone agrees with them.

2. You write “If people thought that, then why do they need this post? Surely they don’t need somebody to tell them to think about it?” Sometimes people need help noticing the implications of their beliefs, due to all sorts of motivated cognition. I don’t think the post relies on that, and it shouldn’t be the primary argument, but I think it’s genuinely helpful for some people (and it was a bit helpful for me to read).
Yeah, I agree with all this.
Thread success!