I just do not think the post is written for people who already believe “slowing down AI capabilities is robustly good.” If people thought that, why would they need this post? Surely they don’t need somebody to tell them to think about it?
So it seems to me that the best audience for this post is people (including some at AI companies, and some involved in policy, which includes people reading this post) who currently think something else: for example, that the robustly good thing is for their chosen group to be ahead, so that they can execute whatever strategy they believe only they can carry out correctly.
The people I’ve met who don’t want to think about slowing down AI capabilities don’t seem to believe that slowing AI progress would be robustly good, because holding both views wouldn’t be consistent! They often hold the view that nothing is robustly good, or that some other thing (“get more power”) is robustly good. Such people won’t really be swayed by the robust-priors argument, or might even be swayed in the other direction.
I see. You’re not saying “staffers of the US government broadly won’t find this argument persuasive”, you’re saying “there are some people in the AI x-risk ecosystem who don’t think slowing down is robustly good, and won’t find this particular argument persuasive”.
I have less of a disagreement with that sentence.
I’ll add two things:

1. I think most of the arguments in the post are relevant to those people, and Katja only says that these moods are “playing a role,” which does not mean that everyone agrees with them.

2. You write, “If people thought that, why would they need this post? Surely they don’t need somebody to tell them to think about it?” Sometimes people need help noticing the implications of their beliefs, due to all sorts of motivated cognition. I don’t think the post relies on that, and it shouldn’t be the primary argument, but I think it’s genuinely helpful for some people (and was a bit helpful for me to read).
Yeah, I agree with all this.
Thread success!