A random observation from a think tank event last night in DC—the average person in those rooms is convinced there’s a problem, but that it’s the near-term harms, the AI ethics stuff, etc. The highest-status and highest-rank people in those rooms seem to be much more concerned about catastrophic harms.
This is a very weird set of selection effects. I’m not sure what to make of it, honestly.
Random psychologizing explanation that resonates most with me: Claiming to address big problems requires high status. A low-rank person is allowed to bring up minor issues, but they are not in a position to bring up big issues that might reflect on the status of many high-status people.
This is a pretty common phenomenon that I’ve observed. Many people react with strong social slap-down motions if you (for example) call into question whether the net effect of a whole social community or economic sector is negative, where the underlying cognitive reality seems similar to “you are not high status enough to bring forward this grievance”.
I think this is plausibly describing some folks!
But I also think there’s a separate piece—I observe, with pretty high odds that it isn’t just an act, that at least some people are trying to associate themselves with the near-term harms and AI ethics stuff because they think that is the higher-status stuff, despite direct obvious evidence that the highest-status people in the room disagree.
There are (at least) two models which could partially explain this:
1) The high-status/high-rank people have that status because they’re better at abstract and long-term thinking, and their role is more toward preventing catastrophe rather than nudging toward improvements. They leave the lesser concerns to the underlings, with the (sometimes correct) belief that it’ll come out OK without their involvement.
2) The high-status/high-rank people are rich and powerful enough to be somewhat insulated from most of the prosaic AI risks, while the average member can legitimately be hurt by such things. So everyone is just focusing on the things most likely to impact themselves.
edit: to clarify, these are two models that do NOT imply the obvious “smarter/more powerful people are correctly worried about the REAL threats, and the average person’s concerns are probably unimportant/uninformed”. It’s quite possible that this division doesn’t tell us much about the relative importance of those different risks.
Yup! I think those are potentially very plausible, and similar things were on my short list of possible explanations. I would not be at all shocked if those are the true reasons. I just don’t think I have anywhere near enough evidence yet to actually conclude that, so I’m just reporting the random observation for now :)