A random observation from a think tank event last night in DC: the average person in those rooms is convinced there's a problem, but thinks it's the near-term harms, the AI ethics stuff, etc. The highest-status and highest-rank people in those rooms seem to be much more concerned about catastrophic harms.
This is a very weird set of selection effects. I’m not sure what to make of it, honestly.
Random psychologizing explanation that resonates most with me: claiming to address big problems requires high status. A low-rank person is allowed to bring up minor issues, but they are not in a position to bring up big issues that might reflect on the status of many high-status people.
This is a pretty common phenomenon that I've observed. Many people react with strong social slap-down motions if you (for example) call into question whether the net effect of a whole social community or economic sector is negative, where the underlying cognitive reality seems to be something like "you are not high-status enough to bring forward this grievance".
I think this is plausibly describing some folks!
But I also think there’s a separate piece—I observe, with pretty high odds that it isn’t just an act, that at least some people are trying to associate themselves with the near-term harms and AI ethics stuff because they think that is the higher-status stuff, despite direct obvious evidence that the highest-status people in the room disagree.
There are (at least) two models which could partially explain this:
1) The high-status/high-rank people have that status because they’re better at abstract and long-term thinking, and their role is more toward preventing catastrophe rather than nudging toward improvements. They leave the lesser concerns to the underlings, with the (sometimes correct) belief that it’ll come out OK without their involvement.
2) The high-status/high-rank people are rich and powerful enough to be somewhat insulated from most of the prosaic AI risks, while the average member can legitimately be hurt by such things. So everyone is just focusing on the things most likely to impact themselves.
edit: to clarify, these are two models that do NOT imply the obvious “smarter/more powerful people are correctly worried about the REAL threats, and the average person’s concerns are probably unimportant/uninformed”. It’s quite possible that this division doesn’t tell us much about the relative importance of those different risks.
Yup! I think those are potentially very plausible, and similar things were on my short list of possible explanations. I would not be at all shocked if those are the true reasons. I just don't think I have anywhere near enough evidence yet to actually conclude that, so I'm just reporting the random observation for now :)
Does “highest status” here mean highest expertise in a domain, as generally agreed by people in that domain, and/or education level, and/or privileged schools, and/or being from more economically powerful countries, etc.? It is also good to note that sometimes the “status” is dynamic, and may or may not have any causal bearing on their decision-making or choice of priorities.
One scenario is that “higher status” might correlate with better resources for achieving that status, and a possible result is that they haven’t experienced, or are not subject to, many of the near-term harms. In other words, it is not really about a difference in intelligence between “average” and “high-status” people, but more about what kind of world they are exposed to.
I do think it is good to hear all these different perspectives and to stay curious/open-minded.
edit: I just saw that Dragon nicely listed two potential reasons, with scenario 2 mentioning something similar to my comment here. But something slightly more specific in my thinking is that these choices made by “average” and “high-status” people may or may not be conscious; they may instead stem from their life experience and the world they are exposed to.
Does “highest status” here mean highest expertise in a domain, as generally agreed by people in that domain, and/or education level, and/or privileged schools, and/or being from more economically powerful countries, etc.?
I mean, functionally all of those things. (Well, minus the country dynamic. Everyone at this event I talked to was US, UK, or Canadian, which is all sorta one team for purposes of status dynamics at that event.)