This title doesn’t reflect the article. Humanity is AMAZING at coordination. I use dozens of items every day that I could not possibly make by myself, and I have literally never spent a day foraging for sustenance. The famous essay _I, Pencil_ (https://en.wikipedia.org/wiki/I,_Pencil) describes a masterpiece of coordination in a product so mundane that most people don’t even notice it. No other species comes close.
What humanity is bad at is scaling up risk assessment. We just don’t have the mechanisms to give proper weight to x-risks. That’s not a coordination failure; that’s just scope insensitivity.
Ok, but the relevant standard might not be “do they coordinate more than mice?”, but some absolute threshold like “do they wear masks during a respiratory pandemic?” or “do they not fabricate scientific data relevant to what medications to deploy?”.
> What humanity is bad at is scaling up risk assessment. We just don’t have the mechanisms to give proper weight to x-risks. That’s not a coordination failure; that’s just scope insensitivity.
Classic ‘tragedies of the commons’ happen not because of improper risk assessment, but because of concentrated benefits and distributed costs. Humans aren’t cooperate-bot; they have sophisticated mechanisms for coordinating on whether or not to coordinate. And so it is interesting to ask: what will those mechanisms say when it comes to things related to x-risks, in the absence of meaningful disagreement on risks?
And then, when we add in disagreeing assessments, how does the picture look?
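To make the ‘concentrated benefits and distributed costs’ structure concrete, here is a minimal numeric sketch (the numbers are made up for illustration): an actor who overuses the commons pockets the whole benefit but bears only a 1/N share of the harm, so overusing is individually rational even when it is collectively ruinous, with no risk-assessment error anywhere.

```python
# Toy model of a 'tragedy of the commons' payoff structure (illustrative numbers only).
N = 100          # actors sharing the commons
benefit = 10.0   # private gain to an actor who overuses the commons
harm = 300.0     # total damage that overuse inflicts, spread evenly over everyone

payoff_to_defector = benefit - harm / N   # 10 - 3 = +7: overusing still pays for the actor
net_social_change = benefit - harm        # 10 - 300 = -290: the group is worse off overall

print(f"Defector's own payoff change: {payoff_to_defector:+.1f}")
print(f"Group's total payoff change:  {net_social_change:+.1f}")
```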
> Ok, but the relevant standard might not be “do they coordinate more than mice?”, but some absolute threshold like “do they wear masks during a respiratory pandemic?” or “do they not fabricate scientific data relevant to what medications to deploy?”.
There certainly _are_ cases where cooperation is limited due to distributed benefits and concentrated costs—humans aren’t perfect, just very good. Humans do at least as well as mice on those topics, and far better on some other topics.
But x-risk isn’t a concentrated cost, it’s a distributed infinite cost. Its probability and timeframe are disputed, but that’s about risk assessment and scope insensitivity, not about imposing costs on others rather than on oneself.
> But x-risk isn’t a concentrated cost, it’s a distributed infinite cost.
This depends on how ‘altruistic’ your values are. For some people, the total value to them of all other humans (ever) is less than the value to them of their own life, and so something that risks blowing up the Earth registers in their decision-making much the same as something that risks blowing up only themselves. And sometimes, one or both of those values are negative. [As a smaller example, consider the pilots who commit suicide by crashing their plane into the ground, at least once with 150 passengers in the back!]
That said, I made a simple transposition error; it’s supposed to be “concentrated benefits and distributed costs.”
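A minimal numeric sketch of the point about ‘altruistic’ values, with made-up numbers: if the value an actor places on everyone else combined is small relative to the value of their own life, an Earth-destroying gamble looks to them almost exactly like a personally fatal one.

```python
# Toy comparison (illustrative numbers only): how an extinction-level gamble looks
# to an actor, depending on how much they value other people's lives.
p_catastrophe = 0.01      # assumed chance the risky action destroys everything
value_own_life = 1.0      # value the actor places on their own life
value_per_other = 1e-12   # value placed on each other person (a low-altruism actor)
n_others = 8e9            # everyone else

cost_counting_only_self = p_catastrophe * value_own_life
cost_counting_everyone = p_catastrophe * (value_own_life + value_per_other * n_others)

print(f"Expected cost, counting only oneself: {cost_counting_only_self:.4f}")
print(f"Expected cost, counting everyone:     {cost_counting_everyone:.4f}")
# With value_per_other this small, destroying the Earth feels barely worse than
# dying alone, so the 'distributed infinite cost' carries little extra weight.
```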