“For one thing, if we use that logic, then everything distracts from everything. You could equally well say that climate change is a distraction from the obesity epidemic, and the obesity epidemic is a distraction from the January 6th attack, and so on forever. In reality, this is silly—there is more than one problem in the world! For my part, if someone tells me they’re working on nuclear disarmament, or civil society, or whatever, my immediate snap reaction is not to say “well that’s stupid, you should be working on AI x-risk instead”, rather it’s to say “Thank you for working to build a better future. Tell me more!””
Disagree with this point—cause prioritization is super important. For a radical example: imagine the government spending billions to rescue one man from Mars while neglecting far more cost-effective causes. Bad actors use the trick of focusing attention on unimportant but controversial issues to keep everyone from noticing how they are being routinely exploited. Demanding sane prioritization of public attention is extremely important and valid. The problem is that we as a society don’t have norms and common knowledge around it (and even have memes specifically against it, like whataboutism), but the fact that it isn’t being done consistently doesn’t mean that we shouldn’t do it.
I once wrote a longer and more nuanced version that addresses this (copied from footnote 1 of my Response to Blake Richards post last year):
One could object that I’m being a bit glib here. Tradeoffs between cause areas do exist. If someone decides to donate 10% of their income to charity, and they spend it all on climate change, then they have nothing left for heart disease, and if they spend it all on heart disease, then they have nothing left for climate change. Likewise, if someone devotes their career to reducing the risk of nuclear war, then they can’t also devote their career to reducing the risk of catastrophic pandemics and vice-versa, and so on.

So tradeoffs exist, and decisions have to be made. How? Well, for example, you could just try to make the world a better place in whatever way seems most immediately obvious and emotionally compelling to you. Lots of people do that, and I don’t fault them for it. But if you want to make the decision in a principled, other-centered way, then you need to dive into the field of Cause Prioritization, where you (for example) try to guess how many expected QALYs could be saved by various possible things you can do with your life / career / money, and pick one at or near the top of the list. Cause Prioritization involves (among other things) a horrific minefield of quantifying various awfully-hard-to-quantify things like “what’s my best-guess probability distribution for when AGI will arrive?”, or “exactly how many suffering chickens are equivalently bad to one suffering human?”, or “how do we weigh better governance in Spain against preventing malaria deaths?”.

Well anyway, I’d be surprised if Blake has arrived at his take here via one of these difficult and fraught Cause-Prioritization-type analyses. And I note that there are people out there who do try to do Cause Prioritization, and AFAICT they very often wind up putting AGI Safety right near the top of their lists.
I wonder whether Blake’s intuitions point in a different direction than Cause Prioritization analyses because of scope neglect? As an example of what I’m referring to: suppose (for the sake of argument) that out-of-control AGI accidents have a 10% chance of causing 8 billion deaths in the next 20 years, whereas dumb AI has a 100% chance of exacerbating income inequality and eroding democratic norms in the next 1 year. A scope-sensitive, risk-neutral Cause Prioritization analysis would suggest prioritizing the former, but the latter might feel intuitively more panic-inducing.
Then maybe Blake would respond: “No you nitwit, it’s not that I have scope-neglect, it’s that your hypothetical is completely bonkers. Out-of-control AGI accidents do not have a 10% chance of causing 8 billion deaths in the next 20 years; instead, they have a 1-in-a-gazillion chance of causing 8 billion deaths in the next 20 years.” And then I would respond: “Bingo! That’s the crux of our disagreement! That’s the thing we need to hash out—is it more like 10% or 1-in-a-gazillion?” And this question is unrelated to the topic of bad actors misusing dumb AI.
[For the record: The 10% figure was just an example. For my part, if you force me to pick a number, my best guess would be much higher than 10%.]
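To make the scope-sensitivity point concrete, here’s a toy sketch of that expected-value comparison. This is a minimal illustration, not anyone’s actual model: the probabilities are the ones from the hypothetical above, and the QALY figures (especially the placeholder for the dumb-AI harms) are made-up assumptions purely for demonstration.

```python
# Toy expected-value comparison for the hypothetical above.
# All numbers are illustrative; the QALY figures in particular are
# made-up placeholders, not estimates anyone has defended.

def expected_qalys_lost(probability: float, qaly_loss: float) -> float:
    """Risk-neutral expected QALYs lost from a single scenario."""
    return probability * qaly_loss

# Scenario A: out-of-control AGI accident kills 8 billion people.
# Rough QALY loss: 8e9 people x ~40 remaining life-years each (assumption).
agi_qalys = 8e9 * 40

# Scenario B: dumb AI exacerbates inequality / erodes democratic norms.
# Placeholder: call it 50 million QALYs lost (pure assumption).
dumb_ai_qalys = 5e7

for p_agi in (0.10, 1e-12):  # the "10%" vs "1-in-a-gazillion" crux
    ev_agi = expected_qalys_lost(p_agi, agi_qalys)
    ev_dumb = expected_qalys_lost(1.0, dumb_ai_qalys)
    winner = "AGI x-risk" if ev_agi > ev_dumb else "dumb-AI harms"
    print(f"P(AGI catastrophe)={p_agi:g}: "
          f"E[QALYs lost] AGI={ev_agi:.3g}, dumb AI={ev_dumb:.3g} "
          f"-> prioritize {winner}")
```

The point is just that under risk-neutral expected-value reasoning, the ranking flips entirely depending on which probability you plug in, which is exactly the crux identified above.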
Why didn’t I put something like the above into this post? Because my sense was that this is a digression in this context—it’s not cutting to the heart of where Melanie was coming from. Like, I’m in favor of cause prioritization as much as anyone, but I didn’t have the impression that Melanie has signed onto the project of Cause Prioritization and is trying in good faith to push that project forward.
I was thinking about it more last night and am now thinking it’s kinda political. In short:

Advocating for a cause also implicitly raises the status of the group of people / movements associated with that cause, and that’s zero-sum just as much as philanthropic dollars are.
But whereas practically nobody cares about what it means for philanthropic dollars to be zero-sum (except us nerds who care about Cause Prioritization), everybody sure cares a whole lot about what it means for inter-group status competition to be zero-sum.
But that’s all kinda going on in the background, usually covered up by rationalizations, I think. Not sure what to do about it. :(