“I believe that AI takeover has a high probability of killing billions and should be strongly avoided, and would be a serious and irreversible decision by our society that’s likely to be a mistake even if it doesn’t lead to billions of deaths.”
I mean, this feels like it is of completely the wrong magnitude. “Killing billions” is just vastly vastly vastly less bad than “completely eradicating humanity’s future”, which is actually what is going on.
Like, my attitude towards AI and x-risk would be hugely different if the right abstraction were "a few billion people die". Like, OK, that's like a few decades of population growth. Basically nothing in the big picture. And I think this is also true on the vast majority of common-sense ethical views. People care about the future of humanity. "Saving the world" is hugely more important than preventing the marginal atrocity. Outside of EA I have never actually met a welfarist who only cares about present humans. People of course think we are supposed to be good stewards of humanity's future, especially if you select on the people who are actually involved in global-scale decisions.
Normal people who are not bought into super crazy computationalist stuff understand that humanity’s extinction is much worse than just a few billion people dying, and the thing that is happening is much more like extinction than it is like a few billion people dying.
(To be clear, like habryka, I mostly care about the long-term future and scope-sensitive resource use.)
Sure, we can amend to:
“I believe that AI takeover would eliminate humanity’s control over its future, has a high probability of killing billions, and should be strongly avoided.”
We could also say something like “AI takeover seems similar to takeover by hostile aliens with potentially unrecognizable values. It would eliminate humanity’s control over its future and has a high probability of killing billions.”
And I think this is also true on the vast majority of common-sense ethical views. People care about the future of humanity. "Saving the world" is hugely more important than preventing the marginal atrocity. Outside of EA I have never actually met a welfarist who only cares about present humans. People of course think we are supposed to be good stewards of humanity's future, especially if you select on the people who are actually involved in global-scale decisions.
Hmmm, I agree with this as stated, but it's not clear to me that this view is scope sensitive. As in, suppose that the AI will eventually leave humans in control of Earth and the solar system. Do people typically think this is extremely bad? I don't think so, though I'm not sure.
And, I think trading for humans to eventually control the solar system is pretty doable. (Most of the trade cost is in preventing the earlier slaughter and violence that would have been useful for takeover or for avoiding delay.)
At a more basic level, I think the situation is just actually much more confusing than human extinction in a bunch of ways.
(Separately, on my views, misaligned AI takeover seems worse than human extinction from (e.g.) biorisk. This is because primates or other closely related species seem very likely to re-evolve into an intelligent civilization, and I feel better about that civilization than about AIs.)