Until recently, people with a p(doom) of, say, 10% have been natural allies of people with a p(doom) of >80%. But the regulation that the latter group thinks is sufficient to avoid x-risk with high confidence has, on my worldview, a significant chance of either causing x-risk from totalitarianism, or else causing x-risk via governments being worse at alignment than companies would have been.
I agree. Moreover, a p(doom) of 10% vs. 80% means a lot for people like me who think the current generation of humans have substantial moral value (i.e., people who aren’t fully committed to longtermism).
In the p(doom)=10% case, burdensome regulations that appreciably delay AI, or greatly reduce the impact of AI, have a large chance of causing the premature deaths of people who currently exist, including our family and friends. This is really bad if you care significantly about people who currently exist.
This consideration is sometimes neglected in these discussions, perhaps because it’s seen as a form of selfish partiality that we should toss aside. But in my opinion, morality is allowed to be partial. Morality is whatever we want it to be. And I don’t have a strong urge to sacrifice everyone I know and love for the sake of slightly increasing (in my view) the chance of the human species being preserved.
(The additional considerations of potential totalitarianism, public choice arguments, and the fact that I think unaligned AIs will probably have moral value, make me quite averse to very strong regulatory controls on AI.)
So, it sounds like you’d be in favor of a 1-year pause or slowdown then, but not a 10-year?
(Also, I object to your side-swipe at longtermism. According to Wikipedia, “Longtermism is the ethical view that positively influencing the long-term future is a key moral priority of our time.” “A key moral priority” doesn’t mean “the only thing that has substantial moral value.” If you had instead dunked on classic utilitarianism, I would have agreed.)
So, it sounds like you’d be in favor of a 1-year pause or slowdown then, but not a 10-year?
That depends on the benefits that we get from a 1-year pause. I’d be open to the policy, but I’m not currently convinced that the benefits would be large enough to justify the costs.
Also, I object to your side-swipe at longtermism
I didn’t side-swipe at longtermism, or try to dunk on it. I think longtermism is a decent philosophy, and I consider myself a longtermist in the dictionary sense as you quoted. I was simply talking about people who aren’t “fully committed” to the (strong) version of the philosophy.
OK, thanks for clarifying.
Personally I think a 1-year pause right around the time of AGI would give us something like 50% of the benefits of a 10-year pause. That’s just an initial guess, not super stable. And quantitatively I think it would improve overall chances of AGI going well by double-digit percentage points at least. Such that it makes sense to do a 1-year pause even for the sake of an elderly relative avoiding death from cancer, not to mention all the younger people alive today.
And quantitatively I think it would improve overall chances of AGI going well by double-digit percentage points at least.
Makes sense. By comparison, my own unconditional estimate of p(doom) is not much higher than 10%, and so it’s hard on my view for any intervention to have a double-digit percentage point effect.
The crude mortality rate before the pandemic was about 0.7%. If we use that number to estimate the direct cost of a 1-year pause, then this is the bar that we’d need to clear for a pause to be justified. I find it plausible that this bar could be met, but at the same time, I am also pretty skeptical of the mechanisms various people have given for how a pause will help with AI safety.
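To make the bar concrete, here is a back-of-the-envelope sketch of the comparison being described. The 0.7% figure is the crude mortality rate mentioned above; the 2-percentage-point risk reduction is a made-up placeholder, not either commenter’s actual estimate.

```python
# Back-of-the-envelope comparison of a 1-year pause's costs and benefits,
# measured in expected fraction of currently-living people who die.
# All inputs are illustrative assumptions.

crude_mortality = 0.007  # ~0.7%/year pre-pandemic crude death rate
risk_reduction = 0.02    # assumed absolute reduction in p(doom) from a 1-year pause

# Direct cost: one extra year of ordinary mortality before (assumed)
# transformative AI-driven benefits arrive.
cost = crude_mortality

# Benefit: fewer expected deaths of currently-living people, since in the
# doom scenario everyone alive dies.
benefit = risk_reduction

print(f"cost = {cost:.3f}, benefit = {benefit:.3f}")
print("pause clears the bar" if benefit > cost else "pause fails the bar")
```

On these (assumed) numbers the pause clears the bar; with a risk reduction below 0.7 percentage points it would not.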
I agree that 0.7% is the number to beat for people who mostly focus on helping present humans and who don’t take acausal or simulation argument stuff or cryonics seriously. I think that even if I were much more optimistic about AI alignment, I’d still think that number would quite plausibly be beaten by a 1-year pause that begins right around the time of AGI.
What are the mechanisms people have given and why are you skeptical of them?
(Surely cryonics doesn’t matter given a realistic action space? Usage of cryonics is extremely rare and I don’t think there are plausible (cheap) mechanisms to increase uptake to >1% of the population. I agree that simulation arguments and similar considerations maybe imply that “helping current humans” is either incoherent or unimportant.)

Good point, I guess I was thinking in that case about people who care a bunch about a smaller group of humans, e.g. their family and friends.
Somewhat of a nitpick, but the relevant number would be p(doom | strong AGI being built) (maybe contrasted with p(utopia | strong AGI)), not overall p(doom).
May I strongly recommend that you try to become a Dark Lord instead?
I mean it literally. Stage some small bloody civil war with an expected body count of several million, become dictator, and provide everyone free insurance coverage for cryonics; it will surely be more ethical than a 10% chance of killing literally everyone, from the perspective of most ethical systems I know.
I don’t think staging a civil war is generally a good way of saving lives. Moreover, ordinary aging has about a 100% chance of “killing literally everyone” prematurely, so it’s unclear to me what moral distinction you’re trying to make in your comment. It’s possible you think that:
1. Death from aging is not as bad as death from AI because aging is natural whereas AI is artificial.
2. Death from aging is not as bad as death from AI because human civilization would continue if everyone dies from aging, whereas it would not continue if AI kills everyone.
In the case of (1) I’m not sure I share the intuition. Being forced to die from old age seems, if anything, worse than being forced to die from AI, since it is long and drawn-out, and presumably more painful than death from AI. You might also think about this dilemma in terms of act vs. omission, but I am not convinced there’s a clear asymmetry here.
In the case of (2), whether AI takeover is worse depends on how bad you think an “AI civilization” would be in the absence of humans. I recently wrote a post about some reasons to think that it wouldn’t be much worse than a human civilization.
In any case, I think this is simply a comparison between “everyone literally dies” vs. “everyone might literally die but in a different way”. So I don’t think it’s clear that pushing for one over the other makes someone a “Dark Lord”, in the morally relevant sense, compared to the alternative.
I think the perspective that you’re missing regarding (2) is that by building AGI one is taking the chance of non-consensually killing vast numbers of people and their children for some chance of improving one’s own longevity.
Even if one thinks it’s a better deal for them, a key point is that you are making the decision for them by unilaterally building AGI. So in that sense it is quite reasonable to see it as an “evil” action to work towards that outcome.
non-consensually killing vast numbers of people and their children for some chance of improving one’s own longevity.
I think this misrepresents the scenario since AGI presumably won’t just improve my own longevity: it will presumably improve most people’s longevity (assuming it does that at all), in addition to all the other benefits that AGI would provide the world. Also, both potential decisions are “unilateral”: if some group forcibly stops AGI development, they’re causing everyone else to non-consensually die from old age, by assumption.
I understand you have the intuition that there’s an important asymmetry here. However, even if that’s true, I think it’s important to strive to be accurate when describing the moral choice here.
I agree that potentially the benefits can go to everyone. The point is that as the person pursuing AGI you are making the choice for everyone else.
The asymmetry is that if you do something that creates risk for everyone else, I believe that does single you out as an aggressor? While conversely, enforcing norms that prevent such risky behavior seems justified. The fact that by default people are mortal is tragic, but doesn’t have much bearing here. (You’d still be free to pursue life-extension technology in other ways, perhaps including limited AI tools).
Ideally, of course, there’d be some sort of democratic process here that lets people in aggregate make informed (!) choices. In the real world, it’s unclear what a good solution would be. What we have right now is the big labs creating facts on the ground that society has trouble catching up with, which I think many people are reasonably uncomfortable with.