AFAIK, Eliezer Yudkowsky is one of the proponents of Everett’s Many-Worlds interpretation (MWI) of QM. As such, he should combine the small, non-zero probability that everything goes well with AGI with this MWI thing. So there will be some branches where all goes well, even if the majority of them get sterilized. Who cares about those! Thanks to Everett, everything will look just fine to the survivors.
I see this as a contradiction in his belief system, not necessarily a sign that he is wrong about AGI.
I think this is a bad way to think about probabilities under the Everett interpretation, for two reasons.
First, it’s a fully general argument against caring about the possibility of your own death. If this were a good way of thinking, then if you offer me $1 to play Russian roulette with bullets in 5 of the 6 chambers then I should take it—because the only branches where I continue to exist are ones where I didn’t get killed. That’s obviously stupid: it cannot possibly be unreasonable to care whether or not one dies. If it were a necessary consequence of the Everett interpretation, then I might say “OK, this means that one can’t coherently accept the Everett interpretation” or “hmm, seems like I have to completely rethink my preferences”, but in fact it is not a necessary consequence of the Everett interpretation.
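To spell that out with numbers (a minimal sketch with made-up utilities, not anything from the original discussion), here is what the two decision rules say about that gamble: weighting every branch by its probability, versus only counting the branches where you survive.

```python
# Illustrative only: made-up utilities for the Russian roulette offer
# (5 of 6 chambers loaded, $1 for playing), under the Everett picture.

p_die, p_live = 5 / 6, 1 / 6

u_decline = 0.0              # walk away: keep living, no dollar
u_live_with_dollar = 0.001   # survive the game and gain $1
u_die = -1_000_000.0         # the branches where the bullet fires

# Rule A: weight every branch by its probability (ordinary expected utility).
eu_play_all_branches = p_die * u_die + p_live * u_live_with_dollar

# Rule B: only count branches where "I" still exist, i.e. condition on survival.
eu_play_survivors_only = u_live_with_dollar

print(f"Rule A (all branches):   play = {eu_play_all_branches:,.3f}, decline = {u_decline}")
print(f"Rule B (survivors only): play = {eu_play_survivors_only:,.3f}, decline = {u_decline}")
# Rule A says the gamble is catastrophic; Rule B says it is a free dollar.
```

Rule B is exactly the “only the survivors matter” move in the quoted comment; Rule A is the one that tells you to refuse.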
Second, it ignores the possibility of branches where we survive, but horribly. In that Russian roulette game, there are cases where I do get shot through the head but survive with terrible brain damage. In the unfriendly-AI scenarios, there are cases where the human race survives but unhappily. In either case the probability is small, but maybe not so small as a fraction of the survival cases.
I think the only reasonable attitude to one’s future branches, if one accepts the Everett interpretation, is to care about all those branches, including those where one doesn’t survive, with weight corresponding to |psi|^2. That is, to treat “quantum probabilities” the same way as “ordinary probabilities”. (This attitude seems perfectly reasonable to me conditional on Everett.)
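Spelled out in notation (mine, not anything from the thread), that attitude is just ordinary expected utility with Born weights standing in for probabilities:

```latex
% Evaluate an action a by summing the utility U of its outcome o_i(a)
% in each Everett branch i, weighted by the Born weight |psi_i|^2:
\[
  \mathrm{EU}(a) \;=\; \sum_i \lvert \psi_i \rvert^2 \, U\bigl(o_i(a)\bigr)
\]
```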
The alignment problem still has to get solved somehow in those branches, which almost all merely have slightly different versions of us doing mostly the same sorts of things.
What might be different in these branches is that world-ending AGIs have anomalously bad luck in getting started. But the vast majority of anthropic weight, even after selecting for winning branches, will be on branches that are pretty ordinary, and where the alignment problem still had to get solved the hard way, by people who were basically just luckier versions of us.
So even if we decide to stake our hope on those possibilities, it’s pretty much the same as staking hope on luckier versions of ourselves who still did the hard work. It doesn’t really change anything for us here and now; we still need to do the same sorts of things. It all adds up to normality.
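As a toy illustration of why the ordinary branches dominate (every number below is invented purely to show the conditional-probability structure): even if freak-luck branches survive at a far higher rate, they start with so little prior weight that conditioning on survival still leaves nearly all of the weight on branches where the problem was solved the hard way.

```python
# Toy numbers, purely illustrative of the conditional-probability structure
# (none of these probabilities come from the original discussion).

p_ordinary   = 0.9999   # prior weight on branches that look basically like ours
p_freak_luck = 0.0001   # prior weight on branches where dangerous AGIs just never get going

p_survive_given_ordinary = 0.01   # survival because alignment got solved the hard way
p_survive_given_freak    = 0.90   # survival mostly by anomalous luck

p_survive = (p_ordinary * p_survive_given_ordinary
             + p_freak_luck * p_survive_given_freak)

# Among surviving branches, what share solved the problem the hard way?
share_hard_way = p_ordinary * p_survive_given_ordinary / p_survive
print(f"Share of surviving branches that did the hard work: {share_hard_way:.1%}")
# With these made-up numbers, roughly 99% of the surviving anthropic weight
# sits on ordinary branches where luckier versions of us did the work.
```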
Another consideration I thought of:
If anthropic stuff actually works out like this, then this is great news for values over experiences, which will still be about as satisfiable as they were before, despite our impending doom. But values over world-states will not be at all consoled.
I suspect human values are a complicated mix of the two. Things like male libido sit far toward the experience end (in the ancestral environment, each additional experience of sexual pleasure corresponded to a roughly linear increase in reproductive fitness), things like maternal love sit far toward the world-state end (it needs to actually track the well-being of the children, even in cases where no further experiences are expected), and most things lie somewhere in the middle.