Great stuff.
But I don’t think anyone’s extrapolated volition would be to build their utopia in the real world. Post-ASI, virtual is strictly better. No one wants their utopia constrained by the laws of physics.
And it seems unlikely that anyone would choose to spend extended periods of time with pre-ASI humans rather than people made bespoke for them.
Also, it’s not clear to me that we will get a bargaining scenario. An aligned ASI could simply impose an equal apportioning of the compute budget. This depends on how AI progress plays out.
Here is an experiment that demonstrates the unlikelihood of one potential AI outcome.
The outcome shown to be unlikely:
Aligned ASI is achieved sometime in the next couple of decades and each person is apportioned a sizable amount of compute to do with as they wish.
The experiment:
I have made a precommitment that I will, conditional on the outcome described above occurring, simulate billions of lives for myself, each indistinguishable from the life I have lived so far. By “indistinguishable” I do not necessarily mean identical (which might be impossible or expensive). All that is necessary is that each has similar amounts of suffering, scale, detail, imminent AGI, etc. I’ll set up these simulations so that in each of these simulated lives I will be transported at 4:00 pm on Dec 11 ’24 to a virtual personal utopia. Having precommitted to simulating these worlds, I should now expect to be transported into a personal utopia in three minutes’ time if this future is likely. And if I am not transported into a personal utopia, I should conclude that this future is unlikely.
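To make the expected update explicit, here is a minimal sketch of the arithmetic, assuming (purely for illustration) a 50% prior on the outcome and a billion simulated lives:

```python
from fractions import Fraction

# Illustrative numbers only: a toy prior on the outcome and the number
# of indistinguishable simulated lives I precommit to running.
prior = Fraction(1, 2)  # P(aligned ASI + per-person compute apportionment)
n_sims = 10**9          # simulated lives, each transported at 4:00 pm

# Conditional on the outcome, there are n_sims + 1 observers with my
# exact experiences so far, of whom n_sims get transported at 4:00 pm.
p_transport_given_outcome = Fraction(n_sims, n_sims + 1)

# Credence, before 4:00 pm, that I am about to be transported:
p_transport = prior * p_transport_given_outcome
print(float(p_transport))  # ~0.5, i.e. roughly my prior on the outcome

# If 4:00 pm passes with no transport, Bayes' rule gives:
p_no_transport = prior * Fraction(1, n_sims + 1) + (1 - prior)
posterior = prior * Fraction(1, n_sims + 1) / p_no_transport
print(float(posterior))  # ~1e-9: the outcome becomes extremely unlikely
```

With numbers like these, the experiment is nearly decisive either way: transport at 4:00 pm would all but confirm the outcome, and no transport drives its probability down to roughly one in a billion.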
Let’s see what happens...
It’s 4:00 pm and I didn’t get transported into utopia.
So, this outcome is unlikely.
QED
Potential weak points
I do see a couple of potential weak points in the logic of this experiment. Firstly, it might be the case that I’ll have reason to simulate many indistinguishable lives in which I do not get transported to utopia, which would throw off the math (as the sketch below shows). But I can’t see why I’d choose to create simulations of myself in lives that are less than optimally enjoyable unless I had good reason to, so I don’t think that objection holds.[1]
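For concreteness, here is how such extra simulations would change the numbers (the counts are made up for illustration):

```python
from fractions import Fraction

# Illustrative counts only: besides the precommitted utopia simulations,
# suppose n_other indistinguishable lives exist in which I am NOT
# transported (e.g. simulations run for some unrelated purpose).
n_utopia = 10**9
n_other = 10**12  # a thousand non-utopia sims per utopia sim (made up)

# Conditional on the outcome, the chance that a randomly-located "me"
# gets transported at 4:00 pm falls from ~1 to:
p_transport_given_outcome = Fraction(n_utopia, n_utopia + n_other + 1)
print(float(p_transport_given_outcome))  # ~0.001
```

In that case, not being transported is almost no evidence against the outcome; the experiment only works if non-utopia simulations of me are rare relative to the precommitted ones.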
The other potential weak point is that perhaps I wouldn’t be willing to pay the opportunity cost of billions of years of personal utopia. Although billions of years of simulation is just a tiny proportion of my compute budget, it’s still billions of years that could otherwise have been spent in perfect virtual utopia. I think this is potentially a serious issue with the argument, although I will note that I don’t actually have to simulate an entire life for the experiment to work, just the few minutes around 4:00 pm on Dec 11 ’24, minutes which were vaguely enjoyable. To address this objection, the experiment could be carried out while euphoric (since the opportunity cost would then be lower).
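Some rough arithmetic on the shortened version (all figures illustrative):

```python
# Rough, illustrative arithmetic for simulating only a short window
# around 4:00 pm instead of a full life.
full_life_minutes = 40 * 365 * 24 * 60  # ~40 subjective years per sim
window_minutes = 5                      # just the minutes around 4:00 pm
n_sims = 10**9

savings_factor = full_life_minutes / window_minutes
total_cost_years = n_sims * window_minutes / (365 * 24 * 60)
print(f"compute cut by a factor of ~{savings_factor:,.0f}")           # ~4,200,000
print(f"total opportunity cost: ~{total_cost_years:,.0f} subj. years")  # ~9,500
```

That turns billions of subjective years of forgone utopia into roughly ten thousand, which seems a much easier price to pay.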
Perhaps, as a prank response to this post, someone could use some of their compute budget to simulate lives in which I don’t get transported to utopia. But I think that there would be restrictions in place against running other people as anything other than p-zombies.