The relevant intuition for the second point there is to imagine you somehow found out that there was only one ground-truth base reality, only one real world, not a multiverse or a Tegmark Level 4 universe or whatever. And you’re a civilization that has successfully dealt with x-risks, unilateralist action, and information vulnerabilities, to the point where you have the sort of unified control to make a top-down decision about whether to create massive numbers of simulated civilizations. And you’re wondering whether to make a billion simulations.
And suddenly you’re faced with the prospect of building something that will make it so you no longer know whether you’re in the base universe. Someday gravity might get turned off because that’s what your overlords wanted. If you pull the trigger, you’ll never be sure that you weren’t actually one of the simulated ones, because there are suddenly so many simulations.
And so you don’t pull the trigger, and you remain confident that you’re in the base universe.
This, plus some assumptions about all civilizations that have the capacity to do massive simulations also being wise enough to overcome x-risk and coordination problems so they can actually make a top-down decision here, plus some TDT magic whereby all such civilizations in the various multiverses and Tegmark levels can all coordinate in logical time to pick the same decision… leaves there being no unlawful simulations.
My crux here is that I don’t feel much uncertainty about whether or not our overlords will start interacting with us (they won’t and I really don’t expect that to change), and I’m trying to backchain from that to find reasons why it makes sense.
My basic argument is that all civilizations capable of making simulations that aren’t true histories (but instead have lots of weird stuff happen in them) will be philosophically sophisticated enough to collectively not do so, and so you can always expect to be in a true history and not have weird sh*t happen to you like in The Sims. The main counterargument is to show that there will be lots of civilizations with the power to do this but without the wisdom to refrain. Two key examples come to mind:
1. We build an AGI singleton that lacks important kinds of philosophical maturity, and so it makes lots of simulations that ruin the anthropic situation for everyone else.
2. Civilizations somewhere around our level reach a point where they can create massive numbers of simulations, but haven’t yet created existential risks like AGI. Even though you might think our civilization is pretty close to AGI, I can imagine alternative civilizations that aren’t, just as I can imagine alternative civilizations that are really close to making masses of ems but aren’t close to AGI. This feels like a pretty empirical question about whether such civilizations are possible, and whether they can have these kinds of resources without causing an existential catastrophe / building a singleton AGI.
Why appeal to philosophical sophistication rather than lack of motivation? Humans given the power to make ancestor-simulations would create lots of interventionist sims (as is demonstrated by the popularity of games like The Sims), but if the vast hypermajority of ancestor-simulations are run by unaligned AIs doing their analogue of history research, that could “drown out” the tiny minority of interventionist simulations.
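A rough back-of-the-envelope version of that “drowning out” point, with entirely made-up counts (both numbers below are illustrative assumptions, not estimates):

```python
# Toy illustration of the "drowning out" claim; both counts are made up.
interventionist_sims = 10 ** 6    # assumed number of Sims-style interventionist simulations
history_research_sims = 10 ** 15  # assumed number of lawful history-research simulations

total_sims = interventionist_sims + history_research_sims
p_interventionist_given_simulated = interventionist_sims / total_sims
print(p_interventionist_given_simulated)  # ~1e-9, i.e. negligible
```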
That’s interesting. I don’t feel comfortable with that argument; it makes it feel too much like random chance whether we should expect ourselves to be in an interventionist universe, whereas I feel like I should be able to find strong reasons to expect not to be in one.
Alternatively, “lawful universe” has lower Kolmogorov complexity than “lawful universe plus simulator intervention” and therefore gets exponentially more measure under the universal prior?? (See also “Infinite universes and Corbinian otaku” and “The Finale of the Ultimate Meta Mega Crossover”.)
Now that’s fun. I need to figure out some more stuff about measure; I don’t quite get why some universes should be weighted more than others. But I think that sort of argument is probably a mistake: even if the lawful universes get more weighting for some reason, unless you also have reason to think that they don’t make simulations, there are still loads of simulations within each of their lawful universes, setting the balance in favour of simulation again.
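To make that trade-off concrete, here is a toy sketch under the (assumed) framing that “plus simulator intervention” costs some extra description bits, while each lawful base universe runs an enormous number of simulations; every number is made up for illustration:

```python
# Toy numbers for the measure-vs-multiplicity trade-off; nothing here is an estimate.
extra_bits = 40                      # assumed extra description length for "plus simulator intervention"
penalty = 2.0 ** -extra_bits         # ~9.1e-13 relative weight per simulated/intervened copy

sims_per_lawful_universe = 10 ** 30  # assumed number of simulations run inside each lawful base universe

# Rough odds of being one of the simulated copies rather than the base-universe original:
odds_simulated_to_base = sims_per_lawful_universe * penalty
print(odds_simulated_to_base)        # ~9.1e17 : 1, so sheer multiplicity can swamp the complexity penalty
```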
One big reason why it makes sense is that the simulation is designed for the purpose of accurately representing reality.
Another big reason why (a version of it) makes sense is that the simulation is designed for the purpose of inducing anthropic uncertainty in someone at some later time in the simulation. For example, if the point of the simulation is to make our AGI worry that it is in a simulation, and to manipulate it via probable environment hacking, then the simulation will be accurate and lawful (i.e. un-tampered-with) until AGI is created.
I think “polluting the lake” by increasing the general likelihood of you (and anyone else) being in a simulation is indeed something that some agents might not want to do, but (a) it’s a collective action problem, (b) plenty of agents won’t mind it that much, and (c) there are good reasons to do it even if it has costs. I admit I am a bit confused about this though, so thank you for bringing it up; I will think about it more in the coming months.
Ugh, anthropic warfare, feels so ugly and scary. I hope we never face that sh*t.