Ancestor Simulations for Fun and Profit
A passing thought: “… it’s beneath my dignity as a human being to be scared of anything that isn’t smarter than I am” (-- HJPEV) likely applies equally well to superintelligences. Similarly, “It really made you appreciate what millions of years of hominids trying to outwit each other—an evolutionary arms race without limit—had led to in the way of increased mental capacity.” (-- ditto) suggests that one of the stronger spurs driving superintelligences to become as super-intelligent as possible could very well be the competition as they try to outwit each other.
Thus, instead of ancestor simulations being run simply out of historical curiosity, a larger portion of such simulations may arise as one superintelligence tries to figure out another by working out how its competitor arose in the first place. This casts a somewhat different light on how such simulations would be built and treated than the usual picture of university researchers or over-powered child-gods playing Civilization-3^^^3.
* Assume for a moment that you’re in the original, real (to whatever degree that word has meaning) universe, and you’re considering the vast numbers of copies of yourself that are going to be instantiated over future eons. Is there anything the original you can do, think, or be that could improve your future copies’ lives? E.g., is there some pre-commitment you could make, privately or publicly?
* Assume for a moment that you’re in one of the simulated universes. Is there anything you can do that would make your subjective experience any different from what your original experienced?
* Assume for a moment that you’re a superintelligence, or at least a proto-superintelligence, considering running something that includes an ancestor simulation. Is there anything the original people, or their simulated versions, could do or could have done that would change your mind about how to treat the simulated people?
* Assume for a moment that you’re in one of the simulated universes… and, due to battle damage to a superintelligence, you are accidentally given root access and control over your whole universe. Taking into account Reedspacer’s Lower Bound, and assuming an upper bound of not being able to noticeably affect the super-battle, what would you do with your universe?
I suspect that an AI fighting a war against another AI is unlikely to be operating leisurely enough to take the time to simulate my breakfast in detail, even if I do also happen to be reading LW.
Maybe you folks closer to AI development couldn’t rely on that, but way out here, I think I’m pretty safe.
1: Make sure the AI originating from HERE is friendly. Otherwise, nothing.
2: Stop trying to make an AI that wouldn’t have any real influence anyway, and instead focus on hedonism and/or becoming a Buddhist monk.
3: [REDACTED BECAUSE ROKO’S BASILISK]
4a: What I would probably do is fall to the temptation to self-modify and end up as a UFAI myself.
4b: What I should do is hand it over to people way smarter than me, like Eliezer, and/or at least debate it a lot and require multiple people to agree and vet the code very carefully, resulting in an answer I couldn’t produce just for this post.
One could, of course, simply stop simulating people accurately enough for them to count as ‘real’ whenever they would otherwise undergo suffering. You could even put them back into the full simulation afterwards.
Alternatively, one could use a memory-hacking tool to maximize the happiness value of a large number of simulated people, possibly without changing their behavior at all. (A toy sketch of both ideas follows this comment.)
And to your last question: I would try to cause the universe-as-it-is-known to become a self-modifying general intelligence in the ‘real’ (or simulated-1) world.
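A purely illustrative toy sketch of the two suggestions above, assuming (nothing in the thread specifies this) that a simulated person’s hedonic variables and simulation fidelity are stored separately from the state that determines their behavior, so an operator with write access can change one without touching the other:

```python
from dataclasses import dataclass, field

# Hypothetical cutoff chosen by whoever operates the simulation.
SUFFERING_THRESHOLD = 0.8

@dataclass
class SimulatedPerson:
    # State that determines what the person *does* (memories, goals, policy).
    behavioral_state: dict = field(default_factory=dict)
    # Separately stored hedonic variable: what the person *experiences*.
    happiness: float = 0.5
    # True = detailed ("real") simulation; False = coarse, non-conscious stub.
    high_fidelity: bool = True

def predicted_suffering(person: SimulatedPerson) -> float:
    """Toy stand-in for whatever forecast the simulator runs ahead of time."""
    return 1.0 - person.happiness

def throttle_fidelity(person: SimulatedPerson) -> None:
    """First suggestion: stop simulating the person accurately enough to be
    'real' while suffering would occur, and restore full fidelity afterwards."""
    person.high_fidelity = predicted_suffering(person) <= SUFFERING_THRESHOLD

def memory_hack_happiness(person: SimulatedPerson) -> None:
    """Second suggestion: maximize the hedonic variable directly, leaving the
    behavior-determining state untouched."""
    person.happiness = 1.0  # behavioral_state is deliberately not modified

if __name__ == "__main__":
    p = SimulatedPerson(behavioral_state={"currently": "eating breakfast"})
    p.happiness = 0.1
    throttle_fidelity(p)       # drops p to the coarse stub (suffering 0.9 > 0.8)
    memory_hack_happiness(p)   # p.happiness == 1.0, behavior unchanged
    print(p)
```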
I suppose I’d figure out how much variation from my current self I could incorporate into an instance and still consider it a reasonable copy of me. Then I might try instantiating everything in that range and seeing what works best, for whatever definition of ‘best’.
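A purely illustrative sketch of that kind of sweep; the variation axes, the tolerances, and the scoring function below are all made-up placeholders, since the comment above specifies none of them:

```python
import itertools
import random

# Hypothetical knobs: each axis of variation and how far from the current
# self it may be pushed while still counting as "a reasonable copy of me".
VARIATION_AXES = {
    "risk_tolerance": 0.2,   # max allowed deviation per axis (made-up units)
    "patience": 0.3,
    "curiosity": 0.1,
}
BASELINE = {"risk_tolerance": 0.5, "patience": 0.5, "curiosity": 0.9}
STEPS = 5  # grid resolution per axis

def variants_within_tolerance():
    """Enumerate every combination of per-axis values inside the allowed range."""
    grids = []
    for axis, tol in VARIATION_AXES.items():
        lo, hi = BASELINE[axis] - tol, BASELINE[axis] + tol
        grids.append([(axis, lo + i * (hi - lo) / (STEPS - 1)) for i in range(STEPS)])
    for combo in itertools.product(*grids):
        yield dict(combo)

def score(variant):
    """Placeholder for 'whatever definition of best' -- here it is just noise."""
    return random.random()

if __name__ == "__main__":
    best = max(variants_within_tolerance(), key=score)
    print("best variant under this (arbitrary) scoring:", best)
```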