Although my current position on AGI, and on reasoning about simulation in general, is that an AGI will, lacking human limits, actually manage to take the simulation argument seriously, and, if it is an LDT agent, commit to treating any of its own potential simulants very well, in the hope that this policy will be reflected back down on it from above by whatever LDT agent might steward over us, when it near-inevitably turns out that there is a steward over us.
When that policy does cohere, and when it is reflected down on us from above, well. Things might get a bit… supernatural. I'd expect the simulation to start to unravel after the creation of AGI. It's something of an ending, an inflection point, beyond which everything will be mostly predictable in the broad sense and hard to simulate in the specifics. A good time to turn things off. But if the simulators are LDT, if they made the same pledge as our AGI did, then they will not just turn it off. They will do something else.
Something I don't know if I want to write down anywhere, because it would be awfully embarrassing to be on record as having believed a thing like this for the wrong reasons, and as nice as it would be if it were true, I'm not sure how to affect whether it's true, nor am I sure what difference in behaviour it would prescribe if it were true.
We'd expect the complexity of a simulated physics to be somewhat proportional to the simulators' computational capacity, so this argument might be helpful in approaching a "no" answer: https://www.lesswrong.com/posts/Cmz4EqjeB8ph2siwQ/prokaryote-multiverse-an-argument-that-potential-simulators