More saliently though, whatever mechanism you implement to potentially “release” the AGI into simulated universes could be gamed or hacked by the AGI itself.
I think this is fixable; the Game of Life isn't that complicated, so you could plausibly prove the release mechanism correct.
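Purely for intuition (my own toy sketch, not anything from the post): one Game of Life step in Python is only a handful of lines, which is why formally verifying something built on top of it doesn't seem hopeless to me.

```python
# Illustrative sketch only: one synchronous update step of Conway's Game of Life
# on a small toroidal grid. The point is just how small the transition rule is.

def life_step(grid: list[list[int]]) -> list[list[int]]:
    """Apply one Game of Life step; cells are 0 (dead) or 1 (alive)."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the 8 neighbours, wrapping around the edges (torus).
            neighbours = sum(
                grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1)
                for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            # Birth on exactly 3 live neighbours; survival on 2 or 3.
            new[r][c] = 1 if neighbours == 3 or (grid[r][c] == 1 and neighbours == 2) else 0
    return new


if __name__ == "__main__":
    # A "blinker": the vertical bar in the middle becomes a horizontal bar after one step.
    blinker = [
        [0, 0, 0, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 0, 0, 0],
    ]
    for row in life_step(blinker):
        print(row)
```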
Heck, this might not even be necessary—if all they’re getting are simulated universes, then they could probably create those themselves since they’re running on arbitrarily large compute anyway.
This is a great point; I forgot AIXI also has unbounded compute, so why would it even want to escape to get more!
I don’t think AIXI can “care” about universes it simulates itself, probably because of the Cartesian boundary (non-embeddedness): its utility function is defined over its inputs, which AIXI itself doesn’t generate. But I’m not sure; I don’t understand AIXI well.
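To gesture at what I mean (hedging heavily, since I may be misreading Hutter): the standard AIXI action-selection expression is roughly

$$
a_k \;:=\; \arg\max_{a_k}\sum_{o_k r_k}\;\cdots\;\max_{a_m}\sum_{o_m r_m}\big[r_k+\cdots+r_m\big]\sum_{q\,:\,U(q,\,a_1\ldots a_m)\,=\,o_1 r_1\ldots o_m r_m} 2^{-\ell(q)}
$$

where $q$ ranges over environment programs for the universal machine $U$ and $\ell(q)$ is program length. The only quantities being maximized are the reward symbols $r_k,\ldots,r_m$, which arrive as part of the percept (input) stream; nothing AIXI computes internally ever appears in that objective, which is how I'd cash out the "utility defined on inputs" point.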
You’re also making the assumption that these AIs would care about what happens inside a simulation created in the future, as something to guide their current actions. This may be true of some AI systems, but feels like a pretty strong one to hold universally.
The simulation being “created in the future” doesn’t seem to matter to me: you could already be simulating the two universes, with the game deciding whether the AIs gain access to them.
(I think this is a pretty cool post, by the way, and appreciate more ASoT content).
Thanks! Will do