I’m sympathetic to your argument, but I don’t see how we can be certain that verifying or constructing benevolent AGI is as easy as creating high-fidelity simulations. Proficiency in these tasks might well be orthogonal: it is conceivable that building a superintelligence we *know* is benevolent is computationally intractable, so a civilization opts instead to run vast quantities of simulations—much like what is happening in empirical AI research right now.
IMO, reasoning about what will or won’t be easy for a far more advanced civilization is always mostly speculation.
Then there is the question of fidelity. If our current world is a simulation, it might be a vastly simplified one, running on the equivalent of a calculator in the base reality; but because we only know our own frame of reference, it seems to us like the highest fidelity imaginable. I think the most important part of creating such a simulation would be keeping it truly isolated: we can’t introduce any inputs from our own world that aren’t internally consistent with the simulated one. E.g., if we included texts from our world in a lower-fidelity simulation, it would most likely be easy to notice that something doesn’t add up.