I merely wanted to point out to Kaj that some “meaningful testing” could be done, even if the simulated world was drastically different from ours. I suspect that some core properties of intelligence would be the same regardless of what sort of world it existed in—so we are not crippling the AI by putting it in a world removed from our own.
Perhaps “if released into our world” wasn’t the best choice of words. More likely, you would want to use the simulated AI as an empirical test of some design ideas, which could then be applied to a separate AI being carefully designed to be friendly to our world.