I take it your implication is that you could play the game with a superintelligent entity somewhere far in spacetime. If this is your plan, how exactly are you going to get the results back? Not really a test if you don’t get results.
No, it’s not. You might be able to guess that a superintelligence would like negentropy and be ambivalent toward long walks on the beach, but that kind of “simulating” would never, ever allow you to beat it at rock-paper-scissors. Predicting which square of a payoff matrix it will pick, when it is in the AI’s interest to pick a different square than you expect, is a problem of the latter type.
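To make the point concrete, here is a minimal sketch (the setup and function names are my own illustration, not anything from the thread): if the AI simply mixes uniformly over its three moves, then no predictor, however sophisticated your model of its preferences, scores better than break-even in expectation.

```python
import random

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def play_round(predict_ai_move):
    """One round: guess the AI's move, then play the counter to that guess."""
    ai_move = random.choice(MOVES)   # AI randomizes uniformly over its moves
    guess = predict_ai_move()        # however clever our "simulation" of it is
    our_move = next(m for m in MOVES if BEATS[m] == guess)  # counter the guess
    if BEATS[our_move] == ai_move:
        return 1    # win
    if BEATS[ai_move] == our_move:
        return -1   # loss
    return 0        # draw

def expected_score(predictor, rounds=100_000):
    """Average score per round over many plays."""
    return sum(play_round(predictor) for _ in range(rounds)) / rounds

# Both a fixed "model" of the AI and pure guessing land near 0 (1/3 wins,
# 1/3 losses, 1/3 draws) against a uniform mixer.
print(expected_score(lambda: "rock"))
print(expected_score(lambda: random.choice(MOVES)))
```

Knowing the AI’s payoff matrix tells you nothing here, because the matrix itself tells the AI to randomize: the equilibrium of rock-paper-scissors is the uniform mix, and against it every strategy has expected score zero.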
This is a general-purpose argument against all reasoning about superintelligences, and it aids your argument no more than mine.