I take it your implication is that you could play the game with a superintelligent entity somewhere far in spacetime. If this is your plan, how exactly are you going to get the results back? Not really a test if you don’t get results.
No, it’s not. You might be able to guess that a superintelligence would value negentropy and be ambivalent toward long walks on the beach, but this kind of “simulating” would never, ever allow you to beat it at rock-paper-scissors. Predicting which square of a payoff matrix it will pick, when it is in the AI’s interest to pick a different square than you expect, is a problem of the latter type.
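(A quick game-theoretic sketch of why, not from the thread itself: if the opponent simply randomizes uniformly over its three moves, then no predictor, however clever, gains any edge. The toy simulation below, with a hypothetical `predictor` standing in for whatever “simulation” of the AI you attempt, illustrates this; the expected score hovers near zero.)

```python
import random

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def score(me, opponent):
    """+1 if `me` beats `opponent`, -1 if it loses, 0 on a tie."""
    if me == opponent:
        return 0
    return 1 if BEATS[me] == opponent else -1

def predictor(history):
    """Stand-in for 'simulating' the AI: any deterministic guess from past rounds.
    Here it plays whatever would have beaten the opponent's last move; swap in
    any strategy you like, the long-run result against true randomness is the same."""
    if not history:
        return "rock"
    last = history[-1]
    return next(m for m in MOVES if BEATS[m] == last)

history, total = [], 0
for _ in range(100_000):
    ai_move = random.choice(MOVES)   # the AI randomizing to be unpredictable
    my_move = predictor(history)     # my best attempt at prediction
    total += score(my_move, ai_move)
    history.append(ai_move)

print(total / 100_000)  # stays near 0: no net edge against uniform randomization
```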
This is a general purpose argument against all reasoning relating to superintelligences, and aids your argument no more than mine.
1) Not in my garage, but this kind of thing doesn’t have a range limit.
2) For a sufficiently broad definition of “simulate”, yes, I can. And that broad definition is sufficient here.
3) Who are you to say what Omega cannot do?
(Yeah, I’ve used a bit of dark arts to make that sound more impressive than it actually is, but the point still stands.)