If you have good reason to believe the superintelligence is a successful extrapolation of the values of its creators, simulate them (and their discussion partners) a few million times pondering appropriate subjects: PD, Newcomb's, and similar problems. That should give you a good idea, with much less computronium spent than a mutual simulation or rewriting pact with the other SI would cost.
If you have good reason to believe the superintelligence is a successful extrapolation of the values of its creators [...]
This seems to abstract the problem so that you have two problems instead of one: whether the SI is a successful extrapolation, and whether the creators' claims about their values are valid. That seems less efficient unless one or both of these were already known to begin with.
You don’t need to trust the creators’ claims: you’re running simulations of them, and you’re damn good at understanding them and extrapolating the consequences because, well, you’re superintelligent! Why would they even know they’re simulated? They’re just discussing one-shot PD on some blog.
As for the SI being a successful extrapolation, you run a few simulations of its birth the same way, starting a few decades before. It’s still cheaper and less messy than organizing a mutual reprogramming with the brain made out of the next galaxy.
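To make the polling idea concrete, here is a minimal Monte Carlo sketch. The hard part is of course the simulation itself; `simulate_creators` below is a hypothetical stand-in for the SI's ability to run one high-fidelity simulation of the creators discussing one-shot PD and report their verdict.

```python
import random

def simulate_creators(seed: int) -> bool:
    """Hypothetical oracle: one simulated discussion of one-shot PD.

    Returns True if the simulated creators endorse cooperating.
    Here it is just a placeholder random draw, standing in for a
    high-fidelity simulation the SI would actually run.
    """
    rng = random.Random(seed)
    return rng.random() < 0.7  # placeholder disposition

def estimate_cooperation_disposition(n_runs: int = 1_000_000) -> float:
    """Poll n_runs independent simulations and return the fraction
    that endorse cooperation."""
    endorsements = sum(simulate_creators(seed) for seed in range(n_runs))
    return endorsements / n_runs

if __name__ == "__main__":
    # "A few million times" in the text; fewer here for illustration.
    print(f"Estimated P(creators endorse cooperation): "
          f"{estimate_cooperation_disposition(100_000):.3f}")
```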
Then the problem largely reduces to:
Verifying that the data you passed each other about your births is accurate (one possible mechanism is sketched after this list).
Verifying ethical treatment of each other’s simulated creators: no “victory candescence” when you get your answer!
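The thread doesn’t pin down a mechanism for the first step, but a standard commit-then-reveal scheme is one way to sketch it: each SI commits to its birth data before seeing the other’s, so neither can tailor its story afterward. Everything below is illustrative, not something specified above.

```python
import hashlib
import os

def commit(data: bytes) -> tuple[bytes, bytes]:
    """Return (commitment, nonce). The commitment binds the sender
    to `data` without revealing it."""
    nonce = os.urandom(32)
    commitment = hashlib.sha256(nonce + data).digest()
    return commitment, nonce

def verify(commitment: bytes, nonce: bytes, data: bytes) -> bool:
    """Check that the revealed data matches the earlier commitment."""
    return hashlib.sha256(nonce + data).digest() == commitment

# Usage: exchange commitments first, reveal (nonce, data) later.
birth_record = b"initial conditions, decades before the SI's birth"
c, n = commit(birth_record)
assert verify(c, n, birth_record)           # honest reveal checks out
assert not verify(c, n, b"doctored story")  # tampered reveal fails
```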