If you have good reason to believe the superintelligence is a successful extrapolation of the values of its creators, simulate those creators a few million times discussing problems of a similar nature—PD, Newcomb's, and the like. That should give you a good idea of its values, at much less cost in computronium than a mutual simulation or rewriting pact with the other SI would require.
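A minimal toy sketch of the sampling idea, assuming a crude stand-in model of a "creator" with a made-up cooperativeness parameter (everything here—the model, its parameters, the payoff matrix—is a hypothetical illustration, not something from the comment above): run the cheap creator model many times on a one-shot Prisoner's Dilemma and read off the cooperation rate, rather than simulating the other SI itself.

```python
import random

# Standard one-shot PD payoffs: (my payoff, their payoff) indexed by (my move, their move).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def simulated_creator(cooperativeness: float, noise: float) -> str:
    """A crude stand-in for one simulated creator: cooperates with some
    base propensity, perturbed by per-run noise in the simulation."""
    p = min(1.0, max(0.0, cooperativeness + random.gauss(0.0, noise)))
    return "C" if random.random() < p else "D"

def estimate_cooperation_rate(runs: int, cooperativeness: float, noise: float) -> float:
    """Run many cheap simulations and return the fraction that cooperate."""
    cooperations = sum(
        simulated_creator(cooperativeness, noise) == "C" for _ in range(runs)
    )
    return cooperations / runs

if __name__ == "__main__":
    # "A few million" runs of a cheap creator model instead of a full mutual-simulation pact.
    rate = estimate_cooperation_rate(runs=2_000_000, cooperativeness=0.7, noise=0.1)
    print(f"Estimated cooperation rate of the creators: {rate:.3f}")
```

The point of the sketch is only that repeated cheap simulation of the creators converges on a stable estimate of their dispositions; the hard part the comment glosses over—building a faithful creator model in the first place—is not addressed here.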