Well, yes.
Although I would also consider this conclusion to follow from the broader claim that if A is a superintelligence with respect to B, B cannot control A, regardless of whether there’s a true code of morality (a question I’m not weighing in on here).
Well, unless you want to say that if A happens to want what B wants, or to want what B would want if B were a superintelligence, or otherwise to want something that B endorses, or that B ought to endorse, or something like that (for example, if A is Friendly with respect to B), then B controls A, or acausally controls A, or something like that.
At which point I suspect we do better to taboo "control", because we're using the word in a very strange way.