I agree with you that, if a sufficiently powerful superintelligence is constrained to avoid any activities that a human would honestly classify as “coercion,” “threat,” “blackmail,” “addiction,” or “lie by omission” and is constrained to only induce changes in belief via means that a human would honestly classify as “natural” and “genuine,” it can nevertheless induce humans to accept its plan while satisfying those constraints.
I don’t think that prevents such a superintelligence from inducing humans to accept its plan through the use of means that would horrify us had we ever thought to consider them.
It’s also not at all clear to me that the fact that X would horrify me if I’d thought to consider it is sufficient grounds to reject using X.