> If you mostly share its preferences, that’s not enough—it will probably make your life hell along every axis omitted from that “mostly.”
If you like X, Y and Z, and the other mind likes X and Y, then the result is likely still pretty good from your point of view.
> If there are other wrapper-minds, they are all each other’s enemies, too[1].
This isn’t true. Sure, a wrapper mind will always prefer itself having resource X to anyone else having resource X, but it may well prefer someone else having X to X being destroyed. An apple-maximizing AI will steal your orchard if it can do so without damaging the apple trees, but it would rather buy apples from you than let the orchard get destroyed in a war.
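To make that preference ordering concrete, here is a minimal toy sketch (my own illustration, with made-up outcomes and apple counts, nothing from the post): the agent’s utility depends only on how many apples exist, so who owns the orchard matters to it only through that number.

```python
# Toy sketch (illustrative only): an apple-maximizer whose utility is just
# the number of apples. It strictly prefers controlling the orchard itself,
# but it still prefers buying your apples over a war that burns the trees down.

OUTCOMES = {
    "AI controls the orchard": 1200,               # hypothetical: it runs the orchard its own way
    "you keep the orchard and sell apples": 1000,  # trade: slightly fewer apples for the AI
    "war destroys the orchard": 0,                 # worst case for an apple-maximizer
}

def utility(outcome: str) -> int:
    """Utility = apple count; ownership matters only via how many apples survive."""
    return OUTCOMES[outcome]

# Rank outcomes by the wrapper-mind's utility: the orchard's owner is not
# the enemy per se, only an obstacle ranked below outright control but far
# above mutual destruction.
for outcome in sorted(OUTCOMES, key=utility, reverse=True):
    print(f"{utility(outcome):>5}  {outcome}")
```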
Also, you keep comparing wrapper minds to humans. The typical wrapper mind has crazy values it pursues unconditionally.
The typical non-wrapper mind switches between many different systems of crazy values according to crazy rules. The non-wrapper mind has more free parameters.