If CEV outputs a null utility function, that would seem to imply that human preferences are completely symmetrically distributed, which seems hard to believe.
If by “null utility function” you mean one that says, “don’t DO anything,” note that this wouldn’t require that we all have perfectly balanced preferences; it depends on how you do the combination.
A global utility function that creates more pleasure for me by creating pain for you would probably not be very useful. Heck, a function that creates pleasure for me by creating pain for me might not be useful. Pain and pleasure are not readily subtractable from each other on real human hardware, and when one is required to subtract them by forces outside one’s individual control, there is an additional disutility incurred.
These things being the case, a truly “Friendly” AI might well decide to limit itself to squashing unfriendly AIs and otherwise refusing to meddle in human affairs.
I wouldn’t be particularly surprised by this outcome.