“We want to create a unified superintelligence that encompasses the full computational power of the universe.” “We want to create the maximum possible number of sentient intelligences the universe can sustain.” “We want to create a being of perfect happiness, the maximally hedonic sentient.” “We want to eliminate the concepts of ‘selfishness’ and ‘hierarchy’ in favor of a transcendental egalitarian anarchy.”
It seems to me that the major problem with these values (and why I think they make a better example than cheesecake) is that they require the use of pretty much all of the universe to fulfill, and they are pretty much all or nothing: they can’t be incrementally satisfied.
This differs from nearly all human values. Most of the things people want can be obtained incrementally. If someone wants a high-quality computer or car, they would be most satisfied by getting the top model, but a lesser model would still be really good. If someone wants to read all 52 monthly comics in the DC universe, they could be incrementally satisfied by getting to read eight or ten of them. Human values aren’t all or nothing. The fact that our values can be incrementally satisfied makes us able to share with other people.
The cheesecaker would hopefully be similar: it could be content with some of the universe being cheesecake, not all of it, because it understands the virtue of sharing. If that’s the case I can’t complain; people have had weirder hobbies than making cheesecake. A Cheesecaker with binary preferences, who would be 100% satisfied if 100% of the universe were cheesecake and 0% satisfied if a single molecule wasn’t, would by contrast be a horrible and dangerous monster. Ditto for most of the other AIs you describe (I don’t know, would that one AI be willing to settle for encompassing 1⁄4 of the computational power of the universe with a superintelligence?).
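To make the contrast concrete, here is a minimal, purely illustrative sketch; the function names and the simple linear scoring rule are my own assumptions, not anything from the original discussion. It just contrasts an all-or-nothing preference with one that can be satisfied along a sliding scale:

```python
def binary_cheesecaker_utility(fraction_cheesecake: float) -> float:
    """All-or-nothing preference: anything short of total conversion counts for nothing."""
    return 1.0 if fraction_cheesecake >= 1.0 else 0.0


def incremental_cheesecaker_utility(fraction_cheesecake: float) -> float:
    """Sliding-scale preference: every extra bit of cheesecake adds some satisfaction."""
    return max(0.0, min(1.0, fraction_cheesecake))


# The incremental agent is already partly satisfied with a modest share of the
# universe, so it can afford to share; the binary agent gains nothing from any
# compromise short of eating everything.
for share in (0.0, 0.25, 0.5, 1.0):
    print(share, binary_cheesecaker_utility(share), incremental_cheesecaker_utility(share))
```

The point of the sketch is only the shape of the two curves: any preference whose value collapses to zero short of total conversion leaves no room for bargaining or sharing.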
That seems like an important principle of transhumanist population ethics: Create creatures whose preferences can be satisfied incrementally along a sliding scale. Don’t create creatures who will be totally unsatisfied unless they’re allowed to eat the universe.