anon: “The cheesecake is a placeholder for anything that the sentient AI might value highly, while we (upon sufficient introspection) do not.”
I am quite aware of that. Still, using “cheesecake” as a placeholder adds a bias to the whole story.
“Eliezer thinks that some/most of our values are consequences of our long history, and are unlikely to be shared by other sentient beings.”
Indeed. So what? In reality, I am quite interested in what a superintelligence would really consider valuable. But I am pretty sure that “big cheesecake” is unlikely.
Thinking about it, AFAIK Eliezer considers himself a rationalist. Isn’t a big part of rationalism about disputing values that are merely consequences of our long history?
Indeed, when we substitute for “cheesecake” the likely things that a superintelligent AI might value, the problem becomes a whole lot less obvious.
“We want to create a unified superintelligence that encompasses the full computational power of the universe.”
“We want to create the maximum possible number of sentient intelligences the universe can sustain.”
“We want to create a being of perfect happiness, the maximally hedonic sentient.”
“We want to eliminate the concepts of ‘selfishness’ and ‘hierarchy’ in favor of a transcendental egalitarian anarchy.”
Would humans resist these goals? Yes, because they probably entail getting rid of us puny flesh-bags. But are they worth doing? I don’t know… it kinda seems like they might be.
“We want to create a unified superintelligence that encompasses the full computational power of the universe.” “We want to create the maximum possible number of sentient intelligences the universe can sustain.” “We want to create a being of perfect happiness, the maximally hedonic sentient.” “We want to eliminate the concepts of ‘selfishness’ and ‘hierarchy’ in favor of a transcendental egalitarian anarchy.”
It seems to me that the major problem with these values (and why I think they make a better example than cheesecake) is that they require the use of pretty much all of the universe to fulfill, and are pretty much all or nothing: they can’t be incrementally satisfied.
This differs from nearly all human values. Most of the things people want can be obtained incrementally. If someone wants a high-quality computer or car, they will be most satisfied by getting the top model, but getting a lesser model would still be really good. If someone wants to read all 52 monthly comics in the DC universe, they could be incrementally satisfied by getting to read eight or ten of them. Human values aren’t all or nothing. The fact that our values can be incrementally satisfied makes us able to share with other people.
The Cheesecaker would hopefully be similar: it would be content with some of the universe being cheesecake, not all of it, because it understands the virtue of sharing. If that’s the case I can’t complain; people have had weirder hobbies than making cheesecake. A Cheesecaker with binary preferences, who would be 100% satisfied if 100% of the universe was cheesecake and 0% satisfied if a single molecule wasn’t cheesecake, would, by contrast, be a horrible and dangerous monster. Ditto for most of the other AIs you describe (I don’t know, would that one AI be willing to settle for encompassing 1⁄4 of the computational power of the universe with a superintelligence?).
That seems like an important principle of transhumanist population ethics: Create creatures whose preferences can be satisfied incrementally along a sliding scale. Don’t create creatures who will be totally unsatisfied unless they’re allowed to eat the universe.
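One way to make the contrast concrete (my own toy sketch, not anything from the original comments) is as two utility functions over the fraction of the universe turned into cheesecake: one rises smoothly, so a quarter of the universe already buys substantial satisfaction, while the other is a step function that pays nothing short of 100%.

```python
# Toy illustration of incremental vs. all-or-nothing preferences.
# "fraction" is the share of the universe devoted to cheesecake, in [0, 1].

def incremental_cheesecaker(fraction: float) -> float:
    """Satisfaction grows smoothly with the share of the universe that is
    cheesecake; diminishing returns mean even a small share is worth a lot."""
    return fraction ** 0.5  # concave sliding scale: happy to share

def binary_cheesecaker(fraction: float) -> float:
    """Satisfaction is all or nothing: anything short of the entire universe
    being cheesecake counts for zero."""
    return 1.0 if fraction == 1.0 else 0.0

for share in (0.0, 0.25, 0.5, 1.0):
    print(f"share={share:.2f}  "
          f"incremental={incremental_cheesecaker(share):.2f}  "
          f"binary={binary_cheesecaker(share):.2f}")
```

Under this sketch the incremental Cheesecaker already gets half of its maximum satisfaction from a quarter of the universe, so compromise with everyone else is cheap; the binary Cheesecaker gains nothing from any bargain short of eating everything, which is exactly what makes it a monster.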