Yes, CEV is a slippery slope. We should make sure to be as aware of possible consequences as is practical before taking the first step. But CEV is the kind of slippery slope intended to go “upwards”, in the direction of greater good and less biased morals.
In the hands of a superintelligence, I expect CEV to extrapolate values beyond “weird”, to “outright alien” or “utterly incomprehensible”, very fast. (Abandoning Friendliness along the way for something less incompatible with The Basic AI Drives. But that is a topic for a completely different discussion.)
There’s a deeper question here: ideally, we would like our CEV to make choices for us that aren’t our choices. We would like our CEV to give us the potential for growth, and not to burden us with a powerful optimization engine driven by our childish foolishness.
Thank you for mentioning “childish foolishness”. I was not sure whether such suggestive emotional analogies would be welcome. This is my first comment on LessWrong, you know.
Let me just state that I was surprised by my strong emotional reaction while reading the original post. As long as higher versions are extrapolated to be more competent, moral, responsible, and so on, they should be allowed to be extrapolated further.
If anyone considers the original post to be a formulation of a problem (and ponders possible solutions), and if said anyone is interested in counter-arguments based on shallow, emotional, and biased analogies, here is one such analogy:
Imagine children pondering their future development. They envision growing up, but they also see themselves start caring more about work and less about play. Children consider those extrapolated values to be unwanted, so they formulate the scenario as “problem of growing up” and they try to come up with a safe solution.
Of course, you may substitute “play versus work” with any “children versus adults” trope of your choice.
Or “adolescents versus adults”, and so on.
Readers may wish to counter-balance any emotional “aftertaste” by focusing on The Legend of Murder-Gandhi again.
P.S.: Does this web interface have anything like a “preview” button?
Imagine children pondering their future development. They envision growing up, but they also see themselves start caring more about work and less about play. Children consider those extrapolated values to be unwanted, so they formulate the scenario as “problem of growing up” and they try to come up with a safe solution.
That is something we consider at the FHI—whether it would be moral (or required) to allow “superbabies”, i.e. beings with the intelligence of adults and the preferences of children, if that were possible.
Imagine children pondering their future development. They envision growing up, but they also see themselves start caring more about work and less about play. Children consider those extrapolated values to be unwanted, so they formulate the scenario as “problem of growing up” and they try to come up with a safe solution. Of course, you may substitute “play versus work” with any “children versus adults” trope of your choice. Or “adolescents versus adults”, and so on.
The implication is that “more work, less play” is a better value set, while “the minimum amount of work to get the optimal amount of play with minimal harm” seems superior to both childish naivety and a hard-core work ethic. Biology and social expectations get involved here, more so than increased intelligence. While a superintelligence would have analogues of these to worry about after a fashion (an AGI would need to worry about its programmers, other AGIs, upgrades, or damaged hardware, for example), it seems a bit orthogonal to CEV.
(I kinda get the impression that most child-versus-adult value differences amount to “adults worry about this so that they don’t have to if they do it right”. Children who don’t want to worry about checkbooks, employment, and chatting about the weather grow up to be adults who concern themselves with those things only because they have to if they want to maintain or exceed the quality of life they had as children. Judging by the fate of most lottery winners, this isn’t all that much more intelligent than where they started; the rational thing to do, if one values fun more than work and has just received >$10,000,000, would not be to buy all the toys and experiences one desires right away, yet most winners do just that and wind up spending more than they win.)
P.S.: Does this web interface have anything like a “preview” button?
There’s a sandbox here; it’s also linked when you click “Show help”, the button at the lower-right corner of the text box that opens when you start a reply. Welcome, and yay for more PhD-level physicists.
Thanks for the tip and for the welcome. Now I see that what I really needed was just to read the manual first. By the way, where is the appropriate place to write comments about how misleading the sandbox (in contrast with the manual) actually is?