If you think the future would be less than it could be if the universe were tiled with “rest homes for humans”, why do you expect that an AI maximizing human utility would do that?
It depends how far meta you want to go when you say “human utility”. Does that mean sex and chocolate, or complexity and continual novelty?
That’s an ambiguity in CEV: the AI extrapolates human volition, but what is happening to the humans in the meantime? Do they stay the way they are now? Do they continue to develop? If human volition is incompatible with trilobite volition, then by the same logic we should expect humans to go on evolving/developing new values that are incompatible with the AI’s values extrapolated from present-day humans.
If for some reason humans who liked to torture toddlers became very fit, future humans would evolve to possess values that resulted in many toddlers being tortured. I don’t want that to happen, and am perfectly happy constraining future intelligences (even if they “evolve” from humans, or even from me) so they don’t. And as always, if you want the future to contain some value shifting, why don’t you believe that an AI designed to fulfill the desires of humanity would cause/let that happen?