From CEV: "our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted"
Dogs and C. elegans worms don't have an "as we wish that extrapolated", which surely makes the question of what their CEV is a wrong question, like "what is the GDP per capita of the Klein four-group?"
The reason we'd have to extrapolate our goal structure in order to get a satisfying future is that human value grows and changes. In C. elegans the extrapolation is dead simple: its goals don't change much at all. So CEV seems possible; it just wouldn't do anything that a plain "CV" (coherent volition, with no extrapolation step) wouldn't.
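To make that "CEV collapses to CV" point concrete, here is a minimal toy sketch. The goal dictionaries, the averaging rule, and the growth_rate parameter are all invented for illustration, not part of any actual CEV proposal: if the extrapolation step is the identity because the goal structure never changes, extrapolating and then cohering gives exactly the same answer as cohering the current goals directly.

```python
# Toy illustration: when goals don't change under "knowing more / growing more",
# the extrapolation step is the identity, so CEV == CV.
# All goal names and values here are invented for the example.

def extrapolate(goals, growth_rate):
    """Stand-in for 'what the agent would want if it knew more and grew more'.
    growth_rate = 0 models a fixed goal structure like C. elegans."""
    return {k: v * (1 + growth_rate) for k, v in goals.items()}

def cohere(goal_sets):
    """Stand-in for 'where our wishes cohere': average the overlapping goals."""
    keys = set().union(*goal_sets)
    return {k: sum(g.get(k, 0.0) for g in goal_sets) / len(goal_sets) for k in keys}

worm_goals = [{"food": 1.0, "avoid_heat": 0.8}, {"food": 0.9, "avoid_heat": 1.0}]

cv  = cohere(worm_goals)                                          # no extrapolation
cev = cohere([extrapolate(g, growth_rate=0.0) for g in worm_goals])
assert cv == cev  # with a fixed goal structure, CEV does nothing CV wouldn't
```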
When something sneezes, is it “trying” to expel germs, “trying” to make achoo noises, or “trying” to get attention? It seems to me that the question simply doesn’t make sense unless the thing sneezing is a philosopher, in which case it might just as well decide not to look to its counterfactual behaviors for guidance at all.
I would have thought trying to ensure that the breathing apparatus was clear enough to work acceptably was a higher priority than anything specific to germs.
If a utility maximizer that has its utility function in terms of ‘attention gotten’ sneezes, is it “trying” to make achoo noises?
It seems like the question we’re asking here is “to what extent can we model this animal as if it made choices based on its prediction of future events?” Or a closely related question, “to what extent does it act like a utility maximizer?”
And the answer seems to be "pretty well within a limited domain, not very well at all outside that domain." Small fish in their natural environment do a remarkable utility-maximizer impression, but when kept as pets they have a variety of inventive ways to kill themselves. The fish can act like utility maximizers because evolution stamped it into them, with lots and lots of simplifications to make the program run fast in tiny brains. When those simplifications hold, the fish acts like a utility maximizer; when they break, the ability to act like one evaporates.
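Here is a toy sketch of that "valid only in context" point. The environments, actions, and payoff numbers are all invented for illustration: a hard-coded heuristic coincides with the utility-maximizing choice in the setting it was tuned for, and stops tracking utility the moment the setting changes.

```python
# Toy sketch: an evolved heuristic matches the utility-maximizing action
# in the "ancestral" environment it was tuned for, and diverges outside it.
# Environments, actions, and utilities are invented for illustration.

ACTIONS = ["swim_toward_light", "hide_in_weeds"]

def true_utility(action, environment):
    """What actually helps the fish in a given environment."""
    if environment == "river":        # light means food near the surface
        return 1.0 if action == "swim_toward_light" else 0.2
    if environment == "fish_tank":    # light means a heater or a bored cat
        return 0.1 if action == "swim_toward_light" else 0.9
    return 0.0

def evolved_heuristic(environment):
    """The simplification evolution stamped in: always go toward the light."""
    return "swim_toward_light"

def utility_maximizer(environment):
    return max(ACTIONS, key=lambda a: true_utility(a, environment))

for env in ["river", "fish_tank"]:
    print(env, evolved_heuristic(env) == utility_maximizer(env))
# river True      -> in the ancestral context the heuristic *is* utility maximization
# fish_tank False -> outside that context the "maximizer" impression evaporates
```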
The trouble with this context-dependent approach is that it’s context-dependent. But for animals that aren’t good at learning, that context seems pretty clearly to be the environment they evolved in, since evolution is the causal mechanism for them acting like utility maximizers, and by assumption they won’t have learned any new values.
So judging animals by their behavior, when used on dumb animals in ancestral environments, seems to be a decent way of assigning “wants” to animals.
Seeing as C. elegans lacks the neural structure to think objectively or have emotions, merely existing is utopia for it. If we changed the worms enough to enable them to "enjoy" a utopia, they would no longer be what we started with. Would this be different with humans? To keep us from getting bored with our new utopia, we'd need different neural architecture as well, which would again miss the point of a utopia for "us-now" humans.
If we want an FAI to implement a CEV, it wouldn't be one for us-now. And if we aren't making a CEV for ourselves, why not just make a utopia for the FAI?
No. The point of utopia is that it is what we would want and not get bored with. CEV attempts to solve the problem of finding out what we would want, and not just what our current incoherent stated values are.
“Enjoy” is a human (or at least higher vertebrate) concept. Trying to squeeze the worm into that mold will of course not work.
Also, it's worth noting that if you take just the brain of a human, you are getting most of the interesting parts of the system. The same cannot be said of the worm; you might as well take the heart and extrapolate its volition if you aren't going to work with the whole thing.
I understand the abstract concept that you are endorsing, but the way human brains work would not allow a utopia in the sense that you describe. Heaven would become boring; Hell would become bearable. If the perfect place for humans is one where wonderful things happen and then horrible things happen, well, coincidentally, that sounds a lot like Earth. Simulated Reality FTW?