There isn’t a clear distinction, but CEV is exactly what the Amish have done. They took the values they had in the 18th century, tried to figure out what the minimal, essential values behind them were, and then developed a system for using those core values to extrapolate the Amish position on new developments, like electricity, the telephone, gasoline engines, the Internet, etc. It isn’t a simple rejection of new things; they have an eclectic selection of new things that may be used in certain ways or for certain purposes.
This is an interesting clarification of your earlier point, but I don’t see how it responds to what I said.
For one thing, you’re ignoring the ‘if we were smarter, thought clearer’ part, which of course the Amish can’t do, since they’re human.
But really, you just gave one negative example. Okay, being Amish is not growing up. What is growing up, and why would we predictably fail to value it while also finding it proper to object to its not being valued?
When you let your kids grow up, you accept that they won’t do things the way you want them to. They will have other values. You don’t try to optimize them for your value system.
Retaining values is one thing. FAI / CEV is designed to maximize a utility function based on your values. It corresponds to brainwashing your kids to have all of your values and stay as close to your value system as possible. Increasing smartness is beside the point.
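To make the worry concrete, here is a toy sketch in Python (the value names and numbers are mine, invented purely for illustration; this is not the actual CEV formalism): an optimizer that scores candidate futures with a utility function frozen from the founders’ values will, by construction, steer away from any future in which the next generation drifts.

```python
# Toy model (illustrative only): a CEV-style optimizer whose utility
# function is frozen from the founders' values at time zero.
# Futures where later generations drift from those values score lower,
# so the optimizer steers away from them -- the "brainwashing" worry.

founder_values = {"tradition": 0.9, "autonomy": 0.3, "novelty": 0.2}

def frozen_utility(future_values: dict) -> float:
    """Score a future by its closeness to the founders' values."""
    return -sum((future_values[k] - founder_values[k]) ** 2
                for k in founder_values)

candidate_futures = [
    {"tradition": 0.9, "autonomy": 0.3, "novelty": 0.2},  # perfect copy
    {"tradition": 0.5, "autonomy": 0.8, "novelty": 0.6},  # drifted kids
]

best = max(candidate_futures, key=frozen_utility)
print(best)  # the optimizer always prefers the perfect copy
```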
If we value them getting to go and make their own choices, then that will be included in CEV.
If we do not value them being brainwashed, it will not be included in CEV.
I strongly suspect that both of these are the case.
I know that is the standard answer. I tried to discourage people from making it by saying, in the parent comment,
I know somebody’s going to say, “Well, then that’s your utility function!”
I’m talking about a real and important distinction, which is the degree of freedom in values to give the next generation. Under standard CEV, it’s zero.
I don’t think that parameter, the degree of freedom, should be thought of as a value, which we can plug any number we like into. It should be thought of as a parameter of the system, which has a predictable impact on the efficacy of the CEV system regardless of what values it is implementing.
I don’t think people allow their children freedom to make up their own minds because they value them doing so. They do it because we have centuries of experience showing that zero-freedom CEV doesn’t work. The oft-attempted process of getting kids to hold the same values as their parents, just modified for the new environment, always turns out badly.
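To pin down the distinction I mean (again, a toy sketch of my own, not a formal model): the freedom granted to the next generation enters as a parameter of the transmission machinery itself, not as one of the values being transmitted, and the same machinery runs whatever values you feed it.

```python
import random

# Toy model (mine, illustrative only): value transmission with an
# explicit degree-of-freedom parameter. Note that `freedom` is a
# parameter of the transmission system itself, not one of the values
# being transmitted -- the same machinery runs for any value vector.

def next_generation(parent_values: list[float], freedom: float) -> list[float]:
    """Transmit values, letting each drift by at most `freedom`."""
    return [v + random.uniform(-freedom, freedom) for v in parent_values]

parents = [0.9, 0.3, 0.2]
print(next_generation(parents, freedom=0.0))  # standard CEV: exact copy
print(next_generation(parents, freedom=0.2))  # children may differ
```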
I’m talking about a real and important distinction, which is the degree of freedom in values to give the next generation. Under standard CEV, it’s zero.
No, it’s not.
Zero is the number of degrees of freedom in the AI’s utility function, not the next generation’s utility functions.
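In toy terms (my own sketch, with made-up numbers, same caveats as the sketches above): what is pinned down is the AI’s scoring rule. If autonomy is among the extrapolated values, that fixed rule can itself prefer futures where the next generation’s values diverge.

```python
# Toy sketch (mine): the *AI's* utility function is fixed -- zero free
# parameters once extrapolated -- but if freedom-to-differ is itself
# among the extrapolated values, the fixed function can score
# value-diverse futures highly.

FIXED_WEIGHT_ON_AUTONOMY = 0.8  # frozen into the AI at construction

def ai_utility(future: list[float]) -> float:
    # Diversity in the next generation's values is rewarded, not punished,
    # because the (fixed) function was extrapolated from people who
    # value their children's autonomy.
    spread = max(future) - min(future)
    return FIXED_WEIGHT_ON_AUTONOMY * spread

conformist_future = [0.5, 0.5, 0.5]  # everyone shares one value
pluralist_future = [0.1, 0.5, 0.9]   # children chose differently
print(ai_utility(pluralist_future) > ai_utility(conformist_future))  # True
```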
When using the parent-child relationship as an instance of CEV, it is. The child takes the position of the AI.
You’ve completely lost me. Do you mean this AI is our child? Or do you mean that the conventional way we have children will itself be an instance of CEV?
If the former, I don’t see a moral problem. A singleton doesn’t get to be a person, even if it contains multitudes (much as the USA does not get to be a person, though I would hope a singleton would function better).
If the latter… words fail me, at least for the moment, and I will wait for your confirmation before trying again.
The ‘if we were smarter, thought clearer, etc. etc.’ seems to be asking it to go beyond us.
What else do you mean by ‘growing up’, and why should we value it if it isn’t something we’d approve of?