Presumably you act out a weighted balance of the voting power of possible human preferences extrapolated over different possible environments which they might create for themselves.
If a person could label each preference system “evolutionary” or “organismal”, meaning which value they preferred, then you could use that to help you extrapolate their values into novel environments.
The problem is that the person is reasoning only over the propositional part of their values. They don’t know what their values are; they know only the contribution from the propositional part. That’s one of the main points of my post. The values they come up with will not always be the values they actually implement.
If you define a person’s values as being what they believe their values are, then, sure, most of what I posted will not be a problem. I think you’re missing the point of the post, and are using the geometry-based definition of identity.
If you can’t say whether the right value to choose in each case is evolutionary or organismal, then extrapolating into future environments isn’t going to help. You can’t gain information to make a decision in your current environment by hypothesizing an extension to your environment, making observations in that imagined environment, and using them to refine your current-environment estimates. That’s like trying to refine your estimate of an asteroid’s current position by simulating its movement into the future, and then tracking backwards along that projected trajectory to the present. It’s trying to get information for free. You can’t do that.
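The asteroid point can be made concrete with a toy sketch (my own illustration, not from the post): if the forward simulation is deterministic, running it forward and then tracking back along the same projected trajectory is just the identity map, so the "refined" present-time estimate is exactly the estimate you started with.

```python
# Toy illustration: deterministically simulating a state estimate forward
# and then tracking backwards along that projected trajectory yields
# exactly the estimate you started with. No new information is created.

def propagate(position, velocity, dt):
    """One step of deterministic linear motion."""
    return position + velocity * dt

# Current (uncertain) estimate of an asteroid's position, in arbitrary units.
estimate = 100.0
velocity = 3.0
dt = 0.5
steps = 20

# Simulate forward along the projected trajectory...
future = estimate
for _ in range(steps):
    future = propagate(future, velocity, dt)

# ...then track backwards along that same trajectory to the present.
recovered = future
for _ in range(steps):
    recovered = propagate(recovered, -velocity, dt)

# The round trip is the identity: the "refined" estimate is the old one.
print(recovered == estimate)  # → True
```

Any apparent refinement would have to come from new observations made in the imagined future, but those observations were themselves generated from the current estimate, so they carry nothing the current estimate did not already contain.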
(I think what I said under “Fuzzy values and fancy math don’t help” is also relevant.)