There’s more memetic homogeneity here than I would prefer for such a venture.
Sometimes there are right answers, and smart people will mostly agree. I suspect your perception of “memetic homogeneity” results from your insistence on disagreeing with some obviously (at least obviously after the discussions we’ve had) right answers, e.g. persistence of values as an instrumental value.
What? Someone disagrees with that? But, but… how?
Ask Phil
If I understand what you are talking about, I have expressed disagreement with it a couple of times. My disagreement has to do with the values expressed by a coalition (which will be some kind of bargained composite of the values of the individual members of that coalition).
But then when the membership in that coalition changes, the ‘deal’ must be renegotiated, and the coalition’s values are no longer perfectly persistent—nor should they be.
This is not just a technical quibble. The CEV of mankind is a composite value representing a coalition with a changing membership.
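To make the “bargained composite” point concrete, here is a minimal toy sketch in Python. The numeric value vectors and the plain-average bargaining rule are assumptions for illustration only; nothing in the thread specifies how the composite is actually negotiated.

    # Each member's values are a toy numeric vector; the coalition's composite
    # is modelled as a simple average (an assumption, not a claim about how
    # real bargaining works).
    def coalition_values(members):
        dims = len(next(iter(members.values())))
        return [sum(v[i] for v in members.values()) / len(members)
                for i in range(dims)]

    members = {
        "alice": [1.0, 0.0],   # cares only about the first value dimension
        "bob":   [0.0, 1.0],   # cares only about the second
    }
    print(coalition_values(members))   # [0.5, 0.5]

    # Membership changes, so the 'deal' is renegotiated and the composite shifts.
    members["carol"] = [1.0, 1.0]
    print(coalition_values(members))   # [0.666..., 0.666...]

The point survives any particular bargaining rule: as long as the composite depends on who is at the table, a change in membership changes the composite.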
The case of agents in conflict. Keep your values and be destroyed, or change them and get the world partially optimized for your initial values.
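Putting rough numbers on this case (the figures are invented; the only assumption that matters is that outcomes are scored by the agent’s initial values):

    # Score both options by the agent's *initial* values, since that is the
    # standard the dilemma is posed in. All figures are made up for illustration.
    outcomes_by_initial_values = {
        "keep values and be destroyed": 0.0,   # the victor's world ignores those values
        "change values and survive":    0.4,   # world partially optimized for them
    }
    best = max(outcomes_by_initial_values, key=outcomes_by_initial_values.get)
    print(best)   # 'change values and survive'

By its own initial lights, the agent does better by letting its values change.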
The case of an unknown future. You know the class of worlds you want to be in. What you don’t know yet is that to reach them you must make choices incompatible with your values. And, to make things worse, all choices you can make ultimately lead to worlds you definitely don’t want to be in.
Yes. That is the general class that includes ‘Omega rewards you if you make your decision irrationally’. It applies whenever the specific state of your cognitive representation interacts significantly with the environment by means independent of your behaviour.
No. You don’t need to edit yourself to make unpleasant choices. Whenever you wish you were a different person than who you are, so that you could make a different choice, you just make that choice.
It works for a pure consequentialist, but if one’s values have some deontology in the mix, then your suggestion effectively requires changing one’s values.
And I doubt that an instrumental value that will change terminal values can be called instrumental. An agent that adopts this value (persistence of values) will end up with different terminal values than an agent that does not.
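A minimal sketch of that last point, with invented numbers: two agents start from identical terminal values and face the same offer; the only difference is whether they have adopted persistence of values.

    # Both agents start with the same terminal values. The offer: trade away
    # value "B" entirely for a better shot at value "A". (Numbers are invented.)
    initial_values = {"A": 1.0, "B": 1.0}

    def face_offer(values, persists):
        """Return the agent's terminal values after the offer."""
        values = dict(values)
        if not persists:   # no commitment to value-persistence: take the trade
            values["A"], values["B"] = 2.0, 0.0
        return values

    print(face_offer(initial_values, persists=True))    # {'A': 1.0, 'B': 1.0}
    print(face_offer(initial_values, persists=False))   # {'A': 2.0, 'B': 0.0}

The two agents end with different terminal values, which is exactly why it is hard to call persistence of values a merely instrumental value.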