1) Yes. Different between two people.
2) Yes. Your values change based on your current values. One issue I hadn’t brought up is that I believe your moral values are only some of your values, and do not solely determine your choices.
3) I don’t think the algorithms are that close. In line with Jonathan Haidt’s research, I think there are different morality pattern-matching algorithms along the axes of fairness, autonomy, disgust, etc. I would guess that the algorithms for each axis are similar from person to person, but that the weighting between them is less similar, as borne out in Haidt’s work.
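To make that picture concrete, here’s a minimal sketch of a weighted-axes model in the spirit of Haidt’s moral foundations work. All of the axis names, scores, and weights below are invented for illustration; none of this is Haidt’s actual data or model. The point is that the per-axis scorers are shared, while the weights are the part that varies between people.

```python
# Minimal sketch: shared per-axis "pattern match" scorers combined by
# person-specific weights. Axis names, scores, and weights are invented.

def score_axes(situation):
    """Per-axis pattern-match scores for a situation, each in [0, 1]."""
    return {
        "fairness": situation.get("fairness", 0.0),
        "autonomy": situation.get("autonomy", 0.0),
        "disgust":  situation.get("disgust", 0.0),
    }

# The weighting between axes is the part that differs between people.
WEIGHTS = {
    "alice": {"fairness": 0.7, "autonomy": 0.2, "disgust": 0.1},
    "bob":   {"fairness": 0.3, "autonomy": 0.2, "disgust": 0.5},
}

def moral_judgment(person, situation):
    """2-place function: (person, situation) -> scalar judgment."""
    scores = score_axes(situation)
    weights = WEIGHTS[person]
    return sum(weights[axis] * scores[axis] for axis in scores)

case = {"fairness": 0.2, "disgust": 0.9}
print(moral_judgment("alice", case))  # about 0.23: her weight is on fairness
print(moral_judgment("bob", case))    # about 0.51: his weight is on disgust
```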
Also, when you say “unfolding the algorithm”, what does that mean, and which algorithm are you speaking of? My unfolding of my 2-place algorithm?
My largest issue is the implication that our 2-place functions are imperfect images of an ideal 1-place function. In some places that’s the clear implication I take, and in others, it’s not. In his final summary, he explicitly says:

we are dereferencing two different pointers to the same unverbalizable abstract computation.
I think that’s just wrong. We’re using the same label, but dereferencing it to different 2-place functions, mine and yours, and that’s why we’re often talking at cross purposes and don’t make much progress.
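Here’s a minimal sketch of what I mean by the same label dereferencing to different 2-place functions, using partial application. The function morality and both value sets are hypothetical stand-ins, not anyone’s actual theory.

```python
# Minimal sketch: the word "good" as a label that each speaker binds to
# their own curried 2-place function. Values and names are hypothetical.
from functools import partial

def morality(values, act):
    """2-place function: (a person's values, an act) -> approval."""
    return act in values["approved"]

alice_values = {"approved": {"charity", "honesty"}}
bob_values   = {"approved": {"honesty", "loyalty"}}

# Partially applying the first argument yields a different 1-place
# function per speaker, even though both call the result "good".
good_for_alice = partial(morality, alice_values)
good_for_bob   = partial(morality, bob_values)

# Same label, different referents: this is the cross-purposes talk.
print(good_for_alice("charity"))  # True
print(good_for_bob("charity"))    # False
```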
Eliezer says that we end up where we started, arguing in the same way we always have. I think we should be arguing in a new way: no longer trying to bludgeon people into submission to the values of our own 2-place function, mistaking it for a universal 1-place function, but trying to understand the other guy’s 2-place function, and appealing to that.
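As one possible reading of “appealing to the other guy’s 2-place function”, continuing the invented weighted-axes sketch from above: score your candidate framings under the listener’s weights rather than your own, and lead with whichever scores highest for them. Everything here (weights, framings, numbers) is made up for illustration.

```python
# Minimal sketch: evaluate candidate framings under the *listener's*
# weights instead of your own. All weights and framings are invented.
LISTENER_WEIGHTS = {"fairness": 0.2, "autonomy": 0.1, "disgust": 0.7}

# How strongly each framing of the same proposal activates each axis.
FRAMINGS = {
    "it's unfair":      {"fairness": 0.9, "autonomy": 0.1, "disgust": 0.0},
    "it's degrading":   {"fairness": 0.1, "autonomy": 0.0, "disgust": 0.8},
    "it limits choice": {"fairness": 0.0, "autonomy": 0.9, "disgust": 0.0},
}

def appeal_strength(framing, weights):
    """Weighted appeal of one framing under one person's weights."""
    return sum(weights[axis] * framing.get(axis, 0.0) for axis in weights)

best = max(FRAMINGS, key=lambda f: appeal_strength(FRAMINGS[f], LISTENER_WEIGHTS))
print(best)  # "it's degrading": the strongest appeal for this listener
```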
I think I disagree with you, but I’m not sure exactly what you’re saying. It might help to answer these questions three:
1) Taboo “universal”. What do you mean by “universal 1-place function”?
2) In what sense do you think morality is a 2-place function? How is this function applied in decision-making? Does that mean it would be wrong to stop people whose “morality” says torture is “good” from torturing people?
3) In what sense do you think this 2-place function is different between people? (I’m looking for a precise answer in terms of the first and second arguments to the function here.)