there aren’t just people fulfilling their preferences.
You missed a word in my original. I said that there were agents trying to fulfill their preferences. Now, per my comment at the end of your subthread with Amanojack, I realize that the word “preferences” may be unhelpful. Let me try to taboo it:
There are intelligent agents who assign higher values to some futures than others. I observe them generally making an effort to actualize those futures, but sometimes failing due to various immediate circumstances, which we could call cognitive overrides. By that I mean that these agents have biases and heuristics which lead them to poorly evaluate the consequences of their actions.
Even if a human sleeping on the edge of a cliff knows that the cliff edge is right next to him, he will jolt if startled by noise or movement. He may not want to fall off the cliff, but the jolt reaction occurs before he is able to analyze it. Similarly, under conditions of sufficient hunger, thirst, fear, or pain, the analytical parts of the agent’s mind give way to evolved heuristics.
definition of morality (that doesn’t involve the word preferences) is that set of habits that are most likely to bring long-term happiness to oneself and those around one.
If that’s how you would like to define it, that’s fine. Would you agree then, that the contents of that set of habits is contingent upon what makes you and those around you happy?
He may not want to fall off the cliff, but the jolt reaction occurs before he is able to analyze it.
I suspect it’s a matter of degree rather than either-or. People sleeping on the edges of cliffs are much less likely to jolt when startled than people sleeping on soft beds, but not 0% likely. The interplay between your biases and your reason is highly complex.
Would you agree then, that the contents of that set of habits is contingent upon what makes you and those around you happy?
Yes; absolutely. I suspect that a coherent definition of morality that isn’t contingent on those will have to reference a deity.
We are, near as I can tell, in perfect agreement on the substance of this issue. Aumann would be proud. :)