Good point, I forgot to even consider the possible divergence in our utility functions; stupid. Revision: If my utility function is sufficiently similar to yours, then I will tell you that you ought to punch babies. If my utility function is sufficiently different from yours, I will try to triangulate and force a compromise in order to maximize my own utility. ETA: I suppose I actually am doing the latter in both cases; it just so happens that we agree on what actions are best to take in the first one.
But clearly the “should” is distinct from the compromise, no? You think that I shouldn’t punch babies but are willing to compromise at some baby-punching.
And I am arguing that your utility function is a kind of belief—a moral belief that motivates action. “Should” statements are statements of, not about, your utility function.
(The of/about distinction is the distinction between “The sky is blue” and “I think the sky is blue”)
I suppose this is true. As long as your action is in conflict with my utility function, I will think that you “shouldn’t” do it.
I agree with that.
The triangulated “should-expression” in my above example is an expression of my utility function, but it is indirect insofar as it’s a calculation made given that your utility function conflicts substantially with mine.
Also, when I was talking about divergence before, I realize I was being sloppy. Our utility functions can “diverge” quite a bit without “conflicting”, and they can “conflict” quite a bit without “diverging”; that is, our algorithms can both be “win the game of tic-tac-toe in front of you”, and thus be exactly the same, but still be in perfect conflict. So sorry about that, that was just sloppy thinking altogether on my part.
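To make that distinction concrete, here is a minimal sketch (purely illustrative, not from the original exchange) in which both players run the exact same utility algorithm, yet are in total conflict because the indexical “I” binds to different agents:

```python
# Hypothetical illustration: identical utility algorithms, perfect conflict.
# Both players score an outcome as "1 if I won the game in front of me, else 0".

def utility(outcome: str, me: str) -> int:
    """The same algorithm for every agent: I care only about my own win."""
    return 1 if outcome == f"{me} wins" else 0

players = ["X", "O"]

for outcome in ["X wins", "O wins"]:
    scores = {p: utility(outcome, me=p) for p in players}
    print(outcome, scores)

# Output:
#   X wins {'X': 1, 'O': 0}
#   O wins {'X': 0, 'O': 1}
# No "divergence" (the function is the same line of code for both players),
# but every outcome good for one is bad for the other: total "conflict".
```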
Then we agree, and just had some terminology problems.