Maybe instead you should just say “my utility function has a component which assigns negative value to violating agency of other people”
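Concretely, such a utility function might be sketched like this (my own illustration, not from the discussion; the names, payoff numbers, and penalty weight are all invented):

```python
# Hypothetical sketch of a utility function with an explicit component that
# assigns negative value to violating other people's agency. All names and
# numbers here are invented for illustration.

def utility(paperclips_made: float, agency_violations: int,
            agency_weight: float = 10.0) -> float:
    """Total utility: value from paperclips minus a penalty per violation."""
    return paperclips_made - agency_weight * agency_violations

# Framed this way, "respecting agency" is just one more term in the sum,
# so a large enough gain elsewhere can outweigh a violation.
print(utility(5, 0))    # 5.0
print(utility(25, 1))   # 15.0 -- the violation is outweighed by the paperclips
```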
Let’s say I value paper clips, and you are destroying a bunch of paper clips that you yourself created. What I see is that you are destroying value, so I refuse to cooperate with you in return. But you are not infringing on my agency, so I don’t infringe on yours. That is an entirely separate concern, unrelated to your destruction of value. So merely saying “I value agency” hides that important distinction.
I’m concerned that you are mixing two different things. One thing is that I might hold “not violating other people’s agency” as a terminal value (and I do believe that I and many other people have such a value). This wouldn’t apply to a paperclip maximizer. The other is a game-theoretic phenomenon in which I (either causally or acausally) agree to cooperate with another agent. This would apply to any agent with a “sufficiently good” decision theory. I wouldn’t call it “moral”, I’d just call it “bargaining”. The last point is just a matter of terminology, but the distinction between the two scenarios is a matter of principle.
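To make the second phenomenon concrete, here is a toy sketch of my own (not from the thread): a pure paperclip maximizer with no agency term in its utility can still cooperate in a one-shot Prisoner’s Dilemma, if its decision rule is the crude “cooperate exactly when the opponent runs my own source code” stand-in for a “sufficiently good” decision theory.

```python
import inspect

# Toy model: a paperclip maximizer whose utility contains no term for anyone
# else's agency. It cooperates only when the opponent is an exact copy of
# itself -- a crude stand-in for cooperation via a "sufficiently good"
# (possibly acausal) decision theory, not a moral concern for the other agent.

PAYOFFS = {  # (my paperclips, their paperclips)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def clippy(my_source: str, opponent_source: str) -> str:
    """Defect by default; cooperate only against an exact copy of itself."""
    return "C" if my_source == opponent_source else "D"

src = inspect.getsource(clippy)
outcome = (clippy(src, src), clippy(src, src))   # two identical copies meet
print(outcome, PAYOFFS[outcome])                 # ('C', 'C') (3, 3)
print(clippy(src, "some other agent's code"))    # 'D' against anyone else
```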
Although I’ve come to expect this result, it still baffles me.
> I’m concerned that you are mixing two different things.
Those two things would be value ethics and agency ethics, and I’m the one trying to hold them apart while you are conflating them.
> I wouldn’t call it “moral”, I’d just call it “bargaining”.
But we’re not bargaining. This works even if we never meet. If agency is just another terminal value, you can trade it away for whatever else you value, and in doing so you fail to make the distinction I’m trying to show. Only because agency is not just a terminal value can I bring in a game-theoretic consideration that goes beyond the mere comparison of values.
Agency thus becomes a set of guidelines that we use to judge right from wrong outside of mere value calculations. How is that not what we call ‘morality’?
> but the distinction between the two scenarios is a matter of principle.
And that would be exactly my point.
> But we’re not bargaining. This works even if we never meet.
Yeah, which would make it acausal trade. It’s still bargaining in the game-theoretic sense: the agents have a “sufficiently advanced” decision theory that allows them to reach a Pareto-optimal outcome (e.g. the Nash bargaining solution) rather than merely a Nash equilibrium, even acausally. It has nothing to do with “respecting agency”.
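For concreteness, a sketch of my own of the contrast being drawn here (the payoff matrix is the standard one-shot Prisoner’s Dilemma, and restricting the bargaining step to pure outcomes is a simplification): the Nash equilibrium the agents land on without coordination versus the Pareto-optimal Nash bargaining solution they can reach if they coordinate, causally or acausally.

```python
from itertools import product

ACTIONS = ["C", "D"]
PAYOFFS = {  # (row player's utility, column player's utility)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def is_nash_equilibrium(a1: str, a2: str) -> bool:
    """Neither player can gain by unilaterally switching their own action."""
    u1, u2 = PAYOFFS[(a1, a2)]
    return (u1 == max(PAYOFFS[(x, a2)][0] for x in ACTIONS)
            and u2 == max(PAYOFFS[(a1, y)][1] for y in ACTIONS))

equilibria = [o for o in product(ACTIONS, repeat=2) if is_nash_equilibrium(*o)]
print("Nash equilibrium:", equilibria, [PAYOFFS[o] for o in equilibria])
# -> [('D', 'D')] with payoffs (1, 1)

# Nash bargaining, restricted to pure outcomes for simplicity: take the
# equilibrium payoff as the disagreement point d, then pick the outcome that
# maximizes the product of both players' gains over d.
d1, d2 = PAYOFFS[equilibria[0]]
feasible = [o for o in PAYOFFS if PAYOFFS[o][0] >= d1 and PAYOFFS[o][1] >= d2]
bargain = max(feasible, key=lambda o: (PAYOFFS[o][0] - d1) * (PAYOFFS[o][1] - d2))
print("Bargaining outcome:", bargain, PAYOFFS[bargain])
# -> ('C', 'C') with payoffs (3, 3): Pareto-optimal, unlike the equilibrium
```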