It’s worth being clear what you mean by “trade” in these cases. Does “moral trade” mean “compromising one part of your moral beliefs in order to support another part”? Or “negotiating with immoral agents to maximize overall moral value”? Or just “recognizing that morals are preferences and all trade is moral trade”?
I think I agree that “trade” is the wrong metaphor for models and priors. There is sharing, and two-way sharing is often called “exchange”, but that’s misleading. For resources, “trade” implies losing something and gaining something else, where the utility of the things to each party differs in a way that leaves both better off. For private epistemology (as opposed to the public sphere, where what you say may differ from what you believe), there’s nothing you give up or trade away in exchange for new updates.
(I aimed for non-”political” examples, which ended up sounding ridiculous.)
Suppose you believed that the color blue is evil, and wanted there to be fewer blue things in the world.
Suppose I believed the same as you, except for me the color is red.
Perhaps we could agree on a moral trade—we will both be against the colors blue and red! Or perhaps something less extreme—you won’t make things red unless they’d look really good red, and I won’t make things blue unless they’d look really good blue. Or we trade in some other manner—if we were neighbors and our houses were blue and red we might paint them different colors (neither red nor blue), or trade them.
Hmm, those examples seem to be just “trade”. Agreeing to do something dispreferred, in exchange for someone else doing something they disprefer and you prefer, when all things under consideration are permitted and optional under the relevant moral strictures.
I aimed for non-”political” examples, which ended up sounding ridiculous.
I wonder if that implies that politics is one of the main areas where the concept applies.
meta: thanks for your comment; no expectation for you to read this one; it doesn’t even really respond to your comment, just some thoughts that came after reading it; see the last paragraph for an answer to your question | quality: didn’t spend much time formatting my thoughts
I use “moral trade” for trades over non-egoist preferences. Trade over egoist preferences is the trivial case, and the most prevalent one: we trade resources because I care more about myself and you care more about yourself, and we each want something personally out of the trade.
Two people who differ only in that one adopts Bentham’s utilitarianism and the other adopts Mill’s might want to trade. One values the existence of a human more, relative to the existence of a pig, than the other does. So one might trade their diet (becoming vegan) for the other’s donation to poverty alleviation.
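As a rough illustration, here is a minimal sketch of that kind of trade with made-up utility weights (all numbers and names below are my own assumptions for illustration, not anything stated above): each party scores the bundle of actions by their own values, and the trade only makes sense if both come out ahead.

```python
# Toy model of a moral trade between two utilitarians with different weights.
# All weights and costs are illustrative assumptions, not real estimates.

# Value (in arbitrary "utils") each party assigns to the two actions.
# Party A weights human welfare relatively more; party B weights animal welfare more.
values_A = {"A_goes_vegan": 1.0, "B_donates_to_poverty": 5.0}
values_B = {"A_goes_vegan": 5.0, "B_donates_to_poverty": 1.0}

# Personal cost of performing one's own action, judged by one's own values.
cost_A = 2.0  # A's cost of going vegan
cost_B = 2.0  # B's cost of donating

def gain(values, own_cost, own_action, other_action):
    """Net gain from the trade, judged by one party's own values."""
    return values[own_action] + values[other_action] - own_cost

gain_A = gain(values_A, cost_A, "A_goes_vegan", "B_donates_to_poverty")
gain_B = gain(values_B, cost_B, "B_donates_to_poverty", "A_goes_vegan")

print(f"A's net gain: {gain_A}")  # 1.0 + 5.0 - 2.0 = 4.0
print(f"B's net gain: {gain_B}")  # 1.0 + 5.0 - 2.0 = 4.0
# Neither action is worth its cost to the person performing it alone,
# but both parties are better off by their own lights once the actions are bundled.
```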
Two people could have the same values, but one thinks there’s a religious afterlife and the other doesn’t, because they processed the evidence differently. Someone could propose the following trade: the atheist will pray all their life (the reward massively outweighs the cost from the theist’s perspective), and in exchange, the theist will sign up for cryonics (the reward massively outweighs the cost from the atheist’s perspective). Hmm, actually, writing out this example, it now seems to make sense to me to trade. Assuming both people are pure utilitarians (and there’s no opportunity cost), they would both, in expectation and according to their respective models of the world, gain a reward much larger than the cost. I guess this could also be called moral trade, but here the difference in expected value comes from different models of the world instead of different values.
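A minimal expected-value sketch of that trade, with probabilities and payoffs that are purely illustrative assumptions: since the two people share values, each of them evaluates the whole bundle (atheist prays, theist signs up) under their own world model, and both expect to come out ahead.

```python
# Toy expected-value check of the prayer-for-cryonics trade.
# All probabilities and payoffs are illustrative assumptions.

def expected_gain(p_afterlife, p_revival, prays, signs_up_cryonics):
    """Expected net value of the bundled trade, evaluated under one party's world model."""
    total = 0.0
    if prays:
        total += p_afterlife * 1000 - 1   # big payoff if the afterlife is real, small cost of praying
    if signs_up_cryonics:
        total += p_revival * 1000 - 1     # big payoff if revival works, small cost of signing up
    return total

# The theist thinks the afterlife is likely and revival is not; the atheist, roughly the reverse.
theist_view = dict(p_afterlife=0.9, p_revival=0.001)
atheist_view = dict(p_afterlife=0.001, p_revival=0.2)

# Under the trade, both actions happen; each party scores the whole bundle by their own model.
print("Theist's expected gain: ", expected_gain(**theist_view, prays=True, signs_up_cryonics=True))   # ~899
print("Atheist's expected gain:", expected_gain(**atheist_view, prays=True, signs_up_cryonics=True))  # ~199
# Both numbers are well above zero, so each party expects to profit from the deal
# even though they disagree about which half of it does the work.
```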
So you never actually trade epistemologies or priors (as in, I reprogram my mind if you reprogram yours so that we have a more similar way of modelling the world), but you can trade acting as if. (Well, there are also cases where you would actually trade them, but only because it’s morally beneficial to both parties.) It sounds trivial now, but yeah, epistemologies and priors are not necessarily intrinsically moving. I’m not sure what I had in mind exactly yesterday.
Ah, I think I meant: let’s assume I have Model 1 and you have Model 2. Model 1 evaluates Model 2 to be 50% wrong and vice versa, and each model rates itself as 95% right. Let’s also assume there’s a third model that is 94% right according to both. If you average across the two evaluators, the third model looks better. But that obviously doesn’t mean it’s optimal from either agent’s own perspective to accept this modification to their model.
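Spelling out the arithmetic with the numbers from that example (a minimal sketch, using only the percentages already stated):

```python
# How each model scores under each evaluator, using the percentages from the example.
# scores[evaluator][candidate] = how "right" the evaluator judges that candidate model to be.
scores = {
    "model_1": {"model_1": 0.95, "model_2": 0.50, "compromise": 0.94},
    "model_2": {"model_1": 0.50, "model_2": 0.95, "compromise": 0.94},
}

for candidate in ["model_1", "model_2", "compromise"]:
    avg = (scores["model_1"][candidate] + scores["model_2"][candidate]) / 2
    print(f"{candidate}: average score across both evaluators = {avg:.3f}")
# model_1: 0.725, model_2: 0.725, compromise: 0.940
# The compromise wins on the averaged score, but each agent still rates its own
# model higher (0.95 > 0.94), so neither has a reason by its own lights to switch.
```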