meta: thanks for your comment; no expectation for you to read this comment; it doesn’t even really respond to your comments, just some thoughts that came after reading it; see last paragraph for an answer to your question | quality: didn’t spend much time formatting my thoughts
I use “moral trade” for trades driven by non-egoist preferences. Ordinary egoist trade is the trivial case and the most prevalent one: we trade resources because I care more about myself and you care more about yourself, and we both want something personally out of the trade.
Two people who differ only in that one adopts Bentham’s utilitarianism and the other adopts Mill’s might want to trade. One values the existence of a human relative to the existence of a pig more than the other does. So one might trade their diet (going vegan) for the other’s donation to poverty alleviation.
Two people could have the same values, but one thinks there’s a religious afterlife and the other doesn’t, because they processed the evidence differently. Someone could propose the following trade: the atheist will pray all their life (the reward massively outweighs the cost from the theist’s perspective), and in exchange, the theist will sign up for cryonics (the reward massively outweighs the cost from the atheist’s perspective). Hmm, actually, writing out this example, it now seems to make sense to me to trade. Assuming both people are pure utilitarians (and there’s no opportunity cost), they would both, in expectation under their respective models of the world, gain a much larger reward than the cost. I guess this could also be called moral trade, but here the difference in expected value comes from different models of the world instead of different values.
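To make that concrete, here’s a minimal sketch of the expected-value calculation. All the numbers (probabilities, rewards, costs) are made up purely for illustration, and I’m assuming both parties count costs and benefits on the same scale:

```python
# Minimal sketch of the prayer/cryonics trade. All numbers are made up for
# illustration. Both parties are assumed to be pure utilitarians: each cares
# about the other's outcomes, but evaluates probabilities under their own model.

# Probability each party assigns to the relevant mechanism working.
theist_model  = {"prayer_saves_soul": 0.9, "cryonics_revives": 0.01}
atheist_model = {"prayer_saves_soul": 0.0, "cryonics_revives": 0.10}

BIG_REWARD = 1000        # value of a saved soul or a revived person (same scale, for simplicity)
COST_OF_PRAYING = 10     # lifetime inconvenience to the atheist
COST_OF_CRYONICS = 10    # sign-up cost to the theist

def value_of_deal(model):
    """Net expected value of the whole trade (atheist prays, theist signs up
    for cryonics), as evaluated under one party's model of the world."""
    gain = (model["prayer_saves_soul"] * BIG_REWARD    # the atheist's soul maybe saved
            + model["cryonics_revives"] * BIG_REWARD)  # the theist maybe revived
    return gain - COST_OF_PRAYING - COST_OF_CRYONICS

print("Under the theist's model: ", value_of_deal(theist_model))   # 900 + 10 - 20 = 890
print("Under the atheist's model:", value_of_deal(atheist_model))  # 0 + 100 - 20 = 80
```

Both parties evaluate the same package of actions as positive in expectation, even though they disagree about which half of the deal is doing the work.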
So you never actually trade epistemologies or priors (as in, I reprogram my mind if you reprogram yours so that we end up with more similar ways of modelling the world), but you can trade acting as if. (Well, there are also cases where you would actually trade them, but only because it’s morally beneficial to both parties.) It sounds trivial now, but yeah, epistemologies and priors are not necessarily intrinsically moving. I’m not sure what I had in mind exactly yesterday.
Ah, I think I meant: let’s assume I have Model 1 and you have Model 2. Model 1 evaluates Model 2 to be 50% wrong and vice versa, and both assume they themselves are 95% right. Let’s also assume there’s a third model (say, an average of the two) that is 94% right according to both. Averaged across both evaluators, the third model looks better. But that obviously doesn’t mean it’s optimal from either agent’s perspective to accept this modification to their own model, since each already rates their own model at 95%.
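Spelling out the arithmetic with a small sketch (same numbers as above; I’m taking “average” to mean averaging the accuracy scores, and the 94% figure is just the stipulated one):

```python
# Accuracy each agent assigns to each candidate model, using the numbers above.
# "Model 3" stands for the averaged/compromise model that both rate at 94%.
scores = {
    "Model 1": {"Model 1": 0.95, "Model 2": 0.50, "Model 3": 0.94},
    "Model 2": {"Model 1": 0.50, "Model 2": 0.95, "Model 3": 0.94},
}

for judge, judged in scores.items():
    own, third = judged[judge], judged["Model 3"]
    print(f"{judge}'s view: own model {own:.0%}, compromise model {third:.0%} "
          f"-> switching looks like a {own - third:.1%} loss")

# Jointly, Model 3 looks best: both rate it 94%, whereas either original model
# averages (0.95 + 0.50) / 2 = 72.5% across the two judges. But each agent
# individually expects to lose accuracy by adopting it, so neither should accept
# the modification on epistemic grounds alone.
```

“Better on average” and “better from each party’s own perspective” come apart here, which is why the modification wouldn’t be accepted without some further trade.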