On first-order effects, it seems that your preference rankings are as follows:
1) You have the widget, the commune has $80, your total satisfaction is $30+80x.
2a) You have nothing, the commune has $100, your total satisfaction is $100x.
2b) You have $100, the commune has nothing, your total satisfaction is $100.
3) You have the widget, a monopoly you don’t value has $80. Your total satisfaction is $30+80y.
By changing x and y, we represent your altruism to the other parties in the situation; if x is greater than 1, then you would rather give the commune money than have it yourself, but if x is above 1.5, you'd rather just give the commune the full $100 than buy the widget from them. For y below 7/8, you'd rather keep your money than buy the widget from the monopoly. (The x and y I inferred from the question are slightly above 1 and slightly above 0, which suggests the best option is indeed 1.)
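If it helps to see those comparisons concretely, here is a minimal sketch of the first-order arithmetic. The dollar figures and thresholds come from the list above; the option labels and the example values of x and y are just my own bookkeeping.

```python
# Toy version of the first-order calculation above. x is how much you value a
# dollar in the commune's hands, y a dollar in the monopoly's hands; your own
# dollars are valued at 1.

def satisfaction(x, y):
    return {
        "1: widget, commune has $80":    30 + 80 * x,
        "2a: nothing, commune has $100": 100 * x,
        "2b: keep your $100":            100,
        "3: widget, monopoly has $80":   30 + 80 * y,
    }

# Pairwise thresholds implied by the formulas:
#   2a beats 2b  iff 100x > 100       iff x > 1
#   2a beats 1   iff 100x > 30 + 80x  iff x > 1.5
#   3  beats 2b  iff 30 + 80y > 100   iff y > 7/8
x, y = 1.05, 0.05  # "slightly above 1" and "slightly above 0"
scores = satisfaction(x, y)
print(max(scores, key=scores.get))  # -> "1: widget, commune has $80"
```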
---
Why do humans have moral intuitions at all? I claim a major role is to serve as shorthand for higher-order effects. When you see a bike you don't own, you might run the first-order calculations, decide it's worth more to you than to whoever owns it, and conclude that global utility is maximized by you stealing the bike. But a world in which agents reflexively don't steal bikes has other benefits, such that the low-theft equilibrium might have higher global utility than the high-theft equilibrium; and you can't get from the high-theft equilibrium to the low-theft equilibrium by making small Pareto improvements.
And so if you run the numbers, decide you shouldn't be upset that someone stole your bike, and notice moral intuitions rising up anyway, try to figure out what effects those intuitions are trying to have.
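As a toy illustration of the equilibrium point (every number here is invented purely for illustration): a world where theft is rare can have higher total utility than one where it's common, even though each individual theft looks like a first-order gain.

```python
# Invented payoffs: owning a bike is worth 10 to the owner, a stolen bike is
# worth 12 to the thief (the "first-order" case for stealing), but widespread
# theft means fewer bikes get bought and everyone pays for locks and worry.

def global_utility(theft_rate, agents=100):
    bikes_bought = agents * (1 - theft_rate)        # fewer bikes bought when theft is common
    value_to_owners = bikes_bought * 10 * (1 - theft_rate)
    value_to_thieves = bikes_bought * 12 * theft_rate
    overhead = agents * 3 * theft_rate               # locks, insurance, worry
    return value_to_owners + value_to_thieves - overhead

print(global_utility(theft_rate=0.0))  # low-theft equilibrium:  1000.0
print(global_utility(theft_rate=0.5))  # high-theft equilibrium: 400.0

# Each individual theft still nets out positive in isolation (12 > 10), and
# moving from the high-theft world to the low-theft one makes would-be thieves
# worse off, so you can't get there through small Pareto improvements; the
# gains only show up once the whole equilibrium shifts.
```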
---
Why put economic transactions in a separate domain from charitable donations? There are a few related things to disentangle.
First, for you personally, it really doesn’t matter much. If you would rather pay your favorite charity $100 for a t-shirt with their logo on it, even though you normally wouldn’t pay $100 for a t-shirt and could just give them the $100 outright, then do it.
Second, for society as a whole, prices are an information-transmission mechanism, conveying how much caring something requires to produce and how much people care about it being produced. Mucking with this mechanism to divert value flows generally destroys more than it creates, especially since prices can fluctuate freely in response to changing conditions, whereas policies are stickier.
Wait, are you claiming that humans have moral intuitions because it maximizes global utility? Surely moral intuitions have been produced by evolution. Why would evolution select for agents with behaviour that maximizes global utility?
No, I’m claiming that moral intuitions reflect the precomputation of higher-order strategic considerations (of the sort “if I let this person get away with stealing a bike, then I will be globally worse off even though I seem locally better off”).
I agree that you should expect evolution to create agents that maximize inclusive genetic fitness, which is quite different from global utility. But even if one adopts the frame that ‘utilitarian calculus is the standard of correctness,’ one can still use those moral intuitions as valuable cognitive guides, by directing attention towards considerations that might otherwise be missed.
By changing x and y, we represent your altruism to the other parties in the situation; if x is greater than 1, then you would rather give the commune money than have it yourself,
Small correction: you want to buy the widget as long as x > 7/8.
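Spelling out the arithmetic behind that threshold (reading it as buying the widget from the commune versus simply keeping the $100):

$$30 + 80x > 100 \;\iff\; 80x > 70 \;\iff\; x > \tfrac{7}{8}.$$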
You should also almost never expect x>1, because that means you should immediately spend your money on that cause until x becomes 1 or you run out of credit. x=1 means that something is the best marginal way to allocate money that you know of right now.
I think we’re using margins differently. Yes, you shouldn’t expect situations with x>1 to be durable, but you should expect x>1 before every charitable donation that you make. Otherwise you wouldn’t make the donation! And so x=1 is the ‘money in the bank’ valuation, instead of the upper bound.