Suppose that I value a widget at $30. Suppose that the widget costs the widget-manufacturer $20 to produce, but, due to monopoly power on their part, they can charge $100 per widget.
The economic calculus for this problem is as follows: $30 (widget valuation) - $100 (widget price) = -$70 to me; $100 (widget price) - $20 (widget cost) = +$80 to the widget producer; -$70 + $80 = +$10 of total value. Ordinarily this wouldn’t imply that utilitarians are required to spend all their money on widgets, because for a function u($) that converts dollars to utils, u’($)>0 and u″($)<0, and widget producers usually have more $ than widget consumers.
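Here is a minimal sketch of that arithmetic in Python, using only the dollar figures from the example above (the variable names are just illustrative):

```python
# Surplus arithmetic for the widget example above (illustrative figures only).
valuation = 30   # what the widget is worth to me
price = 100      # the monopoly price I would pay
cost = 20        # what the widget costs the producer to make

consumer_surplus = valuation - price                  # 30 - 100 = -70 to me
producer_surplus = price - cost                       # 100 - 20 = +80 to the producer
total_surplus = consumer_surplus + producer_surplus   # -70 + 80 = +10 overall

print(consumer_surplus, producer_surplus, total_surplus)  # -70 80 10
```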
But suppose the widget monopolist is a poor worker commune. The profits go directly to the workers, who on average have less $ than I do. It seems like buying widgets would then be more moral than, say, donating $80 to the same group of poor people ($80 - $80 = $0), because the widget purchase partly compensates me for the donation, and that compensation (a widget worth $30 to me) is greater than what it costs the recipient to produce it ($20).
And yet, I feel even less of a moral pull to buy widgets than I do to donate $80 to GiveDirectly. Is this just an arbitrary, unjustifiable, subconscious desire to shove economic transactions into a separate domain from charitable donations, or is there actually some mistake in the utilitarian logic here? If there isn’t a mistake in the logic, is this something that the Open Philanthropy Project should be looking at?
[Question inspired by a similar question at the end of chapter 7 of Steven Landsburg’s The Armchair Economist]
On first-order effects, it seems that your preference rankings are as follows:
1) You have the widget, the commune has $80, your total satisfaction is $30 + $80x.
2a) You have nothing, the commune has $100, your total satisfaction is $100x.
2b) You have $100, the commune has nothing, your total satisfaction is $100.
3) You have the widget, a monopoly you don’t value has $80, your total satisfaction is $30 + $80y.
By changing x and y, we represent your altruism toward the other parties in the situation: if x is greater than 1, then you would rather the commune have a dollar than have it yourself, but only if x is above 1.5 would you rather just give the money to the commune (option 2a) than buy the widget (option 1). For y below 7/8, you’d rather keep your money than buy the widget from the monopoly. (The x and y I inferred from the question are slightly above 1 and slightly above 0, which suggests the best option is indeed 1.)
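Here is a short sketch of those comparisons in Python, using the option values above; the function and variable names are mine, and the comments just restate the crossover points mentioned in the text:

```python
# Satisfaction of each option for altruism weights x (toward the commune) and
# y (toward the monopoly), using the values from the ranking above.
def satisfactions(x, y):
    return {
        "1: buy the widget from the commune": 30 + 80 * x,
        "2a: give the commune $100": 100 * x,
        "2b: keep your $100": 100,
        "3: buy the widget from a monopoly": 30 + 80 * y,
    }

# With x slightly above 1 and y slightly above 0, as inferred in the parenthetical:
options = satisfactions(x=1.05, y=0.05)
print(max(options, key=options.get))  # option 1 comes out on top

# Crossover points from the paragraph above:
#   2a beats 2b when 100x > 100,       i.e. x > 1
#   2a beats 1  when 100x > 30 + 80x,  i.e. x > 1.5
#   3  beats 2b when 30 + 80y > 100,   i.e. y > 7/8
```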
---
Why do humans have moral intuitions at all? I claim a major role is to serve as a shorthand for higher-order effects. When you see a bike you don’t own, you might run the first-order calculations, decide it’s worth more to you than to whoever owns it, and conclude that global utility is maximized by you stealing the bike. But a world in which agents reflexively don’t steal bikes has other benefits, such that the low-theft equilibrium might have higher global utility than the high-theft equilibrium, and you can’t get from the high-theft equilibrium to the low-theft equilibrium by making small Pareto improvements.
And so if you run the numbers, decide you shouldn’t be upset that someone stole your bike, and notice moral intuitions rising up anyway, try to figure out what effects those intuitions are trying to have.
---
Why put economic transactions in a separate domain from charitable donations? There are a few related things to disentangle.
First, for you personally, it really doesn’t matter much. If you would rather pay your favorite charity $100 for a t-shirt with their logo on it, even though you normally wouldn’t pay $100 for a t-shirt and could just give them the $100, then do it.
Second, for society as a whole, prices are an information-transmission mechanism, conveying how much it costs to produce something and how much people care about having it produced. Mucking with this mechanism to divert value flows generally destroys more than it creates, especially since prices can freely fluctuate in response to changing conditions, whereas policies are stickier.
Wait, are you claiming that humans have moral intuitions because it maximizes global utility? Surely moral intuitions have been produced by evolution. Why would evolution select for agents whose behaviour maximizes global utility?
No, I’m claiming that moral intuitions reflect the precomputation of higher-order strategic considerations (of the sort “if I let this person get away with stealing a bike, then I will be globally worse off even though I seem locally better off”).
I agree that you should expect evolution to create agents that maximize inclusive genetic fitness, which is quite different from global utility. But even if one adopts the frame that ‘utilitarian calculus is the standard of correctness,’ one can still use those moral intuitions as valuable cognitive guides, by directing attention towards considerations that might otherwise be missed.
Small correction: you want to buy the widget as long as x > 7/8 (since $30 + $80x > $100 whenever x > 7/8).
You should also almost never expect x > 1, because that means you should immediately spend your money on that cause until x becomes 1 or you run out of credit. x = 1 means that something is the best marginal way to allocate money that you know of right now.
I think we’re using margins differently. Yes, you shouldn’t expect situations with x > 1 to be durable, but you should expect x > 1 before every charitable donation that you make. Otherwise you wouldn’t make the donation! And so x = 1 is the ‘money in the bank’ valuation, instead of the upper bound.