This seems totally wrong. The use of coercive force is an active ingredient in the state feeding the hungry, as it is with other public good provision.
I feel like you’ve come up with an example where people are just barely charitable enough that they support redistribution, but not charitable enough that they would ever give a gift themselves. This is a counterexample to Friedman’s claim, but it’s not obvious that it’s real.
That is, suppose altruism is distributed unevenly among the population; then it will likely be the case that whenever the median voter supports redistribution (food stamps, say), the top quartile (i.e. the people most interested in redistribution) could fund it themselves.
And historically we have many examples of this, with voluntary clubs and organizations serving as the coercion-free method of public good provision. Perhaps no individual person wants to solve hunger on their own, but they are interested in a Kickstarter to solve hunger, or in being a dues-paying member of the Anti-Hunger Society.
(And Against Against Billionaire Philanthropy, while only partially relevant here, is nevertheless an important part of the story when it comes to public good provision and what’s required for it.)
> I feel like you’ve come up with an example where people are just barely charitable enough that they support redistribution, but not charitable enough that they would ever give a gift themselves. This is a counterexample to Friedman’s claim, but it’s not obvious that it’s real.
For consequentialists, the gap between “charitable enough to give” and “charitable enough to support redistribution” seems to be more than a million-fold; if so, I don’t think it warrants that “just barely” modifier.
I’m confused about the ‘million-fold’ claim. I thought that if a noble dialed up their “caring about peasants” by 100x, so that the factor was 1e-4 rather than 1e-6, then the noble would have utility 4 and marginal utility 1e-4 from their income, a peasant would have utility 1 and marginal utility 1 from theirs, and so the noble would be indifferent between holding onto a dollar and giving it away; any increase above 100x (like 101x) then causes some gifts to happen.
(Like, this is where the <1% in your post comes from, right?)
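For concreteness, here is that indifference computation as a small sketch. The parameters (log utility, noble income of 1e4, peasant income of 1, baseline weight of 1e-6 on a peasant) are my guesses at the model from the numbers in this thread, not anything stated explicitly:

```python
import math

# Assumed parameters (my reading of the thread, not confirmed anywhere):
# log utility, noble income 1e4, peasant income 1.
NOBLE_INCOME = 1e4
PEASANT_INCOME = 1.0

def marginal_utility(income):
    # d/dx log(x) = 1/x
    return 1.0 / income

# A noble strictly prefers giving a marginal dollar to a peasant when
# weight * u'(peasant) > u'(noble).
def gives_gift(weight_on_peasant):
    return (weight_on_peasant * marginal_utility(PEASANT_INCOME)
            > marginal_utility(NOBLE_INCOME))

print(gives_gift(1e-6))    # baseline weight: no gift
print(gives_gift(1e-4))    # exactly 100x: indifferent, still no strict gain
print(gives_gift(1.01e-4)) # just above 100x: gifts start
```

So under these guessed numbers, 100x is exactly the indifference point and anything above it produces gifts, matching the 101x claim.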
> I don’t think it warrants that “just barely” modifier.
There are two gaps: between ‘not caring about peasants’ and ‘supporting redistribution’, and between ‘supporting redistribution’ and ‘giving a gift at least once’. I meant ‘just barely’ about the first gap, where they care only 1e-6 about an individual peasant. If they cared 1e-7 about individual peasants, then I don’t think they would support any level of redistribution.
Also, in this model I don’t think nobles have altruism towards each other? Given the low derivative on noble utility compared to peasant utility, this doesn’t matter much for a while; like, if they care equal amounts about themselves, nobles as a whole, and peasants as a whole, then it looks like the optimal tax is something like 27%. If they care about nobles as a whole ten times as much as they care about peasants as a whole, then it looks like redistribution is off the table.
(If they care about individual nobles as much as they care about individual peasants, then it doesn’t shift things much.)
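Here is a sketch of the tax computation behind the "something like 27%" figure, with my guessed parameters (1000 nobles at income 1e4, a million peasants at income 1, log utility, a flat tax on nobles split evenly among peasants); none of these numbers are confirmed upthread, so treat it as an illustration:

```python
import math

# Assumed setup (my reconstruction): 1000 nobles with income 1e4 each,
# 1e6 peasants with income 1 each, log utility, a flat tax t on noble
# income redistributed evenly to the peasants.
N_NOBLES, NOBLE_INCOME = 1000, 1e4
N_PEASANTS, PEASANT_INCOME = 10**6, 1.0

def noble_objective(t, w_self=1.0, w_nobles=1.0, w_peasants=1.0):
    # w_nobles is the total weight on nobles as a whole (every noble is
    # identical, so the noble's own after-tax income stands in for all);
    # w_peasants is the total weight on peasants as a whole.
    transfer = t * N_NOBLES * NOBLE_INCOME / N_PEASANTS
    return ((w_self + w_nobles) * math.log((1 - t) * NOBLE_INCOME)
            + w_peasants * math.log(PEASANT_INCOME + transfer))

def optimal_tax(**weights):
    grid = [i / 10000 for i in range(10000)]  # t in [0, 0.9999]
    return max(grid, key=lambda t: noble_objective(t, **weights))

print(optimal_tax())               # equal weights: ~0.267, i.e. roughly 27%
print(optimal_tax(w_nobles=10.0))  # nobles weighted 10x peasants: 0.0
```

With equal weights on self, nobles as a whole, and peasants as a whole, the optimum lands at 4/15 ≈ 26.7%; with a 10x weight on nobles, the objective is decreasing in t from the start and redistribution is indeed off the table.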
---
Suppose this is more like an actual kingdom, where each peasant is assigned a noble, and nobles only care about the peasants assigned to them, and they care about all of their peasants as much as they care about themselves. Then without the state stepping in, the nobles maximize their utility by giving away almost half of their money to their peasants (where total peasant income and noble income are equalized). That is, even for consequentialists, having some way of preferring people close to you over people far from you can make it easy to tip the scales in favor of charity. And, interestingly, this only relied on the same level of altruism for each noble (i.e. no noble ever has to put their personal satisfaction as less than 50% of their utility function), but got better effects through concentration.
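A sketch of that maximization, using the same guessed numbers (each noble has income 1e4 and 1000 assigned peasants with income 1 each, and puts total weight on his peasants equal to the weight on himself):

```python
import math

# Assumed setup (my guess at the numbers): noble income 1e4, 1000 peasants
# per noble with income 1 each; total weight on own peasants = weight on self,
# so the per-peasant weight is 1/1000.
NOBLE_INCOME, PEASANTS_PER_NOBLE, PEASANT_INCOME = 1e4, 1000, 1.0

def noble_utility(gift):
    per_peasant = gift / PEASANTS_PER_NOBLE
    # 1000 peasants at weight 1/1000 each collapses to one log term.
    return (math.log(NOBLE_INCOME - gift)
            + math.log(PEASANT_INCOME + per_peasant))

best_gift = max(range(0, 10000), key=noble_utility)
print(best_gift)  # 4500: the noble keeps 5500, his peasants' total rises to 5500
```

The optimal gift of 4500 is 45% of the noble's income, at which point his remaining income (5500) exactly equals his peasants' total income (5500), matching the "almost half" and "equalized" claims.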
Of course, there’s still the question of where the public goods are. Public goods on the regional level should be well-provided for by this scheme (where consequentialist peasants would shirk paying for a cathedral, but the consequentialist noble funds it as part of ‘redistribution’), but public goods on the national level won’t be (where consequentialist nobles will shirk paying for the border wall against the White Walkers).
But this puts us in public choice territory, I think; the question is closer to “what mechanisms will lead to the provision of which public goods?”, which is a rather different question than “maximize the sum of these logarithms with varying parameters.”
For the nobles the ratio is only 1000 (= the total number of nobles). In e.g. the modern US the multiples are much higher, since the tax base is much larger. That is, there is a gap of more than a million between the level of altruism at which you would prefer higher taxes and the level at which you would actually give away some money.
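Here is how I would formalize that ratio, again with the guessed parameters (log utility, noble income 1e4, peasant income 1): giving a private dollar requires a weight w on a peasant exceeding u′(noble)/u′(peasant), while supporting a marginal tax effectively splits the cost across every taxpayer, so the threshold drops by a factor of the number of taxpayers:

```python
# Sketch of the threshold gap (my formalization, same guessed numbers).
# With log utility, u'(income) = 1/income.

def gift_threshold(noble_income, peasant_income):
    # Give a private dollar only if w * u'(peasant) > u'(noble).
    return peasant_income / noble_income   # = u'(noble) / u'(peasant)

def tax_threshold(noble_income, peasant_income, n_taxpayers):
    # A marginal tax costs you one dollar but moves n_taxpayers dollars,
    # so the required weight shrinks by that factor.
    return gift_threshold(noble_income, peasant_income) / n_taxpayers

w_gift = gift_threshold(1e4, 1.0)        # 1e-4: start giving privately
w_tax = tax_threshold(1e4, 1.0, 1000)    # 1e-7: start supporting taxes
print(w_gift / w_tax)                    # the gap = the number of taxpayers
```

With 1000 nobles the gap is 1000x; with a tax base the size of the modern US it is in the hundred-million range, which is where the "more than a million" comes from.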
> only 1000 (= the total number of nobles). In e.g. the modern US the multiples are much higher since the tax base is much larger.
Oh, I see; in situations where your altruism is scope-invariant (i.e. you care half about yourself and half about others, regardless of the size of the others), then as you vary the population size centralized coercive redistribution remains basically equally desirable (since it’s just a question of wealth gaps and percents) whereas diffusion of responsibility eats consequentialist private charity (since there’s nothing singling out the people you decide to help).
There are still some ways to voluntarily maintain concentration, like picking increasingly narrow public goods to wholly own. (Carnegie’s “I funded public libraries across America” compared to something like “I funded open access to transcribed ship logs of ocean weather measurements from 1500 to 1800.”) But this is a prestige market instead of an effectiveness market (“I funded a bunch of toilets”), and the more public goods look like wealth redistribution instead of entrepreneurship / project completion, the less attractive this variant becomes. [And even in worlds where it looks more like entrepreneurship / project completion, decentralized funding causes unilateralist / vetting problems.]
> (Like, this is where the <1% in your post comes from, right?)
No, the <1% in the post comes from the other “bad option” (the first being that “They care about themselves >10 million times as much as other people”), namely that people care about themselves <10 million times as much as other people. (Since there are more than a billion people in the world, <10 million times as much as other people is “<1% as much as everyone else in the whole world put together.”)
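The arithmetic, spelled out (I'm using ~7 billion as a round world-population figure; the post only asserts "more than a billion"):

```python
# Hypothetical weights: yourself at 1e7 ("10 million times as much as
# other people"), every other person at weight 1.  With billions of
# other people, their combined weight still dwarfs yours.
self_weight = 10_000_000
others_combined = 7_000_000_000 * 1   # ~7 billion strangers at weight 1 each
ratio = self_weight / others_combined
print(ratio)  # ~0.0014, i.e. <1% as much as everyone else put together
```

Anything over a billion other people makes the 1e7 self-weight fall below the 1% line, which is the claimed equivalence.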