All moral arguments are either politicized or have the potential to be.
Your complaint was about moral assumptions rather than moral arguments. I would say the same about moral assumptions as you do about moral arguments, and suggest that therefore calling someone’s moral assumptions “politicized” is not a cogent criticism unless you go further and explain why their politicization is worse than every other assumption’s.
Better moral arguments would involve taking a broader look at the future of humanity, [...]
I think there may be two different issues here that are at risk of getting mixed up. (1) If your aim is to make things better for the world’s poorest people, or to optimize net utility (which at least superficially looks like calling for very similar actions), you need to consider the long as well as the short term, and it might turn out that those goals are best achieved by actions whose short-term consequences look bad for poor people or bad for net utility. (2) You might care more about other things than net utility or the plight of the least fortunate.
Of these, it seems to me that #1 is the one it’s more helpful to discuss (because pure disagreements on values tend not to make for fruitful discussions) and is, at least ostensibly, the focus of most of the actual discussion here—but unlike #2 it isn’t actually a moral argument.
I do, for the avoidance of doubt, agree with #1. And it’s not impossible that putting money into the US stock market does more expected long-term good for the world’s poorest people than giving them money or buying them malaria nets. But the arguments deployed in support of that conclusion in this thread seem to me to be terrible in the same kind of way as the arguments for conventional EA are alleged to be, but with less excuse; and thinly disguised self-interest seems like an awfully plausible explanation for that.
I suppose I should make some attempt to justify my claim that the arguments are terrible, or at least explain it. Here is what I think is the best example.
Both Salemicus (in the OP) and pianoforte611 (a few articles upthread) seem just to tacitly assume that whatever produces the most growth must be best overall, and that this means not transferring any wealth to poorer people whose growth rate is lower. This seems to me exactly parallel to just tacitly assuming that whatever gives the most short-run benefit to the world’s poorest people must be best overall. And I think it’s flatly wrong. Here is a toy model to explain why I think so.
Consider a world made up of two populations, the Rich and the Poor. The sizes of these populations are, let’s say, in the fixed ratio 1:a. At time t they have wealth per capita of u(t), v(t). Utility per person is proportional to log wealth. Wealth grows exponentially: u' = pu, v' = qv. We suppose u(0) > v(0) (the rich are richer than the poor) and p > q (the rich generate more growth than the poor). We discount everyone’s future utility by a factor exp(-rt); same discount rate r for rich and poor. And, finally, the rich give some fraction c of their wealth to the poor, so the actual differential equations are u' = (p-c)u, v' = qv + cu.
So, the solution of these differential equations is a linear combination of exponentials that I won’t bother you with; then the net utility looks like the integral from 0 to infinity of exp(-rt) log(linear combination of exponentials) which, so far as I know, doesn’t have a closed form; so I used Mathematica to do it numerically.
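For completeness, here is the closed form being glossed over, assuming p - c ≠ q (if p - c = q the second exponential just becomes a t·e^{qt} term):

$$
u(t) = u(0)\,e^{(p-c)t}, \qquad v(t) = \bigl(v(0)-A\bigr)e^{qt} + A\,e^{(p-c)t}, \qquad A = \frac{c\,u(0)}{p-c-q},
$$

and the quantity being maximized, per Rich person, is

$$
U(c) \;\propto\; \int_0^\infty e^{-rt}\,\bigl[\log u(t) + a\,\log v(t)\bigr]\,dt .
$$

(I’m taking the v equation exactly as written; strictly, with a ≠ 1 each Poor person would receive cu/a rather than cu, but the worked example below sets a = 1, where it makes no difference.)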
The resulting plot of net utility against donation level c is typically not monotone decreasing; its global maximum is at a small but positive value. For instance, let’s consider the Rich to consist of the US plus Western Europe (population about 720M) and the Poor to consist of sub-Saharan Africa (population about 800M) so crudely take a=1; take u(0)=40000 and v(0)=1700 (rough estimates of GDP per capita; not the same as wealth but it’ll do; ratio of wealth will probably be much larger); take p=0.06 and q=0.03, although in fact I think sub-Saharan Africa is doing better than that lately; and take r=0.02 (which I think is lower than most people’s discount rates; higher discount rates tend to favour more charity).
The resulting curve has its maximum at about c=0.01; about 1% of GDP in the Rich countries should be given to the Poor to maximize long-term net utility. Diddling with the parameters doesn’t change this hugely; over a wide range of values we get optima roughly in the range 0.001 to 0.02.
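In case anyone wants to poke at this without Mathematica, here is a minimal Python sketch of what I take the computation to be; the function and parameter names, and the grid scan over c, are my own choices rather than anything canonical:

```python
import numpy as np
from scipy.integrate import quad

def net_utility(c, a=1.0, u0=40000.0, v0=1700.0, p=0.06, q=0.03, r=0.02):
    """Discounted net utility (per Rich person) at donation level c,
    using the closed-form solution of u' = (p-c)u, v' = qv + cu."""
    A = c * u0 / (p - c - q)  # assumes p - c != q
    def integrand(t):
        u = u0 * np.exp((p - c) * t)
        v = (v0 - A) * np.exp(q * t) + A * np.exp((p - c) * t)
        return np.exp(-r * t) * (np.log(u) + a * np.log(v))
    value, _ = quad(integrand, 0.0, np.inf)
    return value

# Scan donation levels and report the (discretized) optimum.
cs = np.linspace(0.0, 0.025, 251)
utils = [net_utility(c) for c in cs]
print("best c ~", cs[int(np.argmax(utils))])
```

With the numbers above this should land in the same ballpark as the ~1% figure quoted, but run it yourself rather than taking my word for it.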
As long as the discount rate is fairly small, this model will never recommend a value of c much bigger than half the difference in growth rates—because if c is bigger than that, the Poor get richer faster than the Rich do and in the long run they are richer than the Rich :-). (A better model would allow c to vary, and I bet it would end up recommending larger values of c while the Poor are much poorer than the Rich.)
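(The “half the difference” remark falls out of the asymptotics: for 0 < c < p - q, both u and v are eventually dominated by the e^{(p-c)t} term, so

$$
\frac{v(t)}{u(t)} \;\to\; \frac{A}{u(0)} \;=\; \frac{c}{p-c-q},
$$

which exceeds 1 exactly when c > (p - q)/2.)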
I think most of this discussion just boils down to a difference of values. You suggest that donating to the world’s poorest people seems like the way to increase net utility, but this depends on a utility function and moral framework that I am questioning. I have alluded to at least two objections: that this outlook seems too near-mode, and that it assumes people should be weighted the same. I agree with you that getting into a deeper discussion of values would not be fruitful.
Your model is interesting, but it still looks like it weights utility of different people the same, and it doesn’t take into account resulting incentives and externalities.
It’s possible to imagine a value system and geopolitical picture where saving lives in the third world has zero utility, weakly positive utility, or weakly negative utility. If so, then investing in people who are productive at least does something with your money.
Or my suspicions could be wrong, and there could be flow-through effects that I would find compelling. If I had a comprehensive and strong alternative EA approach and clearly superior value system, then I could be more explicit.
I do want to clarify that I don’t consider investing in the stock market to be EA, at least, not very strong EA. I see the stock market more as a way to grow money so that you can do EA later.
it still looks like it weights utility of different people the same
It does, but if you (say) care about the utility of the Rich 100x more than you do about the utility of the Poor, you can compensate for that just by pretending there are 100x more Rich people. (More likely, of course, what you care about more is your own utility and that of people close to you. The effect is fairly similar.)
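In terms of the toy sketch above, that is just a knob to turn: caring 100x more about each Rich person is the same thing, up to an overall scale factor, as shrinking the Poor-to-Rich headcount ratio a by a factor of 100.

```python
# Re-using the hypothetical net_utility sketch from earlier:
# weight each Rich person 100x by scaling the population ratio a down 100x.
utils_weighted = [net_utility(c, a=0.01) for c in cs]
print("best c with 100x Rich weighting ~", cs[int(np.argmax(utils_weighted))])
```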
It’s possible to imagine a value system and geopolitical picture where saving lives in the third world has zero utility [...]
Yes, it’s possible. I don’t (given my own values and epistemic state) see any reason to take that possibility any more seriously than, say, the possibility that increased economic growth in affluent nations is a bad thing overall. (Which it could be, likewise, given some value systems—e.g., ones that strongly disvalue inequality as such—or some geopolitical situations—e.g., ones in which humanity is badly threatened by harms likely to be accelerated by more prosperous rich nations, such as harmful climate change or “unfriendly” AI.)
I don’t consider investing in the stock market to be [...] very strong EA.
OK. So your position differs from the one Salemicus was espousing in the OP; fair enough.