I think most of this discussion just boils down to a difference of values. You suggest that donating to the world’s poorest people seems like the way to increase net utility, but this depends on a utility function and moral framework that I am questioning. I have alluded to at least two objections: that this outlook seems too near-mode, and that it assumes people should be weighted the same. I agree with you that getting into a deeper discussion of values would not be fruitful.
Your model is interesting, but it still looks like it weights the utility of different people the same, and it doesn’t take into account the resulting incentives and externalities.
It’s possible to imagine a value system and geopolitical picture where saving lives in the third world has zero utility, weakly positive utility, or weakly negative utility. If so, then investing in people who are productive at least does something with your money.
Or my suspicions could be wrong, and there could be flow-through effects that I would find compelling. If I had a comprehensive, well-developed alternative EA approach and a clearly superior value system, I could be more explicit.
I do want to clarify that I don’t consider investing in the stock market to be EA, at least, not very strong EA. I see the stock market more as a way to grow money so that you can do EA later.
it still looks like it weights the utility of different people the same
It does, but if you (say) care about the utility of the Rich 100x more than you do about the utility of the Poor, you can compensate for that just by pretending there are 100x more Rich people. (More likely, of course, what you care about more is your own utility and that of people close to you. The effect is fairly similar.)
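The equivalence claimed here is a simple arithmetic fact about utilitarian sums: multiplying one group's utilities by a weight gives the same total as duplicating that group's members by the same factor. A minimal sketch, with illustrative numbers that are my own assumptions rather than anything from the discussion:

```python
# Sketch: for a utilitarian sum, weighting one group's utility 100x
# is equivalent to pretending that group has 100x as many members.
# All utility values below are made-up illustrations.

rich_utilities = [5.0, 7.0]       # per-person utility gains for the "Rich"
poor_utilities = [3.0, 4.0, 6.0]  # per-person utility gains for the "Poor"
weight = 100                      # care 100x more about the Rich

# Framing 1: weight the Rich utilities in the sum.
weighted_total = weight * sum(rich_utilities) + sum(poor_utilities)

# Framing 2: duplicate each Rich person 100 times, then sum unweighted.
duplicated_total = sum(rich_utilities * weight) + sum(poor_utilities)

assert weighted_total == duplicated_total  # the two framings agree
print(weighted_total)  # 1213.0
```

So any fixed per-group weighting can be absorbed into the population counts, which is why the model's equal weighting is not, by itself, a restriction.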
It’s possible to imagine a value system and geopolitical picture where saving lives in the third world has zero utility [...]
Yes, it’s possible. I don’t (given my own values and epistemic state) see any reason to take that possibility any more seriously than, say, the possibility that increased economic growth in affluent nations is a bad thing overall. (Which it could be, likewise, given some value systems—e.g., ones that strongly disvalue inequality as such—or some geopolitical situations—e.g., ones in which humanity is badly threatened by harms likely to be accelerated by more prosperous rich nations, such as harmful climate change or “unfriendly” AI.)
I don’t consider investing in the stock market to be [...] very strong EA.
OK. So your position differs from the one Salemicus was espousing in the OP; fair enough.