it still looks like it weights utility of different people the same
It does, but if you (say) care about the utility of the Rich 100x more than you do about the utility of the Poor, you can compensate for that just by pretending there are 100x more Rich people. (More likely, of course, what you care about more is your own utility and that of people close to you. The effect is fairly similar.)
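The equivalence claimed above (weighting one group's utility 100x is the same as pretending that group is 100x larger under an unweighted sum) can be checked with a toy calculation; the population sizes and per-person utilities here are made up purely for illustration:

```python
# Hypothetical per-person utilities and group sizes (illustrative only).
rich_utility, poor_utility = 2.0, 5.0
n_rich, n_poor = 10, 1000

# Weighted aggregate: count each Rich person's utility 100x as much.
weighted = 100 * n_rich * rich_utility + n_poor * poor_utility

# Unweighted aggregate over a pretend population with 100x more Rich people.
pretend = (100 * n_rich) * rich_utility + n_poor * poor_utility

assert weighted == pretend  # the two aggregations coincide
```

The point is just that a weighted utilitarian sum is algebraically identical to an unweighted sum over a suitably duplicated population, so the formal machinery doesn't change.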
It’s possible to imagine a value system and geopolitical picture where saving lives in the third world has zero utility [...]
Yes, it’s possible. I don’t (given my own values and epistemic state) see any reason to take that possibility any more seriously than, say, the possibility that increased economic growth in affluent nations is a bad thing overall. (Which it could be, likewise, given some value systems—e.g., ones that strongly disvalue inequality as such—or some geopolitical situations—e.g., ones in which humanity is badly threatened by harms likely to be accelerated by more prosperous rich nations, such as harmful climate change or “unfriendly” AI.)
I don’t consider investing in the stock market to be [...] very strong EA.
OK. So your position differs from the one Salemicus was espousing in the OP; fair enough.