I think you’re inferring some things that aren’t there. I’m not claiming an agent-neutral morality. I’m claiming that “physical proximity,” in particular, being a major factor of moral worth in and of itself never really made sense to me, and always seemed a bit cringey.
Using physical proximity as a relevant metric in judging the value of alliances? Factoring other metrics of proximity into my personal assessments of moral worth? I do both.
(Although I think using agent-neutral methods to generate Schelling points for coordination reasons is quite valuable, and when that coordination is really important, I tend to weight it extremely heavily.)
When I limit myself to looking at charity and not alliance-formation, all types of proximity-encouraging motives get drowned out by the sheer size of the difference in magnitude-of-need and the drastically-increased buying power of first-world money in parts of the third world. I think that’s a pretty common feeling among EAs. That said, I do apply stronger time- and uncertainty-discounting, but I still ended up pretty concerned about existential risk.
I think you’re inferring some things that aren’t there. I’m not claiming an agent-neutral morality. I’m claiming that “physical proximity,” in particular, being a major factor of moral worth in and of itself never really made sense to me, and always seemed a bit cringey.
I see. It seems to me that the more literally you interpret “physical proximity”, the less likely you are to find people who consider it “a major factor of moral worth”.
Is your experience different? Do you really find that people think that literal physical proximity matters morally? Not cultural proximity, not geopolitical proximity, not proximity in communication-space or proximity in interaction-space, not even geographical proximity—but quite literal Euclidean distance in spacetime? If so, then I would be very curious to see an example of someone espousing such a view—and even more curious to see an example of someone explicitly defending it!
Whereas if you begin to take the concept less literally (following something like the progression I implied above), then it becomes increasingly difficult to see why it would be “cringey” to consider it a “major factor” in moral considerations. If you disagree with that, then my question stands: why?
When I limit myself to looking at charity and not alliance-formation, all types of proximity-encouraging motives get drowned out by the sheer size of the difference in magnitude-of-need and the drastically-increased buying power of first-world money in parts of the third world. I think that’s a pretty common feeling among EAs.
Yes, perhaps that is so, but (as you correctly note) this has to do with proximity as a purely instrumental factor in how you implement your values. It does not do much to address the matter of proximity as a factor in what your values are (that is: who, and what, you value, and how much).