The standard forms are obviously super-broken (there are a lot of good reasons why EA partially builds itself as a strong reaction against that; a lot of us cringe at “local is better” charity speak unless it gets tied into “capacity building”).
Could you say more about this? What do you consider “obviously super-broken” about (if I understand you correctly) moralities that are not agent-neutral and equal-consideration? Why does “local is better” make you cringe?
(Meta-note: I spent more time on this than I wanted to. I think if I run into this sort of question again, I’m going to ask clarifying questions about what it was that you in particular wanted; answering broad questions without a clear internal idea of a target audience contributed both to spending too much time on this and to feeling bad about it. Oops.)
I think you’re inferring some things that aren’t there. I’m not claiming an agent-neutral morality. I’m claiming that treating “physical proximity,” in particular, as a major factor of moral worth in and of itself never really made sense to me, and always seemed a bit cringey.
Using physical proximity as a relevant metric in judging the value of alliances? Factoring other metrics of proximity into my personal assessments of moral worth? I do both.
(Although I think using agent-neutral methods to generate Schelling Points for coordination reasons is quite valuable, and at times where that coordination is really important, I tend to weight it extremely heavily.)
When I limit myself to looking at charity and not alliance-formation, all types of proximity-encouraging motives get drowned out by the sheer size of the difference in magnitude-of-need and the drastically-increased buying power of first-world money in parts of the third world. I think that’s a pretty common feeling among EAs. That said, I do apply a stronger level of time- and uncertainty-discounting, but I still ended up being pretty concerned about existential risk.
I think you’re inferring some things that aren’t there. I’m not claiming an agent-neutral morality. I’m claiming that treating “physical proximity,” in particular, as a major factor of moral worth in and of itself never really made sense to me, and always seemed a bit cringey.
I see. It seems to me that the more literally you interpret “physical proximity”, the harder it is to find people who actually consider it “a major factor of moral worth”.
Is your experience different? Do you really find that people think that literal physical proximity matters morally? Not cultural proximity, not geopolitical proximity, not proximity in communication-space or proximity in interaction-space, not even geographical proximity—but quite literal Euclidean distance in spacetime? If so, then I would be very curious to see an example of someone espousing such a view—and even more curious to see an example of someone explicitly defending it!
Whereas if you begin to take the concept less literally (following something like the progression I implied above), then it is increasingly difficult to see why it would be “cringey” to consider it a “major factor” in moral considerations. If you disagree with that, then—my question stands: why?
When I limit myself to looking at charity and not alliance-formation, all types of proximity-encouraging motives get drowned out by the sheer size of the difference in magnitude-of-need and the drastically-increased buying power of first-world money in parts of the third world. I think that’s a pretty common feeling among EAs.
Yes, perhaps that is so, but (as you correctly note), this has to do with proximity as a purely instrumental factor in how to implement your values. It does not do much to address the matter of proximity as a factor in what your values are (that is: who, and what, you value, and how much).
Personally, one of the most damning marks against the physical-distance intuition in particular is its rampant exploitability in the modern world, where distance is so incredibly easily abridged. If someone set up a deal where they extract some money and kidnap 2 far-away people in exchange for letting 1 nearby person go, someone with physical-distance-discounting might keep making this deal, and the only thing the kidnappers would need to use to exploit it is a truck. If a view through a camera is enough to abridge the physical distance, it’s even easier to exploit. I think this premise is played around with in media like The Box, but I’ve also heard of some really awful real-world cases, especially if phone calls or video count as abridging distance (for a lot of people, they seem to). The ease and severity of exploitation is a big part of why, in the modern world, I don’t just call it unintuitive, I call it straight-up broken.
When the going exchange rate between time and physical distance was higher, this intuition might not have been so broken. With the speed of transport where it is now...
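(If it helps, here’s a minimal sketch of the arithmetic behind that exploit. The `moral_weight` function, the decay rate, the distances, and the dollar amounts are all made-up assumptions for illustration, not anyone’s actual moral weights.)

```python
# Minimal sketch of the "kidnap, then drive away" money pump described above.
# The decay function and every number here are made-up assumptions, purely illustrative.

def moral_weight(distance_km, decay=0.01):
    # Hypothetical concern for a person that falls off with physical distance.
    return 1.0 / (1.0 + decay * distance_km)

def accepts_deal(near_km=1, far_km=5_000, fee=100, dollars_per_unit_concern=1_000):
    # The offer: pay `fee` to free 1 nearby person, while 2 people are taken far away.
    gain = moral_weight(near_km) * dollars_per_unit_concern
    loss = 2 * moral_weight(far_km) * dollars_per_unit_concern + fee
    return gain > loss

# Each round, the kidnappers only need a truck: drive last round's victims far away,
# grab someone nearby, and make the same offer again.
offers = 10
accepted = sum(accepts_deal() for _ in range(offers))
print(f"Offers accepted: {accepted} of {offers}; total paid: ${accepted * 100}")
```

The specific numbers don’t matter; the point is just that if the discount is steep enough to flip the agent’s answer, whoever controls the transport can keep collecting.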
Maybe at bottom, it’s also just not very intuitive to me. I find a certain humor in parodies of it, which I’m going to badly approximate here.
“As you move away from me at a constant speed, how rapidly should I morally discount you? Should the discounting be exponential, or logarithmic?”
“A runaway trolley will kill 5 people tied to the train track, unless you pull a switch and redirect it to a nearby track with only 1 person on it. Do you pull the switch?” “Well, that depends. Are the 5 people further away from me?”
I do wonder where that lack-of-intuition comes from… Maybe it’s because, when I imagine things happening to someone nearby and someone far away, the real object I’m interacting with when judging/comparing is the imagination-construct in my head, and if they’re both equally vivid, that collapses all felt physical distance? Who can say.
In my heart of hearts, though… if all else is truly equal, it also does just feel obvious that a person’s physical distance from you really should not affect your sense of their moral worth.
If someone set up a deal where they extract some money and kidnap 2 far-away people in exchange for letting 1 nearby person go, someone with physical-distance-discounting might keep making this deal, and the only thing the kidnappers would need to use to exploit it is a truck. If a view through a camera is enough to abridge the physical distance, it’s even easier to exploit.
I’ve two things to say to this.
First, the moral view that you imply here seems to me to be an awful caricature. As I say in my other comment, I should be very curious to see some real-world examples of people espousing, and defending, this sort of view. To me it seems tremendously implausible, like you’ve terribly misunderstood the views of the people you’re disagreeing with. (Of course, it’s possible I am wrong; see below. However, even if such views exist—after all, it is possible to find examples of people espousing almost any view, no matter how extreme or insane—do you really suggest that they are at all common?!)
Second… any moral argument that must invoke such implausible scenarios as this sort of “repeatedly kidnap and then transport people, forcing the mark to keep paying money over and over, a slave to his own comically, robotically rigid moral views” story in order to make its point is, I think, to be automatically discounted in plausibility. Yes, if things happened in this way, and if someone were to react in this way, that would be terrible, but of course nothing like this could ever take place, for a whole host of reasons. What are the real-world justifications of your view?
… I’ve also heard of some really awful real-world cases, especially if phone calls or video count as abridging distance (for a lot of people, they seem to). The ease and severity of exploitation is a big part of why, in the modern world, I don’t just call it unintuitive, I call it straight-up broken.
Now this is interesting! Could you cite some such cases? I think it would be quite instructive to examine some case studies!
Yeah… to me, this is reading more as “moralizing” and “combative” than as “trying to understand and model my view.” I do not feel like putting more time into hashing this out with you, so I most likely won’t reply.
It has a very… “gotcha” feel to it. Even the curiosity seems to be phrased to be slightly accusatory, which really doesn’t help matters. Maybe we have incompatible conversation styles.
Giving offense wasn’t my intent, by any means! Certainly it’s your right to discontinue the conversation if you find it unproductive. But I find that I’m confused; what was your goal in posting these things publicly, if not to invite discussion?
Do you simply prefer that people not engage with these “shortform feed” entries? (It may be useful to note that in the top-level post, if so. Is there some sort of accepted norm for these things?)
My preference for most of my shortform feed entries is for them to intentionally have a very limited amount of visibility, with most commenting coming from people who are primarily interested in a collaborative/explorative framing. My model of Spiracular (though they are very welcome to correct me) feels similar.
I think I’ve mentioned in the past that it’s good for ideas to start in an early explorative/generative phase and then later move to a more evaluative phase, and shortform feeds for me try to fill the niche of making it as low-cost as possible for me to generate things. Some of these ideas (usually the best ones) tend to then later get made into full posts (or in my case, feature proposals for LessWrong) where I tend to be more welcoming of evaluative frames.
I see. Well, fair enough. Would it be possible to add (or perhaps simply encourage authors to add) some sort of note to this effect to shortform feeds, if only as a reminder?
(As an aside, I don’t think I quite grasp how you’re using the term “visibility” here. With that clause removed, what you’re saying seems straightforward enough, but that part makes me doubt my understanding.)
*nods* I definitely think that when we make shortform feeds more of a first-class feature, we should encourage authors to specify their preferences for comments on their feeds.
I mean visibility pretty straightforwardly in that I often want to intentionally limit the number of people who can see my content because I feel worried about being misunderstood/judged/dragged into uncomfortable interactions.
Happy to discuss any of this further since I think shortform feeds and norms around them are important, but would prefer to do so on a separate post. You’re welcome to start a thread about this over on my own shortform feed.