I don’t know, I think it’s a pretty decent argument. I agree it sometimes gets overused, but given its assumptions, “you care about people far away as much as people close by,” “there are lots of people far away you can help much more than people close by,” and “here is a situation where you would help someone close by, so you might also want to help the people far away in the same way” form a totally valid logical chain of inference that seems useful to have in discussions on ethics.
Like, you don’t need to take it to an extreme, but it seems locally valid and totally fine to use, even if not all the assumptions that make it locally valid are always fully explicated.
On self-reflection, I just plain don’t care about people far away as much as those near to me. Parts of me think I should, but other parts aren’t swayed. The fact that a lot of the motivating stories for EA don’t address this at all is one of the reasons I don’t listen very closely to EA advice.
I am (somewhat) an altruist. And I strive to be effective at everything I undertake. But I’m not an EA, and I don’t really understand those who are.
Yep, that’s fine. I am not a moral prescriptivist who tells you what you have to care about.
I do think that you are probably going to change your mind on this at some point in the next millennium if we ever get to live that long, and I do have a bunch of arguments that feel relevant, but I don’t think it’s completely implausible you really don’t care.
I do think that not caring about people who are far away is pretty common, and building EA on the assumption that you do care seems fine. Not all clubs and institutions need to be justifiable to everyone.
Right, my gripe with the argument is that the first two assumptions are almost always unstated, and most of the time when people use the argument, they “trick” people into agreeing with assumption one.
(for the record, I think the first premise is true)