Utilitarianism is not the only system that becomes problematic if you try to formalize it far enough; the problem is that there is no comprehensive moral system that wouldn’t either run into paradoxical answers or be so vague that you’d need to fill in the missing gaps with intuition anyway.
Agree, I wasn’t trying to imply otherwise.

Also agree, as in, this is how I usually formulate my moral decisions, and it’s basically a pragmatic view of ethics, which I generally agree with.
The moral intuitions in question are just “the kinds where donating to EA charities makes more intuitive sense than not donating”; people often describe these kinds of moral intuitions as “utilitarian”, but few would actually endorse all of the conclusions of purely utilitarian reasoning.
So basically, the idea here is that donating to EA causes actually makes intuitive moral sense to most EA donors? As in, they might partially justify it with one moral system or another, but at the end of the day it seems “intuitively right” to them to do so.

Not sure whether every EA would endorse this description, but it’s how I think of it, yes.
Regarding “intuitive moral sense”, I would add that one’s intuitions can be somewhat shaped by consciously thinking about their implications, noticing inconsistencies and settling on solutions/improvements.
For example, the realisation that I usually care more about people the better I know them made me realise that the only reason I do not care about strangers at all is that I do not know them. As this collided with another intuition that rejects such a reason as arbitrary (I could easily have ended up knowing, and thus caring about, different people, which is evidence that this behaviour of my intuition does not reflect my ‘actual’ preferences), my intuitions updated towards valuing strangers.
I am not sure how strongly other EAs have reshaped their intuitions, but I think that using and accepting quantitative arguments on moral questions requires quite a bit of intuition-reshaping for most people.
No worries, I wasn’t assuming you were a speaker for the EA community here; I just wanted to better understand possible motivations for donating to EA, given my current perspective on ethics. I think the answer you gave outlines one such line of reasoning quite well.