The problem is that people are really really good at self-deception, something that often requires a lot of reflection to uncover. Ultimately, the passion vs reason debate comes down to which one has served us the best personally.
I think you have a really good history with following your moral and social intuitions. I’m guessing that, all else equal, following your heart led to better social and personal outcomes than following your head?
If I followed my heart, I’d probably be Twitter-stalking and crying over my college ex-gf and playing video games while unemployed right now. Reflection > gut instinct for many. Actually, whenever my gut instinct has conflicted with reason, overriding it has mostly led to positive outcomes in my social life and career, so I have a high level of contempt for anything intuitionist.
Consequentialism only works if you can predict the consequences. I think many “failures of consequentialist thinking” could be summarized as “these people predicted that doing X will result in Y, and they turned out to be horribly wrong”.
So the question is whether your reason or your emotion is the better predictor of the future. Which probably depends on the type of question asked (emotions will be better for situations similar to those that existed in the ancient jungles, e.g. human relations; reason will be better for situations involving math, e.g. investing), but neither is infallible. Which means we cannot go fully consequentialist, because that would mean being fully overconfident.
I agree with both of you that the question for consequentialists is to determine when and where an act-consequentialist decision procedure (reasoning about consequences), a deontological decision procedure (reasoning about standing duties/rules), or the decision procedure of the virtuous agent (guided by both emotions and reasoning) produces better outcomes.
But you’re missing part of the overall point here: according to many philosophers (including sophisticated consequentialists), there is something wrong/ugly/harmful about relying too much on reasoning (whether about rules or consequences). Someone who needs to reason their way to the conclusion that they should visit their sick friend in order to motivate themselves to go is not as good a friend as the person who just feels worried and goes to visit their friend.
I am certainly not an exemplar of virtue: I regularly struggle with overthinking things. But this is something one can work on. See the last section of my post.