(Insert large amount of regret about not writing “Taking Ideas Seriously” better.)
Anyway, it’s worth quoting Richard Chappell’s comment on my post about virtue ethics-style consequentialism:
It’s worth noting that pretty much every consequentialist since J.S. Mill has stressed the importance of inculcating generally-reliable dispositions / character traits, rather than attempting to explicitly make utility calculations in everyday life. It’s certainly a good recommendation, but it seems misleading to characterize this as in any way at odds with the consequentialist tradition.

But SIAI have stressed making utility calculations in everyday life… especially about charity.
Hm, I wouldn’t consider that ‘in everyday life’. It seems like an expected utility calculation you do once every few months or years, when you’re deciding where you should be giving charity. You would spend that time doing proto-consequentialist calculations anyway, even if you weren’t explicitly calculating expected utility. Wanting to get the most warm fuzzies or status per dollar is typical altruistic behavior.
The difference in Eliezer’s exhortations is that he’s asking you to introspect more and think about whether or not you really want warm fuzzies or actual utilons, after you find out that significant utilons really are at stake. Whether or not you believe those utilons really are at stake at a certain probability becomes a question of fact, not a strain on your moral intuitions.
I had a broader meaning of everyday life, as things everyone might do.
Even taking a literal view of the sentence, burning down fields isn’t an everyday kind of thing.
I was actually thinking of Anna Salamon and her back-of-the-envelope calculations about how worthwhile it is to donate to SIAI when I wrote that comment. I believe she mentions donating to GiveWell as a baseline to compare it with. Saving a human life is fairly significant in utilons by itself. So it was asking me to weigh saving a human life against donating to SIAI. So the symmetric question came to mind. Hence this post.
So it was asking me to weigh saving a human life against donating to SIAI.
You phrase this as a weird dichotomy. It’s more like asking you to weigh saving a life versus saving a lot of lives. Whether or not a lot of lives are actually at stake is an epistemic question, not a moral one.
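The weighing described here, a near-certain small payoff against a low-probability large one, is just an expected-value comparison. A minimal sketch in Python, where every number is a purely illustrative placeholder and none is drawn from any real charity estimate:

```python
# Back-of-the-envelope expected-value comparison for a donation.
# All probabilities and stakes below are hypothetical placeholders.

def expected_lives_saved(p_stakes_real: float, lives_if_real: float) -> float:
    """Expected lives saved = probability the stakes are real * lives at stake.

    Whether the stakes are real is the epistemic question; the
    multiplication itself is the (trivial) moral calculation.
    """
    return p_stakes_real * lives_if_real

# Option A: a well-measured intervention almost surely saves one life.
baseline = expected_lives_saved(p_stakes_real=0.95, lives_if_real=1.0)

# Option B: a speculative cause with a tiny probability of mattering
# but enormous stakes if it does (placeholder numbers).
speculative = expected_lives_saved(p_stakes_real=1e-7, lives_if_real=1e8)

print(baseline)     # roughly 0.95 expected lives
print(speculative)  # roughly 10 expected lives
```

The point of the sketch is only that the disagreement lives in the inputs (`p_stakes_real`, `lives_if_real`), not in the arithmetic.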
(Insert large amount of regret about not writing “Taking Ideas Seriously” better.)
Insert a reminder pointing to your meditation post and your realisation that post hoc beating yourself up about things doesn’t benefit you enough to make it worth doing.