My thoughts on the following are rather disorganized, and I've been meaning to collate them into a post for quite some time, but here goes:
Discussions of morality and ethics in the LW-sphere overwhelmingly tend to short-circuit to a naive harm-based consequentialism. When pressed, I think most people will state a far-mode meta-ethical version that acknowledges the other facets of human morality (disgust, purity, fairness, etc.) and wraps them all up into a standardized utilon currency (I believe CEV is meant to do this?). But when it comes to actual policy (EA), there is too much focus on optimizing what we can measure (lives saved in Africa) rather than what would actually satisfy people: the drunken moral philosopher looking under the lamppost for his keys because that's where the light is. I also think there's a more-or-less unstated assumption that considerations other than Harm are low-status.
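To make the "standardized utilon currency" point concrete, here is a toy sketch of what I mean (purely illustrative; the foundation names, weights, and scores are made up, and this is not a claim about how CEV would actually aggregate anything). The failure mode I'm gesturing at is that if only one column is easy to measure, that's the column that gets optimized.

```python
# Toy illustration only: collapsing several moral facets into a single
# "utilon" score via a weighted sum. Foundations, weights, and scores
# are invented for the example.

FOUNDATION_WEIGHTS = {
    "harm": 0.40,
    "fairness": 0.25,
    "purity": 0.15,
    "disgust": 0.10,
    "loyalty": 0.10,
}

def utilons(outcome_scores: dict) -> float:
    """Collapse per-foundation scores (each in [0, 1]) into one number."""
    return sum(FOUNDATION_WEIGHTS[f] * outcome_scores.get(f, 0.0)
               for f in FOUNDATION_WEIGHTS)

# If only "harm" is easy to measure, an optimizer that can only see that
# column will maximize it and ignore everything else.
intervention_a = {"harm": 0.9}                      # narrow but measurable
intervention_b = {"harm": 0.6, "fairness": 0.8,
                  "purity": 0.7, "loyalty": 0.5}    # broader, harder to measure

print(utilons(intervention_a))  # 0.36
print(utilons(intervention_b))  # 0.595
```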
Do you have any thoughts on how to do EA on the other aspects of morality? I think about this a fair bit, but run into the same problem you mentioned. I have had a few ideas but do not wish to prime you. Feel free to PM me.
Ah, yes. The standard problem with measurement-based incentives: you start optimizing for what's easy to measure.