Thinking about ethics.
After thinking more about the orthogonality thesis, I’ve become more confident that one must approach ethics in a mind-dependent way. If I am arguing about what is ‘right’ with a paperclip maximizer, there is nothing I can say to convince it to value human preferences instead.
I used to be a staunch moral realist, mainly relying on very strong intuitions against nihilism and then arguing something like “not nihilism → moral realism.” I now reject that implication, and think both that 1) there is no universal, objective morality, and 2) things still matter.
My current approach is to think of “goodness” as whatever CEV-Thomas would judge to be good. Moral uncertainty, then, is uncertainty over what CEV-Thomas thinks. CEV is necessary to get morality out of a human brain, because the brain as it stands is a bundle of contradictory heuristics. Still, my moral intuitions give some bits about goodness. Other people’s moral intuitions also give some bits, because their brains are similar to mine, so I should weight other people’s beliefs in my moral uncertainty.
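To make the “bits about goodness” framing a bit more concrete, here is a toy sketch of what I have in mind (the reliabilities and similarity weights are invented for illustration, not a serious model): treat each person’s intuition as a noisy signal of what CEV-Thomas would say, and pool the signals with weights that discount for how different their brain is from mine.

```python
import math

# Toy model: treat each intuition as a noisy vote on whether CEV-Thomas endorses
# a claim, weighted by how similar that person's brain is to mine.
# All reliabilities and weights below are made-up illustrative numbers.

def pool_intuitions(prior_p, intuitions):
    """Naive weighted log-odds pooling of noisy intuitions into a posterior."""
    log_odds = math.log(prior_p / (1 - prior_p))
    for endorses, reliability, weight in intuitions:
        # reliability: P(this intuition says "yes" | CEV-Thomas endorses the claim)
        evidence = math.log(reliability / (1 - reliability))
        log_odds += weight * (evidence if endorses else -evidence)
    return 1 / (1 + math.exp(-log_odds))

# (endorses?, reliability, similarity weight)
intuitions = [
    (True, 0.7, 1.0),   # my own intuition, full weight
    (True, 0.7, 0.6),   # a friend's intuition, discounted for brain differences
    (False, 0.7, 0.3),  # a stranger's dissent, discounted further
]
print(pool_intuitions(0.5, intuitions))  # ~0.75
```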
Ideally, I should trade with other people so that we both maximize a joint utility function, rather than each of us maximizing our own. In the extreme, this looks like ECL. For most people, though, I’m not sure this trade is necessary, because their intuitions may already be priced into my uncertainty over my CEV.
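A minimal sketch of what “maximizing a joint utility function” means in the simplest case (the options, utilities, and weights are all invented for illustration): both parties commit to the option that maximizes a negotiated weighted sum of their utilities, rather than each picking their own argmax.

```python
# Toy "trade": instead of each agent picking its own favorite option, both agree
# to pick the option maximizing a negotiated weighted sum of their utilities.
# Options and utilities are invented for illustration.

options = {
    "fund_my_cause":    {"me": 10, "you": 1},
    "fund_your_cause":  {"me": 1,  "you": 10},
    "fund_joint_cause": {"me": 7,  "you": 7},
}
weights = {"me": 0.5, "you": 0.5}  # the negotiated split

def joint_value(option):
    return sum(weights[agent] * u for agent, u in options[option].items())

best = max(options, key=joint_value)
print(best)  # fund_joint_cause: beats each of us defecting to our own favorite
```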
I tend not to believe that systems which depend on other agents having legible and consistent utility functions are workable. If you’re thinking in terms of a negotiated joint utility function, you’re going to get gamed: agents that have, or appear to have, extreme EV curves force you to deviate from your optimum more than they deviate from theirs. Think of it as a relative utility monster, and there’s no actual solution to it.
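The same toy setup illustrates the worry (again, all numbers invented): if the joint utility function is built from reported utilities, an agent that reports, or genuinely has, an extreme utility curve drags the joint optimum to its own favorite option, so everyone else is the one who ends up deviating.

```python
# Same toy setup: a joint optimum built from *reported* utilities gets pulled
# toward whoever reports the most extreme stakes. All numbers are invented.

options = {
    "fund_my_cause":    {"me": 10, "you": 1},
    "fund_your_cause":  {"me": 1,  "you": 10},
    "fund_joint_cause": {"me": 7,  "you": 7},
}
weights = {"me": 0.5, "you": 0.5}

def best_option(reported):
    return max(reported, key=lambda o: sum(weights[a] * reported[o][a] for a in weights))

honest = options
exaggerated = {o: {"me": u["me"], "you": u["you"] * 10} for o, u in options.items()}

print(best_option(honest))       # fund_joint_cause
print(best_option(exaggerated))  # fund_your_cause: the exaggerator captures the bargain
```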