I don’t think that’s a viable alternative, given that I don’t believe egoism is certainly right (surely the right way to handle moral uncertainty can’t be to just pick something and “adopt it”?), and besides, I don’t even know how I would adopt egoism if I wanted to:
https://www.lesswrong.com/posts/Nz62ZurRkGPigAxMK/where-do-selfish-values-come-from
https://www.lesswrong.com/posts/c73kPDr8pZGdZSe3q/solving-selfishness-for-udt (which doesn’t really solve the problem despite the title)