Larry D’Anna: we have to worry about what other Optimizers want, not just whether they “think correctly”.
I argue that if there exist objective values which are implicit in the structure of our universe, then some significant fraction of possible minds will approximate those objective values.
However, those objective values probably differ quite a lot from most of what most human beings find important in their lives; for example, our obsessions with sex, romance, and child-rearing probably aren’t in there.
Having re-read Eliezer’s work on “the bottom line”, and this piece on optimism, I am re-assessing very carefully what I think those objective values might look like. I am going to try very hard to make sure I don’t simply rationalize those things that I have already decided (or been genetically programmed to think) are valuable.
Is there a good way to overcome a bias—such as the above—that one is acutely aware of?