My primary moral is to resist the temptation to generalize over all of mind design space.
If we focus on the bounded subspace of mind design space containing all minds whose makeup can be specified in a trillion bits or fewer, then every universal generalization you make has two to the trillionth power chances of being falsified.
Conversely, every existential generalization—“there exists at least one mind such that X”—has two to the trillionth power chances of being true.
So you want to resist the temptation to say either that all minds do something, or that no minds do something.
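The counting behind this argument can be sketched directly. A minimal illustration, assuming we treat each distinct bit string of length up to n as a candidate mind design (the function name `designs_up_to` is mine, not from the post):

```python
def designs_up_to(n_bits: int) -> int:
    # Number of distinct bit strings of length 1..n_bits, i.e. an upper
    # bound on distinguishable designs specifiable in n_bits or fewer.
    # Geometric sum: 2 + 4 + ... + 2**n_bits = 2**(n_bits + 1) - 2.
    return 2 ** (n_bits + 1) - 2

# Small n for illustration; the essay's n is a trillion, so a universal
# claim over that subspace faces on the order of 2**(10**12) potential
# counterexamples.
print(designs_up_to(3))  # 2 + 4 + 8 = 14
```

Even at toy sizes the count doubles with every added bit, which is why universal claims over the trillion-bit subspace are so fragile and existential claims so cheap.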
There are states of the world you would consider good, so the utility functions that aim for those states are good by your lights. There are also utility functions that judge X bad and Y good to exactly the same extent that you do.
https://www.lesswrong.com/posts/tnWRXkcDi5Tw9rzXw/the-design-space-of-minds-in-general