First I want to make sure we’re splitting off the personal from the aesthetic here. By “the aesthetic,” I mean the moral value from a truly outside perspective—like asking the question “if I got to design the universe, which way would I rather it be?” You don’t anticipate being this person; you just like people from an aesthetic standpoint and want your universe to have some. For this type of preference, you can prefer the universe to be however you’d like (:P), including larger vs. smaller computers.
Second is the personal question: if the person being simulated is me, what would I prefer? I resolved these questions to my own satisfaction in Treating Anthropic Selfish Preferences as an Extension of TDT (https://www.lesswrong.com/posts/gTmWZEu3CcEQ6fLLM/treating-anthropic-selfish-preferences-as-an-extension-of), but I’m not sure how helpful that post actually is for conveying the insight.