I’ll respond to your point about me-simulation in the comments of your other post, as you suggested.
Response to your Section II
I’m skeptical that your utility function is reducible to the perception of complexity. In Fake Utility Functions Eliezer writes:
Press [the one constructing the Amazingly Simple Utility Function] on some particular point, like the love a mother has for her children, and they reply “But if the superintelligence wants ‘complexity’, it will see how complicated the parent-child relationship is, and therefore encourage mothers to love their children.” Goodness, where do I start?
Begin with the motivated stopping: A superintelligence actually searching for ways to maximize complexity wouldn’t conveniently stop if it noticed that a parent-child relation was complex. It would ask if anything else was more complex. This is a fake justification; the one trying to argue the imaginary superintelligence into a policy selection, didn’t really arrive at that policy proposal by carrying out a pure search for ways to maximize complexity.
The whole argument is a fake morality. If what you really valued was complexity, then you would be justifying the parental-love drive by pointing to how it increases complexity. If you justify a complexity drive by alleging that it increases parental love, it means that what you really value is the parental love. It’s like giving a prosocial argument in favor of selfishness.
In “You Don’t Get to Know What You’re Fighting For,” Nate Soares writes:
There are facts about what you care about, but you don’t get to know them all. Not by default. Not yet. Humans don’t have that sort of introspective capabilities yet. They don’t have that sort of philosophical sophistication yet. But they do have a massive and well-documented incentive to convince themselves that they care about simple things — which is why it’s a bit suspicious when people go around claiming they know their true preferences.
From here, it looks very unlikely to me that anyone has the ability to pin down exactly what they really care about. Why? Because of where human values came from. Remember that one time that Time tried to build a mind that wanted to eat healthy, and accidentally built a mind that enjoys salt and fat? I jest, of course, and it’s dangerous to anthropomorphize natural selection, but the point stands: our values come from a complex and intricate process tied closely to innumerable coincidences of history.
Thou art Godshatter, whose utility function has a thousand terms, each of which is in itself indispensable. While the utility function of human beings is complex, that doesn’t imply that it reduces to “complexity is valuable.”
I don’t claim that what I’ve just said should convince you that your utility function isn’t “maximize the amount of complexity I perceive”; I’m not in your head, and for all I know it could be. All I intend it to convey is my reasons for being skeptical.
Let me ask you this: Why do you value complexity? And how do you know?
Response to your Section III
Regarding why it’s still worth it to act altruistically: I just want to clarify that the part of my comment that you quoted:
“[...] even in a first-person simulation, the people you were interacting with would be conscious as long as they were within your frame of awareness (otherwise the simulation couldn’t be accurate), it’s just that they would blink out of existence once they left your frame of awareness.”
wasn’t supposed to have anything to do with my argument that it’s still worth it to act altruistically.
Suppose that premise (2) above is true, i.e. you really are the only conscious being in the me-simulation, and even the people in your frame of awareness aren’t conscious while they’re there. Even then, your credence that premise (2) is true shouldn’t be 100%; you should retain at least some minuscule doubt. Suppose you assign a 0.01 credence (i.e. 1%) to the possibility that premise (2) is false, and a 0.99 credence that it’s true. Now suppose you could perform some action that, if premise (2) were false, would produce 1,000,000 utils for people who aren’t you. The expected utility of that action, even though you think premise (2) has a 99% chance of being right, is 0.01 × 1,000,000 = 10,000 utils. So unless you have some other action available that produces more than 10,000 expected utils, you should take that action. The conclusion is the same under different numbers, so long as your credence that others are conscious is nonzero and the stakes if they are conscious are large enough.
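To make the arithmetic explicit, here is a minimal sketch of that expected-utility calculation. The numbers are just the illustrative ones from the paragraph above, and the variable names are mine:

```python
# Toy expected-utility calculation for the argument above.
# All numbers are illustrative placeholders, not claims about what your credences should be.

p_others_conscious = 0.01        # your credence that premise (2) is false
utils_if_conscious = 1_000_000   # utils the act produces for others, if they are conscious
utils_if_not_conscious = 0       # the act produces nothing for others if premise (2) is true

expected_utils = (p_others_conscious * utils_if_conscious
                  + (1 - p_others_conscious) * utils_if_not_conscious)

print(expected_utils)  # 10000.0 -- worth doing unless some rival action beats it in expectation
```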
A further argument
After I posted my original comment, a further argument occurred to me for why it might still be worth acting altruistically. If there are enough me-simulations that you are likely to be in one of them, then it’s also likely that you appear in at least some other me-simulations as a “shadow-person,” to use Bostrom’s term. Since you are simply an instantiation of an algorithm, and the algorithm of you-in-the-me-simulation is nearly the same as the algorithm of you-as-a-shadow-person, Functional Decision Theory and Timeless Decision Theory considerations imply that your choices in the me-simulation fix the choices of you-as-a-shadow-person. And you-as-a-shadow-person’s actions affect the conscious person at the center of that other me-simulation. So your actions here will, acausally, have effects on other conscious beings, and you should act in such a way that the outputs of your near-copies in other me-simulations have positive effects on the conscious inhabitants of those simulations. If this argument isn’t clear, let me know and I can try to rephrase it. It’s also not my True Rejection, so I don’t place too much weight on it.
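If the shape of that argument is unclear, here is a toy sketch. Everything in it is made up for illustration (the number of simulations, the payoff per simulation, the two-option policy); it is only meant to show the claim that fixing the output of your decision procedure here also fixes the output of your near-copies elsewhere:

```python
# Toy model of the acausal argument: one decision procedure, instantiated
# here as "you" and in several other me-simulations as shadow-person copies.
# The simulation count and payoff below are hypothetical placeholders.

def decide(policy: str) -> str:
    # The same algorithm runs in every instantiation, so choosing its output
    # here also determines the output of every near-copy.
    return policy

N_OTHER_ME_SIMULATIONS = 10     # hypothetical number of simulations containing a shadow-you
UTILS_PER_CONSCIOUS_CENTER = 5  # utils an altruistic act gives each simulation's conscious inhabitant

def total_acausal_effect(policy: str) -> int:
    # Each shadow-person copy outputs whatever the shared algorithm outputs.
    outputs = [decide(policy) for _ in range(N_OTHER_ME_SIMULATIONS)]
    return sum(UTILS_PER_CONSCIOUS_CENTER for out in outputs if out == "altruistic")

print(total_acausal_effect("altruistic"))  # 50
print(total_acausal_effect("selfish"))     # 0
```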