Okay. To be fair I have two conflicting intuitions about this. One is that if you upload someone and manage to give him a perfect experience, then filling the universe with computronium in order to run that program again and again with the same input isn’t particularly valuable; in fact I want to say it’s not any more valuable than just having the experience occur once.
The other intuition is that in “normal” situations, people are just different from each other. And if I want to evaluate how good someone’s experience is, it seems condescending to say that it’s less important because someone else already had a similar experience. How similar can such an experience be, in the real world? I mean they are different people.
My intuition here is that the more similar the experiences are to each other, the faster their marginal utility diminishes.
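One toy way to make that concrete (purely illustrative, not a model I'm committed to): suppose one occurrence of the experience is worth u, and the k-th near-identical copy only adds u * d^(k-1), where d is between 0 and 1 and is lower the more similar the copies are. Then n copies are worth u * (1 - d^n)/(1 - d) in total: d = 1 gives the linear case (fully distinct people), and as d goes to 0 the total saturates at u, which matches the computronium intuition above.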
There’s also the fact that the person having the experience likely doesn’t care about whether it’s happened before.
That’s not clear. As I was saying, an agent having an experience has no way of referring to one instantiation of itself separately from the other identical instantiations having the same experience, so presumably the agent cares about all instantiations of itself in the same way, and I don’t see why that way must be linear.
Regarding 10^100 vs 10^102, I recognize that there are a lot of ways in which having such an advanced civilization could be counted as “winning”. For example, there’s a good chance we’ve solved Hilbert’s sixth problem by then, which in my book is a pretty nice achievement. And of course you can only do it once. But does it, or any similar metric that depends on the boolean existence of civilization, really compare to the 10^100 lives that are at stake here? It seems like the answer is no, so tentatively I would take the gamble, though I could imagine being convinced out of it.
To be clear, by “winning”, I was referring to the 10^100 flourishing humans being brought into existence, not glorious intellectual achievements that would be made by this civilization. Those are also nice, but I agree that they are insignificant in comparison.
Anyway, it sounds like you agree with me that under this hypothesis MWI seems to imply LUH, but you think that the hypothesis isn’t satisfied very often.
I didn’t totally agree; I said it was more plausible. Someone could care how their copies are distributed among Everett branches.
Nevertheless, it’s interesting that whether randomness is quantum or not seems to have consequences for our decision theory. Does it mean that we want more of our uncertainty to be quantum, or less?
Yes, I am interested in this. I think the answer to your latter question probably depends on what the uncertainty is about, but I’ll have to think about how it depends on that.
Hmm. I’m not sure that reference works the way you say it does. If an upload points at itself and the experience of pointing is copied, it seems fair to say that you have a bunch of individuals pointing at themselves, not all pointing at each other. Not sure why other forms of reference should be any different. Though if it does work the way you say, maybe it would explain why uploads seem to be different from pigs… unless you think that the pigs can’t refer to themselves except as a group either.