I would spend one day’s hard labor (8-12 hours) to create one copy of me, just because I’m uncertain enough about how the multiverse works that having an extra copy would be vaguely reassuring. I might do another couple of hours on another day for copy #3. After that I think I’m done.
I’m interested, but suspicious of fraud—how do I know the copy really exists?
Also, it seems that, as posed, my copies will live in identical universes and have identical futures as well as identical present states; that is, I'm making an exact copy of everyone and everything else as well. If that's the offer, then I'd need more information about the implications of universe cloning. If there are none, then the question seems like nonsense to me.
I was only initially interested at the thought of my copies diverging, even without interaction (I suppose MWI implies this is what goes on behind the scenes all the time).
If the other universe(s) are simulated inside our own, then there may be relevant differences between the simulating universe and the simulated ones.
In particular, how do we create universes identical to the 'master copy'? The easiest way is to observe our universe and run the simulations a second behind, reproducing whatever we observe. That would mean decisions in our universe control events in the simulated worlds, so they have different weights under some decision theories.
I assumed we couldn't observe our copies, because if we could, then our copies would be observing their copies too. In other words, somebody's experience of observing a copy would have to be fake: just a view of their present reality, not of a distinct copy.
This all follows from the setup, where there can be no difference between a copy (+ its environment) and the original. It’s hard to think about what value that has.
If you’re uncertain about how the universe works, why do you think that creating a clone is more likely to help you than to harm you?
I assume Mass_Driver is uncertain among certain specifiable classes of "ways the multiverse could work" (with some probability left over for "none of the above"), and that in the majority of the classified hypotheses, having a copy either helps you or doesn't hurt.
Thus on balance, they should expect positive expected value, even considering that some of the “none of the above” possibilities might be harmful to copying.
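The expected-value reasoning above can be made concrete with a toy calculation. All of the hypothesis classes, probabilities, and utilities below are made-up numbers chosen only to illustrate the shape of the argument; nothing in the thread specifies them.

```python
# Toy sketch of expected value of copying, summed over hypothesis
# classes. Every number here is an illustrative assumption.

# Map: hypothesis class -> (probability, utility of making a copy).
hypotheses = {
    "copies add anthropic weight": (0.3, +1.0),   # copying helps
    "copies are inert":            (0.5,  0.0),   # no effect
    "none of the above":           (0.2, -0.2),   # small assumed downside
}

# Expected value of copying is the probability-weighted sum of utilities.
expected_value = sum(p * u for p, u in hypotheses.values())
print(expected_value)  # 0.3*1.0 + 0.5*0.0 + 0.2*(-0.2) = 0.26
```

The point of the sketch is that as long as the "helps or is neutral" classes carry most of the probability mass, even a moderately harmful "none of the above" residue leaves the total positive.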
I understand that that’s what Mass_Driver is saying. I’m asking, why think that?
Because scenarios where having an extra copy hurts seem… engineered, somehow. Short of having a deity or Dark Lord of the Matrix punish those with so much hubris as to copy themselves, I have a hard time imagining how it could hurt, while I can easily think of simple rules for anthropic probabilities in the multiverse under which it would (1) help or (2) have no effect.
I realize that the availability heuristic is not something in which we should repose much confidence on such problems (thus the probability mass I still assign to “none of the above”), but it does seem to be better than assuming a maxentropy prior on the consequences of all novel actions.
I think, in general, the LW community often errs by placing too much weight on a maxentropy prior as opposed to letting heuristics or traditions have at least some input. Still, it’s probably an overcorrection that comes in handy sometimes; the rest of the world massively overvalues heuristics and tradition, so there are whole areas of possibility-space that get massively underexplored, and LW may as well spend most of its time in those areas.
You could be right about the LW tendency to err… but this thread isn't the place where it springs to mind as a possible problem! I am almost certain that neither the EEA nor our current circumstances are such that heuristics and tradition are likely to give useful decisions about clone trenches.
Well, short of having a deity reward those who copy themselves with extra afterlife, I’m having difficulty imagining how creating non-interacting identical copies could help, either.
The problem with the availability heuristic here isn’t so much that it’s not a formal logical proof. It’s that it fails to convince me, because I don’t happen to have the same intuition about it, which is why we’re having this conversation in the first place.
I don’t see how you could assign positive utility to truly novel actions without being able to say something about their anticipated consequences. But non-interacting copies are pretty much specified to have no consequences.
Well, in my understanding of the mathematical universe, this sort of copying could be used to change anthropic probabilities without the downsides of quantum suicide. So there’s that.
Robin Hanson probably has his own justification for lots of non-interacting copies (assuming that was the setup presented to him, as mentioned in the OP), and I'd be interested to hear that as well.