If you’re uncertain about how the universe works, why do you think that creating a clone is more likely to help you than to harm you?
I assume Mass_Driver is uncertain across some specifiable classes of “ways the multiverse could work” (with some probability left over for “none of the above”), and that in the majority of those classified hypotheses, having a copy either helps you or doesn’t hurt.
Thus, on balance, they should expect copying to have positive expected value, even allowing that some of the “none of the above” possibilities might make copying harmful.
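To make the structure explicit (toy numbers of my own, purely for illustration): if the classified hypotheses $H_1, \dots, H_n$ plus a catch-all $H_0$ carve up the uncertainty, and copying is worth $v_i$ under hypothesis $H_i$, the relevant quantity is

$E[v] = \sum_i P(H_i)\, v_i .$

With, say, $P(\text{helps}) = 0.4$ at $v = +1$, $P(\text{no effect}) = 0.5$ at $v = 0$, and the entire residual $P(H_0) = 0.1$ carrying a fairly bad $v = -2$, the sum is still $0.4 - 0.2 = +0.2 > 0$.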
I understand that that’s what Mass_Driver is saying. I’m asking, why think that?
Because scenarios where having an extra copy hurts seem… engineered, somehow. Short of having a deity or Dark Lord of the Matrix punish those with so much hubris as to copy themselves, I have a hard time imagining how it could hurt, while I can easily think of simple rules for anthropic probabilities in the multiverse under which it would (1) help or (2) have no effect.
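To give one such pair of rules (my own toy formalization, nothing canonical): let $w_A$ be the prior weight of world $A$ and $n_A$ the number of my copies in it. Under an observer-counting rule,

$P(\text{I find myself in } A) = \dfrac{n_A\, w_A}{\sum_B n_B\, w_B},$

so adding copies to worlds I like shifts my anticipated experience toward them (the “help” case). Under a rule that splits each $w_A$ evenly across its $n_A$ copies, every copy carries weight $w_A / n_A$, the totals don’t move, and copying does nothing (the “no effect” case). I can’t write down a comparably simple rule under which the extra copy hurts without building the punishment in by hand.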
I realize that the availability heuristic is not something in which we should repose much confidence on such problems (thus the probability mass I still assign to “none of the above”), but it does seem to be better than assuming a maxentropy prior on the consequences of all novel actions.
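(To spell out what that maxentropy prior would presumably say: over any symmetric range of possible consequences it is uniform, so $p(+v) = p(-v)$ for every $v$ and the expected value of a novel action comes out exactly zero; even weak, availability-flavored evidence of asymmetry is enough to move off that.)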
I think, in general, the LW community often errs by placing too much weight on a maxentropy prior as opposed to letting heuristics or tradition have at least some input. Still, it’s probably an overcorrection that comes in handy sometimes; the rest of the world massively overvalues heuristics and tradition, so whole areas of possibility-space go badly underexplored, and LW may as well spend most of its time in those areas.
You could be right about the LW tendency to err… but this thread isn’t the place where it springs to mind as a possible problem! I am almost certain that neither the EEA nor our current circumstances are such that heuristics and tradition are likely to yield useful decisions about clone trenches.
Well, short of having a deity reward those who copy themselves with extra afterlife, I’m having difficulty imagining how creating non-interacting identical copies could help, either.
The problem with the availability heuristic here isn’t so much that it’s not a formal logical proof. It’s that it fails to convince me, because I don’t happen to have the same intuition about it, which is why we’re having this conversation in the first place.
I don’t see how you could assign positive utility to truly novel actions without being able to say something about their anticipated consequences. But non-interacting copies are pretty much specified to have no consequences.
Well, in my understanding of the mathematical universe, this sort of copying could be used to change anthropic probabilities without the downsides of quantum suicide. So there’s that.
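Roughly, and assuming an observer-counting rule for anthropics (e.g., weight worlds by how many of my copies run in them; that rule is itself an assumption, not settled anthropics): with a good branch $G$ and a bad branch $B$ of weights $w_G$ and $w_B$, quantum suicide only gets me to “almost certainly $G$, conditional on still existing” by deleting the observers in $B$. Running $n$ extra copies in $G$ instead gives

$P(\text{I find myself in } G) = \dfrac{n\, w_G}{n\, w_G + w_B},$

which also approaches 1 as $n$ grows, with nobody getting destroyed along the way.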
Robin Hanson probably has his own justification for lots of non-interacting copies (assuming that was the setup presented to him, as mentioned in the OP), and I’d be interested to hear that as well.