Huh? For evolution, what really matters is whether you actually have lots of surviving grandchildren.
Yeah, as well as whether your siblings have surviving grandchildren, not to mention great-grandchildren—I was trying to be concise. I could’ve just said “inclusive fitness”, but I was trying to avoid jargon—though given the number of computer science analogies in the post, that wasn’t exactly very successful.
The example you gave, of choosing to study or play games, doesn’t strike me as an anthropic question, but rather as an aspirational question.
Right, that was intentional. I was making the argument that a sense of personal identity is necessary for a large part of our normal everyday decision-making, and anthropic questions feel so weird exactly because they’re so different from the normal decision-making that we’re used to.
> Yeah, as well as whether your siblings have surviving grandchildren, not to mention great-grandchildren—I was trying to be concise. I could’ve just said “inclusive fitness”, but I was trying to avoid jargon—though given the number of computer science analogies in the post, that wasn’t exactly very successful.
This isn’t an issue of concision; it’s an issue of whether what matters for evolution is internal or external. The answer to “why don’t most people put themselves in delusion boxes?” appears to be “those that don’t are probably hardwired to not want to, because evolutionary selection acts on the algorithm that generates that decision.” That’s an immensely important point for self-modifying AI design, which would like the drive for realism to have an internal justification and representation.
[edit] To be clearer, it looked to me like that comment, as written, was confusing means and ends. Inclusive fitness is what really matters; when enduring personal identity aids inclusive fitness, we should expect it to be encouraged by evolution, and when enduring personal identity impairs inclusive fitness, we should expect it not to be encouraged.
> I was making the argument that a sense of personal identity is necessary for a large part of our normal everyday decision-making, and anthropic questions feel so weird exactly because they’re so different from the normal decision-making that we’re used to.
I agree that anthropic questions feel weird, and that if we commonly experienced them, we would have adapted to them so that they wouldn’t feel weird from the inside.
My claim is that it doesn’t seem complete to argue “we need a sense of identity to run long-run optimization problems well.” I run optimization programs without a sense of identity just fine: you tell them the objective function, you tell them the decision variables, you tell them the constraints, and then they process until they’ve got an answer. It doesn’t seem to me like you’re claiming the ‘sense of personal identity’ boils down to ‘the set of decision variables and the objective function,’ but I think that’s only as far as your argument goes.
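For instance, a minimal sketch of the kind of thing I mean, in Python with scipy.optimize (the specific objective and constraint here are made up purely for illustration):

```python
# Toy constrained optimization run: an objective, decision variables,
# and constraints, with no "sense of identity" anywhere in the loop.
from scipy.optimize import minimize

# Objective function: minimize (x - 3)^2 + (y - 1)^2.
def objective(v):
    x, y = v
    return (x - 3) ** 2 + (y - 1) ** 2

# One constraint, x + y <= 2, written as g(v) >= 0 per scipy's convention.
constraints = [{"type": "ineq", "fun": lambda v: 2.0 - (v[0] + v[1])}]

# Decision variables: scipy infers them from the shape of the initial guess.
result = minimize(objective, x0=[0.0, 0.0], constraints=constraints)
print(result.x)  # approximately [2. 0.]
```

You hand the solver the pieces, it churns, and out comes an answer; nothing in the process requires the program to model itself.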
It feels much more likely to me that the sense of personal identity is an internalized representation of our reputation, and of where we would like to push our reputation. A sophisticated consequence-prediction or probability estimation system would be of use to a solitary hunter, but it’s not clear to me that a sense of subjective experience / personal identity / etc. would be nearly as useful for a solitary hunter as for a social animal.
> My claim is that it doesn’t seem complete to argue “we need a sense of identity to run long-run optimization problems well.” I run optimization programs without a sense of identity just fine: you tell them the objective function, you tell them the decision variables, you tell them the constraints, and then they process until they’ve got an answer. It doesn’t seem to me like you’re claiming the ‘sense of personal identity’ boils down to ‘the set of decision variables and the objective function,’ but I think that’s only as far as your argument goes.
Hmm, looks like I expressed myself badly, as several people seem to have this confusion. I wasn’t saying that long-term optimization problems in general would require a sense of identity, just that the specific optimization program that’s implemented in our current mental architecture seems to require it.
(Yes, a utilitarian could in principle decide that they want to minimize the amount of suffering in the world and then do a calculation about how best to achieve that, one which didn’t refer to a sense of identity at all… but they’ll have a hard time getting themselves to actually take action based on that calculation, unless they can somehow also motivate their more emotional predictive systems—which are based on a sense of personal identity—to also be interested in pursuing those goals.)