for evolution, what really matters in the end is whether you’ll personally expect to experience living on until you have a good mate and lots of surviving children.
Huh? For evolution, what really matters is whether you actually have lots of surviving grandchildren, and so it’s okay if you get surprised, so long as that surprise increases the expected number of grandchildren you have.
I like the point that an enduring personal identity is a useful model that we should expect to be mostly hardwired. When I introspect about my personal identity, most of the results that come back are relational / social, not predictive, and so it seems odd to me that there’s not much discussion of social pressures on having an identity model.
The example you gave, of choosing to study or play games, doesn’t strike me as an anthropic question, but rather an aspirational question. Who are you willing to become? An anthropic version of that question would be more like:
We’re going to completely copy you, have one version study and the other version play video games, and then completely merge you and your copy, so it’s as if you did both (you’ll remember doing both, have the positive and negative effects of both, etc.). The current plan is that you’ll study while the copy plays video games; how much would you pay so that you, the original, plays video games while the copy studies?
(To separate out money from the problem, the payment could be in terms of less time spent gaming, or something similar.)
Huh? For evolution, what really matters is whether you actually have lots of surviving grandchildren
Yeah, as well as whether your siblings have surviving grandchildren, not to mention great-grandchildren—I was trying to be concise. I could’ve just said “inclusive fitness”, but I was trying to avoid jargon—though given the number of computer science analogies in the post, that wasn’t exactly successful.
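(For the unfamiliar: the standard formalization here is Hamilton's rule, which says that a trait for helping relatives is favored by selection when rb > c, where r is the genetic relatedness between actor and recipient, b the reproductive benefit to the recipient, and c the reproductive cost to the actor.)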
The example you gave, of choosing to study or play games, doesn’t strike me as an anthropic question, but rather an aspirational question.
Right, that was intentional. I was making the argument that a sense of personal identity is necessary for a large part of our normal everyday decision-making, and anthropic questions feel so weird exactly because they’re so different from the normal decision-making that we’re used to.
Yeah, as well as whether your siblings have surviving grandchildren, not to mention great-grandchildren—I was trying to be concise. I could’ve just said “inclusive fitness”, but I was trying to avoid jargon—though given the number of computer science analogies in the post, that wasn’t exactly successful.
This isn’t an issue of concision; it’s an issue of whether what matters for evolution is internal or external. The answer to “why don’t most people put themselves in delusion boxes?” appears to be “those that don’t are probably hardwired to not want to, because evolutionary selection acts on the algorithm that generates that decision.” That’s an immensely important point for self-modifying AI design, which would like the drive for realism to have an internal justification and representation.
[edit] To be clearer, it looked to me like that comment, as written, was confusing means and ends. Inclusive fitness is what really matters; when enduring personal identity aids inclusive fitness, we should expect it to be encouraged by evolution, and when enduring personal identity impairs inclusive fitness, we should expect it not to be encouraged by evolution.
I was making the argument that a sense of personal identity is necessary for a large part of our normal everyday decision-making, and anthropic questions feel so weird exactly because they’re so different from the normal decision-making that we’re used to.
I agree that anthropic questions feel weird, and if we had commonly experienced them, we would have adapted to them so that they wouldn’t feel weird from the inside.
My claim is that it doesn’t seem complete to argue “we need a sense of identity to run long-run optimization problems well.” I run optimization programs without a sense of identity just fine: you tell them the objective function, the decision variables, and the constraints, and they process until they’ve got an answer. It doesn’t seem to me like you’re claiming the ‘sense of personal identity’ boils down to ‘the set of decision variables and the objective function,’ but I think that’s as far as your argument goes.
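(As a concrete illustration of the kind of program I mean, here’s a minimal sketch in Python with scipy; the toy problem itself is invented for the example:)

```python
# Objective, decision variables, constraints: hand them to the solver
# and it processes until it has an answer, no sense of identity involved.
from scipy.optimize import minimize

# Objective function: some quantity to minimize (a made-up toy here).
def objective(x):
    return (x[0] - 3) ** 2 + (x[1] + 1) ** 2

# Constraint: the decision variables must sum to at most 5
# (scipy's "ineq" convention requires fun(x) >= 0).
constraints = [{"type": "ineq", "fun": lambda x: 5 - (x[0] + x[1])}]

# Initial guess for the decision variables.
x0 = [0.0, 0.0]

result = minimize(objective, x0, constraints=constraints)
print(result.x)
```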
It feels much more likely to me that the sense of personal identity is an internalized representation of our reputation, and of where we would like to push our reputation. A sophisticated consequence-prediction or probability-estimation system would be of use to a solitary hunter, but it’s not clear to me that a sense of subjective experience / personal identity / etc. would be nearly as useful for a solitary hunter as for a social animal.
My claim is that it doesn’t seem complete to argue “we need a sense of identity to run long-run optimization problems well.” I run optimization programs without a sense of identity just fine: you tell them the objective function, the decision variables, and the constraints, and they process until they’ve got an answer. It doesn’t seem to me like you’re claiming the ‘sense of personal identity’ boils down to ‘the set of decision variables and the objective function,’ but I think that’s as far as your argument goes.
Hmm, looks like I expressed myself badly, as several people seem to have this confusion. I wasn’t saying that long-term optimization problems in general would require a sense of identity, just that the specific optimization program that’s implemented in our current mental architecture seems to require it.
(Yes, a utilitarian could in principle decide that they want to minimize the amount of suffering in the world and then do a calculation about how to best achieve that which didn’t refer to a sense of identity at all… but they’ll have a hard time getting themselves to actually take action based on that calculation, unless they can somehow also motivate their more emotional predictive systems—which are based on a sense of personal identity—to also be interested in pursuing those goals.)