I actually don’t think any of those things are problematic. A reductionist view of personal identity mainly feels like an ontology shift—you need to redefine all the terms in your utility function (or other decision-making system), but most outcomes will actually be the same (with the advantage that some decisions that were previously confusing should now be clear). Specifically:
Does the reductionist view of personal identity affect how we should ethically evaluate death?
I don’t think so! You can redefine death as a particular (optionally animal-shaped) optimization process ceasing operation, which is not reliant on personal identity. (Throw in a more explicit reference to lack of continuity if you care about physical continuity.) The only side effect of the reductionist view, I feel, is that it makes our preferences feel more arbitrary, but I think that’s something you have to accept either way in the end.
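To make the “ontology shift” point a bit more concrete, here is a minimal toy sketch (entirely my own illustration, with made-up names and numbers, not anything from the post): the same preference is written once in person-language and once in process-language, and the ranking over outcomes comes out identical.

```python
# Toy sketch of the "ontology shift" claim: relabelling the terms of a utility
# function from person-language to process-language leaves the ranking of
# outcomes, and hence the decisions, unchanged. All names and numbers are invented.

# Outcomes described at the lower level: which processes are still running,
# plus a crude score for how well the rest of the world is going.
outcomes = {
    "keep_operating": {"running_processes": {"alice_process"}, "world_score": 0.8},
    "cease":          {"running_processes": set(),             "world_score": 0.5},
}

def u_person_language(outcome):
    # Old ontology: "Alice survives" treated as a primitive fact.
    alice_survives = "alice_process" in outcome["running_processes"]
    return (10.0 if alice_survives else 0.0) + outcome["world_score"]

def u_process_language(outcome):
    # New ontology: the same fact, read as "the Alice-shaped optimization process
    # keeps operating", with no appeal to a fundamental personal identity.
    process_keeps_operating = "alice_process" in outcome["running_processes"]
    return (10.0 if process_keeps_operating else 0.0) + outcome["world_score"]

rank_old = sorted(outcomes, key=lambda name: u_person_language(outcomes[name]), reverse=True)
rank_new = sorted(outcomes, key=lambda name: u_process_language(outcomes[name]), reverse=True)
assert rank_old == rank_new  # the relabelling changes nothing about which outcome is preferred
```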
For instance, if continuing to exist is like bringing new conscious selves into existence (by omission, i.e. by not killing oneself), and if we consider continued existence ethically valuable, wouldn’t this imply classical total utilitarianism, the view that we try to fill the universe with happy moments?
Not really. You can focus your utility function on one particular optimization process and its potential future execution, which may be appropriate given that the utility function defines the preference over outcomes of that optimization process.
Also, the idea of “living as long as possible” appears odd under this view, like an arbitrary grouping of certain future conscious moments one just happens to care about (for evolutionary reasons having nothing to do with “making the world a better place”).
This is true enough. If you have strong preferences for the world outside of yourself (general “you”), you can argue that continuing the operation of the optimization process with these preferences increases the probability of the world more closely matching these preferences. If you care mostly about yourself, you have to bite the bullet and admit that that’s very arbitrary. But since preferences are generally arbitrary, I don’t see this as a problem.
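A tiny made-up expected-value comparison (all numbers invented, purely to show the shape of this instrumental argument): keeping the preference-bearing process running raises the expected degree to which the world matches those preferences, with no appeal to a fundamental self.

```python
# Toy expected-value comparison; every number here is invented for illustration.
p_match_if_process_continues = 0.6   # hypothetical chance the world ends up matching the preferences
p_match_if_process_ceases = 0.3      # hypothetical chance without the process around to push for it
value_of_match = 100.0               # arbitrary units of utility

ev_continue = p_match_if_process_continues * value_of_match
ev_cease = p_match_if_process_ceases * value_of_match

print(ev_continue, ev_cease)  # 60.0 vs. 30.0: continued operation is instrumentally preferred
```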
Finally, in the comments someone remarked that he still has an aversion to creating repetitive conscious moments, but wouldn’t the reductionist view on personal identity also undermine that? For *whom* would repetition be a problem?
This basically comes down to the fact that believing there’s no continuity of personal identity doesn’t force you to go catatonic (or epileptic). You can still have preferences over what to do, because why not? The optimization process that is your body and brain continues to obey the laws of physics and optimize, even though the concept of “personal identity” doesn’t mean much. (I’m really having a lot of trouble writing the preceding sentence in a clear and persuasive way, although I don’t think that means it’s incorrect.)
And in case someone thinks that I over-rely on the term “optimization process” and that the comment would collapse if it were tabooed, I’m pretty sure that’s not the case! The notion emerges naturally as a pattern that allows more efficient modelling of the world (e.g. it’s easier to consider a human’s actions than the interactions of all the particles that make up a human), and the comment should be robust to a reformulation along those lines.
I strongly second this comment. I have been utterly horrified the few times in my life when I have come across arguments along the lines of “personal identity isn’t a coherent concept, so there’s no reason to care about individual people.” You are absolutely right that it is easy to steel-man the concept of personal identity so that it is perfectly coherent, and that rejecting personal identity is not a valid argument for total utilitarianism (or any ethical system, really).
In my opinion the OP is a good piece of scientific analysis. But I don’t believe it has any major moral implications, except maybe “don’t angst about the Ship of Theseus problem.” The concept of personal identity (after it has been sufficiently steel-manned) is one of the wonderful gifts we give to tomorrow, and any ethical system that rejects it has lost its way.
Not really. You can focus your utility function on one particular optimization process and its potential future execution, which may be appropriate given that the utility function defines the preference over outcomes of that optimization process.
Well, you could focus your utility function on anything you like anyway; the question is why, under utilitarianism, it would be justified to value this particular optimization process. If personal identity were fundamental, then you’d have no choice: conscious existence would be tied to some particular identity. But if it’s not fundamental, then why prefer this particular grouping of conscious-experience-moments rather than any other? If I have the choice, I might as well choose some other set of these moments, because as you said, “why not”?
I wrote an answer, but upon rereading, I’m not sure it’s answering your particular doubts. It might though, so here:
Well, if we’re talking about utilitarianism specifically, there are two sides to the answer. First, you favour the optimization-that-is-you more than others because you know for sure that it implements utilitarianism and others don’t (thus having it around longer makes utilitarianism more likely to come to fruition). Basically the reason why Harry decides not to sacrifice himself in HPMoR. And second, you’re right, there may well be a point where you should just sacrifice yourself for the greater good if you’re a utilitarian, although that doesn’t really have much to do with dissolution of personal identity.
But I think a better answer might be that:
If I have the choice, I might as well choose some other set of these moments, because as you said, “why not”?
You do not, in fact, have the choice. Or maybe you do, but it’s not meaningfully different from deciding to care about some other person (or group of people) to the exclusion of yourself if you believe in personal identity, and there is no additional motivation for doing so. If you mean something similar to Eliezer writing “how do I know I won’t be Britney +5 five seconds from now” in the original post, that question actually relies on a concept of personal identity and is undefined without it. There’s not really a classical “you” that’s “you” right now, and five seconds from now there will still be no “you” (although obviously there’s still a bunch of molecules following some patterns, and we can assume they’ll keep following similar patterns in five seconds; there’s just no sense in which they could become Britney).
Or maybe you do, but it’s not meaningfully different from deciding to care about some other person (or group of people) to the exclusion of yourself if you believe in personal identity
I think the point is actually similar to this discussion, which also somewhat confuses me.