Not sure yet. I have a fledgling ethics of rights kicking around in the back of my head that I might do something with. Alternatively, I could start making noise about my wacky opinions on personal identity and be a metaphysician. I also like epistemology, and I find philosophy of religion entertaining (although I wouldn’t want to devote much of my time to it). I’m pretty sure I don’t want to do philosophy of math, hardcore logic, or aesthetics.
I hope we get to hear your wacky opinions on personal identity some time, I think my senior thesis will be on that subject.
I think I have to at least graduate before anyone besides me is allowed to write a thesis on my wacky opinions on personal identity ;)
In a nutshell, I think persons just are continuous self-aware experiences, and that it’s possible for two objects to be numerically distinct and personally identical. For instance (assuming I’m not a brain in a vat myself), I could be personally identical to a brain in a vat while being numerically distinct. The upshot of being personally identical to someone is that you are indifferent between “yourself” and the “other person”. For instance, suppose Omega turned up, told me I had an identical psychological history with “someone else” (I use terms like that of grammatical necessity), that one of us was a brain in a vat and one of us was as she perceived herself to be, and that Omega felt like obliterating one of us. “We” would “both” prefer that the brain-in-a-vat version be the one obliterated, because we’re indifferent between the two as persons and just have a general preference that (ceteris paribus) non-brains-in-vats are better.
Persons can share personal parts in the same way that objects can share physical parts. We should care about our “future selves” because they will include the vast majority of our personal parts (minus forgotten tidbits, and diluted over time by new experiences), and we should respect (to a reasonable extent) the wishes of our (relatively recent) past selves because we consist mostly of those past selves. If we fall into a philosophy example and undergo fission or fusion: fission yields two people who diverge immediately but share a giant personal part, while fusion yields one person who shares a giant personal part with each of the two people fused.
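A toy way to picture the parthood structure being described here (a sketch only: it assumes we can model a person-stage as a finite set of “personal parts”, which the comment leaves informal, and every label below is made up):

```python
# Toy model: a person-stage as a set of labeled "personal parts" (all labels hypothetical).
# This only illustrates the overlap structure described above, not an actual theory.

past_self = {"memory_childhood", "memory_school", "habit_tea", "perspective_2009"}

# A later self keeps most parts, forgets a tidbit, and adds new experiences.
future_self = (past_self - {"habit_tea"}) | {"memory_new_job", "perspective_2010"}

# Fission: two successors diverge immediately but share a giant personal part.
fission_a = future_self | {"experience_left_branch"}
fission_b = future_self | {"experience_right_branch"}

# Fusion: one successor shares a giant personal part with each of the people fused.
other_person = {"memory_other_life", "habit_running", "perspective_other"}
fused = future_self | other_person

def overlap(x, y):
    """Fraction of x's personal parts that y also contains."""
    return len(x & y) / len(x)

print(overlap(past_self, future_self))   # 0.75: why we care about future selves
print(overlap(fission_a, fission_b))     # ~0.83: the shared giant personal part
print(overlap(future_self, fused))       # 1.0: all of this self's parts are in the fused person
print(overlap(other_person, fused))      # 1.0: likewise for the other person fused
```

On this picture, “caring about a future self” tracks how much of the current set survives into it, which is all the sketch is meant to show.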
I’ve found this position highly intuitive since it first occurred to me or was presented to me (I don’t recall which; probably the latter, from Egan).
One seemingly under-appreciated (disclaimer: haven’t studied much philosophy) corollary of it is that if you value higher quantities of “personality-substance”, you should seek (possibly random) divergence as soon as you recognize too much of yourself in others.
Not really. Outside of philosophy examples and my past and future selves, I don’t actually share any personal parts with anyone; the personal parts are continuity of perspective, not abstract personality traits. I can be very much like someone and still share no personal parts with him or her. Besides, that’s if I value personal uniqueness. Frankly, I’d be thrilled to discover that there are several of me. After all, Omega might take it into his head to obliterate one, and there ought to be backups.
The term “continuity of perspective” doesn’t reduce much beyond “identity” for me in this context. How similar can you be without sharing personal parts? If the difference is at all determined by differences in external inputs, how can you be sure that your inputs are effectively all that different?
I think the point about backups addresses a slightly different concern. Suppose that some component of your decision-making or other subjective experience is decided by a pseudo-random number generator. It contains no interesting structure or information other than the seed it was given. If you were to create a running (as opposed to static, frozen) copy of yourself, would you prefer to keep the current seed active for both of you, or introduce a divergence by choosing a new seed for one or the other? It seems that you would create the “same amount” of personal backup either way.
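A minimal sketch of the seed question, using an ordinary PRNG to stand in for the hypothetical random component (nothing here is specific to the thought experiment beyond that):

```python
import random

# Two running copies sharing the same seed: their "random" components stay in lockstep.
copy_a = random.Random(42)
copy_b = random.Random(42)
print([copy_a.random() for _ in range(3)] == [copy_b.random() for _ in range(3)])  # True

# Give one copy a fresh seed: the streams diverge from that point on, even though
# a seed carries no interesting structure or information beyond the number itself.
copy_a = random.Random(42)
copy_b = random.Random(43)
print([copy_a.random() for _ in range(3)] == [copy_b.random() for _ in range(3)])  # False
```

Either way the copy carries the full shared history, so the “amount” of backup looks the same; the seed choice only decides whether the two continue in lockstep afterwards.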
I think you’re on the right track. There’ll be a lot of personal-identity assumptions re-evaluated over the next generation as we see more interpenetration of personal parts and start to offload cognitive capacity to shared resources on the internet.
Semi-related: I did my philosophy master’s sub-thesis [15 years ago; not all opinions expressed therein are ones I would necessarily agree with now] on personal identity and the many-worlds interpretation of quantum physics. Summary: personal identity is spread/shared along all indistinguishable multiversal branches; indeterminacy is a feature of not knowing which branch you’re on. Personal identity across possible worlds may be non-commutative: A=B, B=C, but A≠C.
Technically, that’s non-transitive—non-commutative would be A=B but B≠A.
(Also, it is mildly confusing to use an equality symbol to indicate a relationship which is not a mathematical equality relationship—i.e. reflexive, commutative, and transitive.)
(Also, a Sorites-paradox argument would suggest that identity is a matter of degree.)
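A toy illustration of how a relation written with “=” can fail to be transitive, assuming (as the Sorites remark suggests) that “identity” here behaves like similarity above a threshold; this is just one possible reading, not the original commenter’s account:

```python
# Hypothetical: treat "same person" as overlap of personal parts above a threshold.
# Such a tolerance relation is reflexive and symmetric, but not transitive.

def similarity(x, y):
    """Jaccard overlap between two sets of personal parts (a toy degree of identity)."""
    return len(x & y) / len(x | y)

def same_person(x, y, threshold=0.6):
    return similarity(x, y) >= threshold

a = {1, 2, 3, 4, 5}
b = {1, 2, 3, 4, 6}   # differs from a in one part
c = {1, 2, 3, 6, 7}   # differs from b in one part, from a in two

print(same_person(a, b))  # True:  A = B  (similarity 4/6)
print(same_person(b, c))  # True:  B = C  (similarity 4/6)
print(same_person(a, c))  # False: A != C (similarity 3/7), so transitivity fails
```

The graded similarity score is also where the “matter of degree” point shows up: only the choice of threshold turns it into a yes/no verdict.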
I think I understand (and agree with) the other parts, but how is this possible?
See, now I’m going to block quote this :-P