Why? If “I” is an arbitrary definition, then “When I step through this doorway, will I have another experience?” depends on this arbitrary definition and so is also arbitrary.
Which things count as “I” isn’t an arbitrary definition; it’s just a fuzzy natural-language concept.
(I guess you can call that “arbitrary” if you want, but then all the other words in the sentence, like “doorway” and “step”, are also “arbitrary”.)
Analogy: When you’re writing in your personal diary, you’re free to define “table” however you want. But in ordinary English-language discourse, if you call all penguins “tables” you’ll just be wrong. And this fact isn’t changed at all by the fact that “table” lacks a perfectly formal physics-level definition.
The same holds for “Will Rob Bensinger’s next experience be of sitting in his bedroom writing a LessWrong comment, or will it be of him grabbing some tomatoes in a supermarket in Beijing?”
Terms like ‘Rob Bensinger’ and ‘I’ aren’t perfectly physically crisp — there may be cases where the answer is “ehh, maybe?” rather than a clear yes or no. And if we live in a Big Universe and we allow that there can be many Beijings out there in space, then we’ll have to give a more nuanced quantitative answer, like “a lot more of Rob’s immediate futures are in his bedroom than in Beijing”.
But if we restrict our attention to this Beijing, then all that complexity goes away and we can pretty much rule out that anyone in Beijing will happen to momentarily exhibit exactly the right brain state to look like “Rob Bensinger plus one time step”.
The nuances and wrinkles don’t bleed over and make it a totally meaningless or arbitrary question; and indeed, if I thought I were likely to spontaneously teleport to Beijing in the next minute, I’d rightly be making very different life-choices! “Will I experience myself spontaneously teleporting to Beijing in the next second?” is a substantive (and easy) question, not a deep philosophical riddle.
So you always anticipate all possible experiences, because of the multiverse?
Not all possible experiences; just all experiences of brains that have the same kinds of structural similarities to your current brain as, e.g., “me after I step through a doorway” has to “me before I stepped through the doorway”.
Analogy: When you’re writing in your personal diary, you’re free to define “table” however you want. But in ordinary English-language discourse, if you call all penguins “tables” you’ll just be wrong. And this fact isn’t changed at all by the fact that “table” lacks a perfectly formal physics-level definition.
You’re also free to define “I” however you want in your values. You’re only wrong if your definitions imply something false about physical reality. But defining “I” and “experiences” in such a way that you will not experience anything after teleportation is possible without implying anything physically wrong.
You can be wrong about the physical reality of teleportation. But even after you’ve figured out that there is no additional physical process going on that kills your soul, apart from the change of location, you can still move from “my soul crashes against an asteroid” to “soul-death, in my values, means a sudden change in location” rather than to “my soul remains alive”.
It’s not that I expect you specifically to mean “disliking teleportation is necessarily irrational”. It’s just that saying there should be an actual answer to questions about “I” and “experiences” makes people moral realists.
You’re also free to define “I” however you want in your values.
Sort of!
It’s true that no law of nature will stop you from using “I” in a nonstandard way; your head will not explode if you redefine “table” to mean “penguin”.
And it’s true that there are possible minds in abstract mindspace that have all sorts of values, including strict preferences about whether they want their brain to be made of silicon vs. carbon.
But it’s not true that humans alive today have full and complete control over their own preferences.
And it’s not true that humans can never be mistaken in their beliefs about their own preferences.
In the case of teleportation, I think teleportation-phobic people are mostly making an implicit error of the form “mistakenly modeling situations as though you are a Cartesian Ghost who is observing experiences from outside the universe”, not making a mistake about what their preferences are per se. (Though once you realize that you’re not a Cartesian Ghost, that will have some implications for what experiences you expect to see next in some cases, and implications for what physical world-states you prefer relative to other world-states.)
In the case of teleportation, I think teleportation-phobic people are mostly making an implicit error of the form “mistakenly modeling situations as though you are a Cartesian Ghost who is observing experiences from outside the universe”, not making a mistake about what their preferences are per se.
Why not both? I can imagine that someone would be persuaded to accept teleportation/uploading if they stopped believing in a physical Cartesian Ghost. But it’s possible that if you remind them that continuity of experience, like “table”, is just a description of a physical situation and not a divinely blessed, necessary value, that would be enough to tip the balance toward them valuing carbon or whatever. It’s bad to be wrong about Cartesian Ghosts, but it’s also bad to think that you don’t have a choice about how you value experience.
The problem was that you first seemed to belittle questions about word meanings (“self”) as being “just” about “definitions” that are “purely verbal”. Luckily now you concede that the question about the meaning of “I” isn’t just about (arbitrary) “definitions”, which makes calling it a “purely verbal” (read: arbitrary) question inappropriate. Now of course the meaning of “self” is no more arbitrary than the meaning of “I”; indeed, those terms are clearly meant to refer to the same thing (like “me” or “myself”).
The wider point is that the following does not seem true:
But this post hasn’t been talking about word definitions. It’s been talking about substantive predictive questions like “What’s the very next thing I’m going to see? The other side of the teleporter? Or nothing at all?”
When we evaluate statements or questions of any kind, including the one above, we need to know two things: 1) their meaning, in particular the meaning of the terms involved, and 2) what the empirical facts are. But we already know all the empirical facts: Someone goes into the teleporter, a bit later someone comes out at the other end and sees something. So the issue can only be about the semantic interpretation of that question, about what we mean by expressions like “I will see x”. Do we mean “A future person that is psychologically continuous with current-me sees x”? That’s not an empirical question, it’s a semantic one, but it’s not in any way arbitrary, as expressions like “just about definitions” or “purely verbal” would suggest. Conceptual analysis is neither arbitrary nor trivial.
The problem was that you first seemed to belittle questions about word meanings (“self”) as being “just” about “definitions” that are “purely verbal”.
I did no such thing!
Luckily now you concede that the question about the meaning of “I” isn’t just about (arbitrary) “definitions”
Read the blog post at the top of this page! It’s my attempt to answer the question of when a mind is “me”, and you’ll notice it’s not talking about definitions.
But we already know all the empirical facts: Someone goes into the teleporter, a bit later someone comes out at the other end and sees something. So the issue can only be about the semantic interpretation of that question, about what we mean by expressions like “I will see x”.
Nope!
There are two perspectives here:
“I don’t want to upload myself, because I wouldn’t get to experience that upload’s experiences. When I die, this stream of consciousness will end, rather than continuing in another body. Physically dying and then being copied elsewhere is not phenomenologically indistinguishable from stepping through a doorway.”
“I do want to upload myself, because I would get to experience that upload’s experiences. Physically dying and then being copied elsewhere is phenomenologically indistinguishable from stepping through a doorway.”
The disagreement between these two perspectives isn’t about word definitions at all; a fear that “when my body dies, there will be nothing but oblivion” is a very real fear about anticipated experiences (and anticipated absences of experience), not a verbal quibble about how we ought to define a specific word.
But it’s also a bit confusing to call the disagreement between these two perspectives “empirical”, because “empirical” here is conflating “third-person empirical” with “first-person empirical”.
The disagreement here is about whether a stream of consciousness can “continue” across temporal and spatial gaps, in the same way that it continues when there are no obvious gaps. It’s about whether there’s a subjective, experiential difference between stepping through a doorway and using a teleporter.
The thing I’m arguing in the OP is that there can’t be an experiential difference here, because there’s no physical difference that could be underlying the supposed experiential difference. So the disagreement about the first-person facts, I claim, stems from a cognitive error, which I characterize as “making predictions as though you believed yourself to be a Cartesian Ghost (even if you don’t on-reflection endorse the claim that Cartesian Ghosts exist)”. This is, again, a very different error from “defining a word in a nonstandard way”.
The thing I’m arguing in the OP is that there can’t be an experiential difference here, because there’s no physical difference that could be underlying the supposed experiential difference.
Is there even anybody claiming there is an experiential difference? It seems you may be attacking a strawman.
So the disagreement about the first-person facts, I claim, stems from a cognitive error
The alternative to this is that there is a disagreement about the appropriate semantic interpretation/analysis of the question. E.g. about what we mean when we say “I will (not) experience such and such”. That seems more charitable than hypothesizing beliefs in “ghosts” or “magic”.
Is there even anybody claiming there is an experiential difference?
Yep! Ask someone with this view whether the current stream of consciousness continues from their pre-uploaded self to their post-uploaded self, like it continues when they pass through a doorway. The typical claim is some version of “this stream of consciousness will end, what comes next is only oblivion”, not “oh sure, the stream of consciousness is going to continue in the same way it always does, but I prefer not to use the English word ‘me’ to refer to the later parts of that stream of consciousness”.
This is why the disagreement here has policy implications: people with different views of personal identity have different beliefs about the desirability of mind uploading. They aren’t just disagreeing about how to use words, and if they were, you’d be forced into the equally “uncharitable” perspective that someone here is very confused about how relevant word choice is to the desirability of uploading.
The alternative to this is that there is a disagreement about the appropriate semantic interpretation/analysis of the question. E.g. about what we mean when we say “I will (not) experience such and such”. That seems more charitable than hypothesizing beliefs in “ghosts” or “magic”.
I didn’t say that the relevant people endorse a belief in ghosts or magic. (Some may do so, but many explicitly don’t!)
It’s a bit darkly funny that you’ve reached for a clearly false and super-uncharitable interpretation of what I said, in the same sentence you’re chastising me for being uncharitable! But also, “charity” is a bad approach to trying to understand other people, and bad epistemology can get in the way of a lot of stuff.
As a test, I asked a non-philosopher friend of mine what their view is. Here’s a transcript of our short conversation: https://docs.google.com/document/d/1s1HOhrWrcYQ5S187vmpfzZcBfolYFIbeTYgqeebNIA0/edit
I was a bit annoyingly repetitive with trying to confirm and re-confirm what their view is, but I think it’s clear from the exchange that my interpretation is correct at least for this person.
Is there even anybody claiming there is an experiential difference?
Yep! Ask someone with this view whether the current stream of consciousness continues from their pre-uploaded self to their post-uploaded self, like it continues when they pass through a doorway. The typical claim is some version of “this stream of consciousness will end, what comes next is only oblivion”, not “oh sure, the stream of consciousness is going to continue in the same way it always does, but I prefer not to use the English word ‘me’ to refer to the later parts of that stream of consciousness”.
This doesn’t show they believe there is a difference in experience. It can be simply a different analysis of the meaning of “the current stream of consciousness continuing”. That’s a semantic difference, not an empirical one.