So I think I figured this whole thing out. Are people familiar with the type-token distinction and the ambiguities that result from it? If I have five copies of the book Catcher in the Rye and you ask me how many books I have, there is an ambiguity: I could say one or five. One refers to the type; “Catcher in the Rye is a coming-of-age novel” is a sentence about the type. Five refers to the number of tokens; “I tossed Catcher in the Rye onto the bookshelf” is a sentence about a token. The distinction is ubiquitous and leads to occasional confusion, enough that the subject is at the top of my Less Wrong to-do list. The type-token distinction becomes an issue whenever we introduce identical copies, and it dominates my views on personal identity.
In the Sleeping Beauty case, the amnesia means the experience of waking up on Monday and the experience of waking up on Tuesday, while token-distinct, are type-identical. If we decide the right thing to update on isn’t the token experience but the type experience, the calculations are really easy: the type experience “waking up” has P=1 under both heads and tails, so the prior never changes. I think there are some really good reasons for worrying about types rather than tokens in this context, but I won’t go into them until I make sure the above makes sense to someone.
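A minimal simulation sketch of the two counting rules, assuming the usual setup (one awakening on heads, two on tails); the code and variable names are illustrative, not from the thread:

```python
import random

random.seed(0)
N = 100_000

heads_awakenings = 0
total_awakenings = 0
heads_runs_with_waking = 0

for _ in range(N):
    heads = random.random() < 0.5
    wakings = 1 if heads else 2  # heads: Monday only; tails: Monday and Tuesday
    total_awakenings += wakings
    if heads:
        heads_awakenings += wakings
        heads_runs_with_waking += 1

# Counting tokens: among individual awakenings, heads shows up about 1/3 of the time.
p_token = heads_awakenings / total_awakenings
# Counting types: the type experience "waking up" occurs in every run,
# so conditioning on it leaves the prior untouched at about 1/2.
p_type = heads_runs_with_waking / N
print(p_token, p_type)
```

The same event stream supports both numbers; which one answers the question is exactly what the thread disputes.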
How are you accounting for the fact that—on awakening—Beauty has lost information that she previously had—namely, that she no longer knows which day of the week it is?
Maybe it’s just because I haven’t thought about this in a couple of weeks, but you’re going to have to clarify this. When does Beauty know which day of the week it is?
Before consuming the memory-loss drugs she knows her own temporal history. After consuming the drugs, she doesn’t. She is more uncertain—because her memory has been meddled with, and important information has been deleted from it.
Information wasn’t deleted; conditions changed and she didn’t receive enough information about the change. There is a type (with a single token) that is Beauty before the experiment, and that type includes the property ‘knows what day of the week it is’. Then the experiment begins and the day changes. During the experiment there is another type which is also Beauty; this type has two tokens, and it only has enough information to narrow the date down to one of two days. But she still knows what day of the week it was when the experiment began. It’s just your usual indexical shift: instead of knowing the date now she knows the date then, but it is the same thing.
Her memories were DELETED. That’s the whole point of the amnesia-inducing drug.
Amnesia = memory LOSS: http://dictionary.reference.com/browse/Amnesia
Oh sure, the information contained in the memory of waking up is lost (though that information didn’t contain what day of the week it was, and you said “namely that she no longer knows which day of the week it is”). I still have zero idea of what you’re trying to ask me.
If she had not ever been given the drug she would be likely to know which day of the week it was. She would know how many times she had been woken up, interviewed, etc. It is because all such information has been chemically deleted from her mind that she has the increased uncertainty that she does.
I might have some issues with that characterization but they aren’t worth going into since I still don’t know what this has to do with my discussion of the type-token ambiguity.
It is what was missing from this analysis:
“The type experience “waking up” has P=1 for heads and tails. So the prior never changes.”
Your priors are a function of your existing knowledge. If that knowledge is deleted, your priors may change.
K.
Yes, counterfactually, if she hadn’t been given the drug on the second awakening she would have knowledge of the day. But she was given the drug. That meant losing the memory of the first awakening and whatever information it carried. But it doesn’t mean a loss of the knowledge of what day it is; she obviously never had that. It is because all her new experiences keep getting deleted that she is incapable of updating her priors (which were set prior to the beginning of the experiment). In type-theoretic terms:
If the drugs had not been administered she would not have had type experience “waking up” a second time. She would have had type experience “waking up with the memory of waking up yesterday”. If she had had that type experience then she would know what day it is.
Beauty probably knew what day it was before the experiment started. People often do know what day of the week it is.
You don’t seem to respond to my: “Your priors are a function of your existing knowledge. If that knowledge is deleted, your priors may change.”
In this case, that is exactly what happens. Had Beauty not been given the drug, her estimates of p(heads) would be: 0.5 on Monday and 0.0 on Tuesday. Since her knowledge of what day it is has been eliminated by a memory-erasing drug, her probability estimate is intermediate between those figures—reflecting her new uncertainty in the face of the chemical deletion of relevant evidence.
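One way to make “intermediate” precise is the thirder bookkeeping, which weights each possible awakening (each token) equally. A sketch, with the weighting scheme being my framing rather than anything stated in the thread:

```python
from fractions import Fraction

# Thirder-style bookkeeping: weight each possible awakening (token) equally.
p_heads_prior = Fraction(1, 2)
awakenings_if_heads = 1  # Monday only
awakenings_if_tails = 2  # Monday and Tuesday

w_heads = p_heads_prior * awakenings_if_heads
w_tails = (1 - p_heads_prior) * awakenings_if_tails
p_heads_given_awake = w_heads / (w_heads + w_tails)
print(p_heads_given_awake)  # 1/3, strictly between the 0.0 and 0.5 figures
```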
Yes. And throughout the experiment she knows what day it was before the experiment started. What she doesn’t know is the new day. This is the second or third time I’ve said this. What don’t you understand about an indexical shift?
The knowledge that Beauty has before the experiment is not deleted. Beauty has a single anticipated experience going into the experiment. That anticipated experience occurs. There is no new information to update on.
You don’t seem to be following what I’m saying at all.
What you said was: “it doesn’t mean a loss of the knowledge of what day it is, she obviously never had that”. Except that she did have that—before the experiment started. Maybe you meant something different—but what readers have to go on is what you say.
Beauty’s memories are deleted. The opinions of an agent can change if they gain information—or if they lose information. Beauty loses information about whether or not she has had a previous awakening and interrogation. She knew that at the start of the experiment, but not during it—so she has lost information that she previously had—it has been deleted by the amnesia-inducing drug. That’s relevant information—and it explains why her priors change.
I’m going to try this one more time.
On Sunday, before the experiment begins, Beauty makes observation O1(a). She knows that O1 was made on a Sunday. She says to herself “I know what day it is now” (an indexical statement pointing to O1(a)). She also predicts the coin will land heads with P=0.5 and predicts that the next experience she has after going to sleep will be O2.

Then she wakes up and makes observation O2(a). It is Monday, but she doesn’t know this because it could just as easily be Tuesday, since her memory of waking up on Monday will be erased. “I know what day it is now” is now false, not because knowledge was deleted but because of the indexical shift of ‘now’, which no longer refers to O1(a) but to O2(a). She still knows what day it was at O1(a); that knowledge has not been lost. Then she goes back to sleep and her memory of O2(a) is erased. But O2(a) includes no knowledge of what day it is (though combined with other information Beauty could have inferred what day it was, she never had that information).

Beauty wakes up on Tuesday and has observation O2(b). This observation is type-identical to O2(a) and exactly what she anticipated experiencing. If her memory had not been erased she would have had observation O3: waking up along with the memory of having woken up the previous day. That would not have been an experience Beauty predicted with P=1, and it would therefore require her to update her belief P(heads) from 0.5 to 0, as she would know it was Tuesday. But she doesn’t get to do that; she just has a token of experience O2. She still knows what day it was at O1(a); no knowledge has been lost. And she still doesn’t know what day it is ‘now’.
[For those following this, note that spatio-temporality is strictly a property of tokens (though we have a linguistic convention of letting types inherit the properties of their tokens, as in “the red-breasted woodpecker can be found in North America”… what that really means is that tokens of the type ‘red-breasted woodpecker’ can be found in North America). This, admittedly, might lead to confusing results that need clarification, and I’m still working on that.]
I’ve been following, but I’m still nonplussed as to your use of the type-token distinction in this context. The comment of mine which was the parent for your type-token observation had a specific request: show me the specific mistake in my math, rather than appeal to a verbal presentation of a non-formal, intuitive explanation.
Take a bag with 1 red marble and 9 green marbles. There is a type “green marble” and it has 9 tokens. The experiences of drawing any particular green marble, while token-distinct, are type-identical. It seems that what matters when we compute our credence for the proposition “the next marble I draw will be green” is the tokens, not the types. When you formalize the bag problem accordingly, probability theory gives you answers that seem quite robust from a math point of view.
If you start out ignorant of how many marbles the bag has of each color, you can ask questions like “given that I just took two green marbles in a row, what is my credence in the proposition ‘the next marble I draw will be green’”. You can compute things like the expected number of green marbles left in the bag. In the bag problem, IOW, we are quantifying our uncertainty over tokens, while taking types to be a fixed feature of the situation. (Which of course is only a convention of this kind of exercise: with precise enough instruments we could distinguish all ten individual marbles.)
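The bag computation described above can be made concrete. A sketch, assuming a uniform prior over the bag’s composition (the prior and the exact setup are my assumptions; the discussion doesn’t fix them):

```python
from fractions import Fraction

n = 10
# Uniform prior over how many of the 10 marbles are green (an assumed prior).
prior = {g: Fraction(1, n + 1) for g in range(n + 1)}

# Likelihood of drawing two greens in a row, without replacement.
def two_greens(g):
    return Fraction(g, n) * Fraction(max(g - 1, 0), n - 1)

norm = sum(prior[g] * two_greens(g) for g in prior)
post = {g: prior[g] * two_greens(g) / norm for g in prior}

# Credence that the next draw is green, and expected greens left in the bag.
p_next_green = sum(post[g] * Fraction(g - 2, n - 2) for g in range(2, n + 1))
e_greens_left = sum(post[g] * (g - 2) for g in range(2, n + 1))
print(p_next_green, e_greens_left)  # 3/4 and 6
```

Here the uncertainty really is over tokens: each hypothesis is a count of how many green tokens the bag holds.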
Statements like “information is gained” or “information is lost” are vague and imprecise, with the consequence that a motivated interpretation of the problem statement will support whichever statement we happen to favor. The point of formalizing probability is precisely that we get to replace such vague statements with precisely quantifiable formalizations, which leave no wiggle room for interpretation.
If you have a formalism which shows, in that manner, why the answer to the Sleeping Beauty question is 1⁄2, I would love to see it: I have no attachment any longer to “my opinion” on the topic.
My questions to you, then, are:
a) given your reasons for “worrying about types rather than tokens” in this situation, how do you formally quantify your uncertainty over various propositions, as I do in the spreadsheet I’ve linked to earlier?
b) what justifies “worrying about types rather than tokens” in this situation, where every other discussion of probability “worries about tokens” in the sense I’ve outlined above in reference to the bag of marbles?
c) how do you apply the type-token distinction in other problems, say, in the case of the Tuesday Boy?
My point was that I didn’t think anything was wrong with your math. If you count tokens the answer you get is 1⁄3. If you count types the answer you get is 1⁄2 (did you need more math for that?). Similarly, you can design payouts where the right choice is 1⁄3 and payouts where the right choice is 1⁄2.
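The payout point can be checked directly. A sketch under assumed betting rules: if a bet on heads settles at every awakening, the break-even credence is 1⁄3; if it settles once per experiment, it is 1⁄2.

```python
from fractions import Fraction

half = Fraction(1, 2)

def expected_profit(q, settlements_if_heads, settlements_if_tails):
    # Buy, at each settlement, a contract costing q that pays 1 if heads.
    return half * settlements_if_heads * (1 - q) - half * settlements_if_tails * q

# Settled at every awakening (heads: 1 awakening, tails: 2): fair price is 1/3.
assert expected_profit(Fraction(1, 3), 1, 2) == 0
# Settled once per experiment: fair price is 1/2.
assert expected_profit(half, 1, 1) == 0
print("per-awakening break-even:", Fraction(1, 3), "per-experiment:", half)
```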
This was a helpful comment for me. What we’re dealing with is actually a special case of the type-token ambiguity: the tokens are actually indistinguishable. Say I flip a coin. If tails, I put six red marbles into a bag which already contains three red marbles; if heads, I do nothing to the bag with three red marbles. I draw a marble and tell Beauty “red”, and then I ask Beauty her credence for the coin having landed heads. I think that is basically isomorphic to the Sleeping Beauty problem. In the original she is woken up twice if tails, but that’s just like having more red marbles to choose from; the experiences are indistinguishable, just like the marbles.
I don’t really think they are. That’s my major problem with the 1⁄3 answer. No one has ever shown me the unexpected experience Beauty must have to update from 0.5. But if you feel that way I’ll try other methods.
Offhand, there is no reason to worry about types, as the possible answers to the questions “Do you have exactly two children?” and “Is one of them a boy born on a Tuesday?” are all distinguishable. But I haven’t thought really hard about that problem; maybe there is something I’m missing. My approach does suggest a reason why the Self-Indication Assumption is wrong: the necessary features of an observer are indistinguishable. So it returns 0.5 for the Presumptuous Philosopher problem.
I’ll come back with an answer to (a). Bug me about it if I don’t. There is admittedly a problem which I haven’t worked out: I’m not sure how to relate the experience-type to the day of the week (time is a property of tokens). Basically, the type by itself doesn’t seem to tell us anything about the day (just like picking the red marble doesn’t tell us whether or not it was added after the coin flip). And maybe that’s a reason to reject my approach. I don’t know.
“No knowledge has been lost”?!?
Memories are knowledge—they are knowledge about past perceptions. They have been lost—because they have been chemically deleted by the amnesia-inducing drug. If they had not been lost, Beauty’s probability estimates would be very different at each interview—so evidently the lost information was important in influencing Beauty’s probability estimates.
That should be all you need to know to establish that the deletion of Beauty’s memories changes her priors, and thereby alters her subjective probability estimates. Beauty awakens, not knowing if she has previously been interviewed—because of her memory loss. She knew whether she had previously been interviewed at the start of the experiment—she hadn’t. So: that illustrates which memories have been deleted, and why her uncertainty has increased.
Yes. The memories have been lost (and the knowledge that accompanies them). The knowledge of what day of the week it is has not been lost because she never had this… as I’ve said four times. I’m just going to keep referring you back to my previous comments because I’ve addressed all this already.
You seem to have got stuck on this “day of the week” business :-(
The point is that Beauty has lost knowledge that she once had—and that is why her priors change. That that knowledge is “what day of the week it currently is” seems to me like a fine way of thinking about what information Beauty loses. However, it clearly bugs you—so try thinking about the lost knowledge another way: Beauty starts off knowing with a high degree of certainty whether or not she has previously been interviewed—but then she loses this information as the experiment progresses—and that is why her priors change.
This example, like the last one, is indexed to a specific time. You don’t lose knowledge about conditions at t1 just because it is now t2 and the conditions are different.
Beauty loses information about whether she has previously attended interviews because her memories of them are chemically deleted by an amnesia-inducing drug—not because it is later on.
Makes sense to me.
Cool. Now, I haven’t quite thought through all this, so it’ll be a little vague. It isn’t anywhere close to being an analytic, formalized argument; I’m just going to dump a bunch of examples that invite intuitions. Basically the notion is: all information is type, not token.

Consider, to begin with, the Catcher in the Rye example. The sentence about the type was about the information contained in the book. This isn’t a coincidence. The most abundant source of types in the history of the world is pure information: not just every piece of text ever written, but every single computer program or file is a type (with its backups and copies as tokens). Our entire information-theoretic understanding of the universe involves this notion of writing the universe like a computer program (with the possibility of running multiple simulations); k-complexity is a fact about types, not tokens (of course this is confusing, since when we think of tokens we often attribute to them the features of their type, but the difference is there).

Persons are types (at least in part; I think our concept of personhood confuses types and tokens). That’s why most people here think they could survive by being uploaded. When Dennett switches between his two brains it seems like there is only one person because there is only one person-type, though two person-tokens. I forget who it was, but someone here has argued, in regard to decision theory, that when we act we should take into account all the simulations of us that may some day be run and act for them as well. This is merely decision theory representing the fact that what matters about persons is the type.
So if agents are types, and in particular if information is types… well then type experiences are what we update on; they’re the ones that contain information. There is no information in tokens beyond their type. Right? Of course, this is just an intuition that needs to be formalized. But is the intuition clear?
I’m sorry this isn’t better formulated. The complexity justifies a top level post which I don’t have time for until next week.