No.
The principled distinction is not about the type of coin; that was just a summary. The principled distinction is about sets of observations and how frequently they correspond to which outcome. And because we don’t have perfect deductive skills, sets of observations that are indistinguishable to the observer with respect to the proposition in question are lumped into one equivalence class.
If you set up the experiment that way, then the equivalence class of the agent’s set of observations is something like “I’m doing a Sleeping Beauty experiment, and the experimenter gave me a hash of the coin’s outcome”. This observation is made many times across different worlds, and the outcome varies, so it behaves randomly (= the SIA answer is correct).
It also behaves randomly if you choose the number of interviews based on the chromatic number of the plane, because Sleeping Beauty cannot differentiate that from other logically uncertain problems that have other outcomes. That was the example I used in the post.
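A minimal Monte Carlo sketch of the standard Sleeping Beauty setup (my own illustration, not from the thread) shows what “behaves randomly” cashes out to: counting indistinguishable interviews, the fraction that follow heads approaches 1/3, the SIA (“thirder”) answer.

```python
import random

def sleeping_beauty_trials(n_experiments, seed=0):
    """Heads -> one interview, tails -> two indistinguishable interviews.
    Returns the fraction of all interviews that occur after heads."""
    rng = random.Random(seed)
    heads_interviews = 0
    total_interviews = 0
    for _ in range(n_experiments):
        if rng.random() < 0.5:   # coin lands heads: Beauty is woken once
            heads_interviews += 1
            total_interviews += 1
        else:                    # coin lands tails: Beauty is woken twice
            total_interviews += 2
    return heads_interviews / total_interviews

print(sleeping_beauty_trials(200_000))  # close to 1/3
```

The function name and parameters are mine; the point is only that frequency over the equivalence class of awakenings, not over coin flips, gives the SIA answer.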
I see. What’s the clearest example of a problem where your theory disagrees with SIA?
If existing isn’t subjectively surprising, and if there’s only one universe (or if all universes are equally large), then my theory is indifferent between a universe with N observers and one with a trillion times N observers, whereas SIA says the latter is a trillion times as likely. The SIA Doomsday argument, which avturchin mentioned, is also a good one: if the filter is always at the same position and if, again, existing isn’t subjectively surprising, my theory rejects it but SIA obviously doesn’t.
The assumptions are necessary. If there are lots of different (simulated) universes, some large and some small, then living in a larger universe really is more likely. And if existence is subjectively surprising, i.e., if it’s actually a trillion times more likely in the larger universe, then the smaller universe is unlikely. That’s the same as updating downward on extinction risk if Many Worlds is false.
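The trillion-to-one SIA weighting mentioned above can be written as a one-line Bayes update (my own sketch; the function name and the 50/50 prior are assumptions for illustration):

```python
def sia_posterior(prior_small, n_small, n_large):
    """SIA weights each hypothesis by its observer count.
    Returns P(small universe | I exist) given a prior on the small one."""
    w_small = prior_small * n_small          # prior times observers in small world
    w_large = (1 - prior_small) * n_large    # prior times observers in large world
    return w_small / (w_small + w_large)

# 50/50 prior, 1 observer vs. a trillion observers:
print(sia_posterior(0.5, 1, 10**12))  # ~1e-12, i.e. a trillion-to-one update
```

With equal observer counts the prior is untouched, which is what the indifference position outputs in every case.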
There might be a cleaner example I haven’t thought of yet. You’d need something where every similar observation is guaranteed to refer to the same proposition, and where you can’t update on having subjective experience at all.
Can you explain what you mean by “existing isn’t subjectively surprising”, without reference to the rest of your theory? I tried to read the explanation in your post but somehow it didn’t click.
The view of consciousness held by most people who have thought about it at all is that there’s some subjective experience that they have and some that other people have, and that the two are different. They don’t imagine that if they die they keep living as other people; they imagine that if they die it’s lights out. I call that individual consciousness (I don’t know if there’s a more common term). In that case, existence would be subjectively surprising. The alternative theory, or at least one alternative theory, is that “you are everyone”: the observer sitting in your brain and the one sitting in my brain are in fact the same. If you can imagine that there’s an alternate you in another universe that has the same consciousness, then all you need to do is extend that concept to everyone. Or if you can imagine that you could wake up as me tomorrow, then all you need to do is imagine that you wake up as everyone tomorrow. I call that singular consciousness.
If individual consciousness is in fact true, then it gets very hard to claim that a smaller universe is as likely as a large one, independent of SIA or SSA. I know most people would probably claim both things, but that leads to some pretty absurd consequences if you think it through.
But if singular consciousness is true there’s no problem. And my honest opinion is that it probably is true. Individual consciousness seems incredibly implausible. If I put you in a coma and make a perfect clone, either that clone and you are the same person or not. If not, then the universe has super-material information, and if so, then there has to be a qualitative difference between a perfect clone and a clone with one atom in the wrong place. Either way seems ridiculous.
> If I put you in a coma and make a perfect clone, either that clone and you are the same person or not.

The two bodies aren’t the selfsame body (numerical identity); they are two entities with identical properties (qualitative identity). You seem to be allowing qualitative identity without numerical identity in the case of the body, but not in the case of consciousness.
> If not, then the universe has super-material information,

That would be spatio-temporal location. Even in austere physicalism, you have to accept that not all information is an intrinsic property of a material body.
You mean identity of particles? I was just assuming that there is no such thing. I agree that if there was, that would be a simpler explanation.
I mean numerical non-identity given qualitative identity (both bodies are made of identical particles in identical configurations). Those are terms of art you can look up.
Giving up on numerical non-identity given qualitative identity is not an option given physics.
I was under the impression that the opposite was the case, that numerical non-identity given qualitative identity is moonshine. I’m not a physicist though, so I can’t argue with you on the object level. Do you think that your position would be a majority view on LW?
> I was under the impression that the opposite was the case, that numerical non-identity given qualitative identity is moonshine.

If you believe that, you shouldn’t be talking about cloning except to say it is impossible.
Consider a thought experiment: you make a very nearly identical copy of something, differing in only one atom; you move the copy and the original to opposite ends of the galaxy; and then you add the missing atom to the copy. What happens next?
> Do you think that your position would be a majority view on LW?

I neither know nor care.
Ok, so I think our exchange can be summarized like this: I am operating on the assumption that numerical non-identity given qualitative identity is not a thing, and you doubt that assumption. We both agree that the assumption is necessary for the argument I made to be convincing.
That’s still confusing to me, maybe let’s try a different tack. The Sleeping Beauty problem, which differentiates between SIA and SSA, can be described procedurally—flip a coin, give someone an amnesia pill, etc. Is there a problem that can be described procedurally and makes your theory disagree with SIA?
No, I don’t think there is. The examples I already gave you were my actual best guesses for the cleanest case. Anything purely procedural seems like it will inevitably come up one way sometimes and another way other times, if we lump it together with similar-seeming procedures, which we have to do. In those cases SIA is always correct. You could probably come up with something not involving consciousness, but you do need some logically uncertain fact to check, and it needs to be very distinct.
I definitely think part of the point of this post is to argue against SSA. Anything that’s covered by the model of randomness I laid out seems very clear-cut to me. That includes normal Sleeping Beauty.
But I really want to know: what is confusing about the consciousness distinction? Is it unclear what the difference is, or do you just doubt that it is allowed to matter?
I think you’ve explained your intuition well, but without examples it doesn’t feel like understanding to me. You’ve said some things that seem interesting, like “super-material information” or “one atom in the wrong place”, maybe you could try making them as precise as possible?
Ok, but I said those two things you quoted only in a short argument why I think individual consciousness is not true. That’s not required for anything relating to the theory. All I need there is that there are different ways that consciousness could work, and that they can play a role for probability. I think that can be kept totally separate from a discussion about which of them is true.
So the argument I made was meant to illustrate that individual consciousness requires a mechanism by which the universe remembers that your conscious experience is anchored to your body in particular, and that it’s hard to see how such a mechanism could exist. People generally fear death not because they are afraid of losing the particular conscious experience of their mind, but because they are afraid of losing all conscious experience, period. This only makes sense if there is such a mechanism.
The reductio ad absurdum is making a perfect clone of someone. Either both versions are subjectively different people, so that if one of them died it wouldn’t be any consolation for her that the other one is still alive; or they are one person living in two different bodies, and either one would have to care about the other as much as about herself, even on a purely egoistical level. One of those two things has to be the case.
If it’s the former, that means the universe somehow knows which one is which even though they are identical on a material level. That’s what I meant by super-material information: there must be something not encoded in particles that the universe can use to tell them apart. I think many of us would agree that such a thing doesn’t exist.
If it’s the latter, then that raises the question of what happens if you make one copy slightly imperfect. Is it a different person once you change one atom? Maybe not. But there must be some number such that if you change that many atoms, they are subjectively different people. If there is such a number, there’s also a smallest number that does this. What follows is that if you change 342513 atoms they are subjectively the same person, but if you change 342514 they’re subjectively different. Or alternatively, it could turn on which particular atoms you change?
Either way seems ridiculous, so my conclusion is that there most likely is no mechanism for conscious individuality, period. That means I rationally have no reason to care about my own well-being any more than about anyone else’s, because anyone else is really just another version of myself. I think most people find this super unintuitive, but it’s actually a simpler theory: it doesn’t give you any trouble with the cloning experiment, because now both clones are always the same person no matter how much you change, and it solves the problem of “what a surprise that I happen to be born instead of person-X-who-never-existed!”. It seems to be the far more plausible theory.
But again, you don’t need to agree that one theory of consciousness is more plausible for any of the probability stuff, you only need to agree that there are two different ways it could work.
So one of those ways will agree with SIA and the other will disagree, right? Let’s focus on the second one then. Can you give a procedural problem where the second way disagrees with SIA?
No; like I said, procedures tend to be repeatable. Maybe there is one, but I haven’t come up with one yet. What’s wrong with the presumptuous philosopher problem (about two possible universes) as an example?
Let’s say God flipped a logical coin to choose between creating a billion or a trillion observers in a single universe. Is that equivalent to your example?
Yes.
I’m not used to the concept of a logical coin, but yes, that’s equivalent.
You need the consciousness condition, and that God only does this once. Then my theory outputs the SSA answer.
What if God does that many times, but you can distinguish between them? First flip a blue coin to decide between creating a billion or a trillion blue people. Then flip a purple coin to decide between creating a billion or a trillion purple people. And so on, for many different colors. You know your own color: green. What are your beliefs about the green coin?
1000:1 on tails (with tails → create the large universe). It’s a very good question. My answer is late because it made me think about some things that confused me at first, and I wanted to make sure that everything I say now is coherent with everything I said in the post.
If God flipped enough logical coins for you to be able to make the approximation that half of them came up heads, you can update on how your own logical coin landed based on the fact that your current version is green. Being green is a thousand times as likely if the green coin came up tails rather than heads. You can’t do the same if God only created one universe.
If God created more than one but still only a few universes, let’s say two, then the chance that your coin came up heads is a bit more than a quarter, which comes from the heads-heads case. The heads-tails case is possible but highly unlikely.
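The many-colored-coins case above can be checked by simulation (my own sketch; the 1000 colors, the billion/trillion populations, and the function name are illustrative assumptions). With many independent coins, a uniformly random person of any given color finds their coin came up tails with odds of about 1000:1:

```python
import random

def green_coin_tails_odds(n_colors=1000, small=10**9, large=10**12, seed=1):
    """Each color's coin independently creates `small` (heads) or `large`
    (tails) people. Returns the probability that a uniformly random
    person's own coin came up tails."""
    rng = random.Random(seed)
    pops = {c: (large if rng.random() < 0.5 else small) for c in range(n_colors)}
    total_people = sum(pops.values())
    tails_people = sum(p for p in pops.values() if p == large)
    return tails_people / total_people

print(green_coin_tails_odds())  # close to 1000/1001
```

When only one coin is ever flipped, there is no population of other-colored people to be sampled from, which is where the disagreement with SIA reappears.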
> If it’s the former, that means the universe somehow knows which one is which even though they are identical on a material level.

Note that “the universe” is already keeping track of two identical bodies... which are, of course, in different places... which gives you a hint as to how the trick is pulled off.
Under dualism, there is a problem of how to match up 7 billion souls to 7 billion bodies. Under physicalism, the individual self just is the body-brain, there is no logical possibility of a mismatch, and whatever mechanism (i.e., different spatial location) allows the universe to have two identical but distinct bodies allows it to have two identical but distinct consciousnesses.