The non-intuitive form of SIA simply says that universes with many observers are more likely than those with few; the more intuitive formulation is that you should consider yourself as a random observer drawn from the space of possible observers (weighted according to the probability of that observer existing).
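As a concrete sketch of the non-intuitive formulation, here is a toy Bayesian update in which the likelihood of each universe is proportional to its observer count. All numbers are illustrative assumptions, not figures from the discussion:

```python
# Toy SIA update: two hypothetical universes with equal priors but
# different observer counts (both counts are made-up assumptions).
priors = {"few": 0.5, "many": 0.5}
observers = {"few": 10, "many": 1000}

# SIA: P(universe | I exist) is proportional to P(universe) * N_observers.
weights = {u: priors[u] * observers[u] for u in priors}
total = sum(weights.values())
posterior = {u: w / total for u, w in weights.items()}

print(posterior)  # the "many" universe dominates: roughly 0.99 vs 0.01
```

The "more intuitive" formulation gives the same numbers: a random observer drawn from all possible observers, weighted by existence probability, is far more likely to find themselves in the populous universe.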
I’m relatively green on the Doomsday debate, but:
Isn’t this inserting a hidden assumption about what kind of observers we’re talking about? What definition of “observer” do you get to use, and why? In order to “observe”, all that’s necessary is that you form mutual information with another part of the universe, and conscious entities are a tiny sliver of this set in the observed universe. So the SIA already puts a low probability on the data.
I made a similar point before, but apparently there’s a flaw in the logic somewhere.
SIA does not require a definition of observer. You need only compare the number of experiences exactly like yours (otherwise you can compare those like yours in some aspects, then update on the other info you have, which would get you to the same place).
SSA requires a definition of observers, because it involves asking how many of those are having an experience like yours.
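The claim that the two routes “get you to the same place” can be checked on a toy model. In this sketch (all counts are made-up assumptions), route (a) applies SIA directly to experiences exactly like mine, while route (b) applies SIA to a broad class of observers and then updates on the extra information:

```python
# Two hypothetical universes; "exact" is the subset of observers whose
# experiences are exactly like mine (all counts are made-up assumptions).
prior = {"A": 0.5, "B": 0.5}
total = {"A": 1000, "B": 100}
exact = {"A": 50, "B": 20}

def normalize(weights):
    s = sum(weights.values())
    return {k: v / s for k, v in weights.items()}

# Route (a): SIA directly on experiences exactly like mine.
route_a = normalize({u: prior[u] * exact[u] for u in prior})

# Route (b): SIA on the broad class of all observers, then a Bayesian
# update on the extra information that my experience lies in the "exact"
# subset, with likelihood exact/total.
broad = normalize({u: prior[u] * total[u] for u in prior})
route_b = normalize({u: broad[u] * (exact[u] / total[u]) for u in prior})

# Both routes yield the same posterior.
assert all(abs(route_a[u] - route_b[u]) < 1e-12 for u in prior)
```

Algebraically this is just prior × total × (exact/total) = prior × exact, which is why SIA can dispense with a choice of broad reference class.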
The debate about what constitutes an “observer class” is one of the most subtle in the whole area (see Nick Bostrom’s book). Technically, SIA and similar principles will only work as “given this definition of observers, SIA implies...”, but some definitions are more sensible than others.
It’s obvious you can’t separate two observers with the same subjective experiences, but how much of a difference does there need to be before the observers are in different classes?
I tend to work with something like “observers who think they are human”, tweaking the issue of longevity (does someone who lives 60 years count as the same observer, or as twice as much of an observer, as someone who lives 30 years?) as needed for the question.
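The longevity question matters because it changes the weights in the SIA update. A minimal sketch, with made-up counts, comparing per-person weighting against lifespan-weighted (observer-moment) counting:

```python
# Does a 60-year life count as one observer, or as two 30-year lives'
# worth? The two conventions give different SIA posteriors.
# All counts and lifespans are made-up assumptions.
prior = {"A": 0.5, "B": 0.5}
people = {"A": 100, "B": 100}      # head counts per universe
lifespan = {"A": 60, "B": 30}      # average years lived

def normalize(weights):
    s = sum(weights.values())
    return {k: v / s for k, v in weights.items()}

per_person = normalize({u: prior[u] * people[u] for u in prior})
per_moment = normalize({u: prior[u] * people[u] * lifespan[u] for u in prior})

print(per_person)  # equal head counts: 0.5 each
print(per_moment)  # lifespan-weighted: A gets 2/3, B gets 1/3
```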
Okay, but it’s a pretty significant change when you go to “observers who think they are human”. Why should you expect a universe with many of that kind of observer? At the very least, you would be conditioning on more than just your own existence: you would also be conditioning on additional observations about your “suit”.
As I said, it’s a complicated point. For most of the toy models, “observers who think they are human” is enough, and avoids having to go into these issues.
Not unless you can explain why “universes with many observers who think they are human” are more common than “universes with few observers who think they are human”. Even when you condition on your own existence, you have no reason to believe that most Everett branches have humans.
Er no—they are not more common at all. The SIA says that you are more likely to exist in a universe with many humans, not that these universes are more common.
Your TL post said:
“The non-intuitive form of SIA simply says that universes with many observers are more likely than those with few.”
And you just replaced “observers” with “observers who think they are human”, so it seems like the SIA does in fact say that universes with many observers who think they are human are more likely than those with few.
Sorry, sloppy language—I meant “you, being an observer, are more likely to exist in a universe with many observers”.
So then the full anthropocentric SIA would be, “you, being an observer that believes you are human, are more likely to exist in a universe with many observers who believe they are human”.
Is that correct? If so, does your proof prove this stronger claim?
Wouldn’t the principle be independent of the form of the observer? If we said “universes with many human observers are more likely than universes with few,” the logic would apply just as well as it does for matter-based observers or for observers defined as mutual-information-formers.
But why is the assumption that universes with human observers are more likely (than those with few) plausible or justifiable? That’s a fundamentally different claim!
I agree that it’s a different claim, and not the one I was trying to make. I was just noting that however one defines “observer,” the SIA would suggest that such observers should be many. Thus, I don’t think that the SIA is inserting a hidden assumption about the type of observers we are discussing.
Right, but my point was that your definition of observer has a big impact on your SIA’s plausibility. Yes, universes with observers in the general sense are more likely, but why universes with more human observers?
Why would being human change the calculus of the SIA? According to its logic, even if a universe merely has more human observers, there are still more opportunities for me to exist, no?
My point was that the SIA(human) is less plausible, meaning you shouldn’t base conclusions on it, not that the resulting calculus (conditional on its truth) would be different.
That’s what I meant, though: you don’t calculate the probability of SIA(human) any differently than you would for any other category of observer.
Surely the extremes “update on all available information” and “never update on anything” are each more plausible than any mixture like “update on the observation that I exist, but not on the observation that I’m human”.
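The three policies can be contrasted on a toy model. In this sketch (all counts are illustrative assumptions), universe X is rich in observers generally while universe Y is rich in human observers; the choice of what to update on decides which universe the posterior favors:

```python
# Three update policies applied to the same made-up model: universe X has
# vastly more observers overall, universe Y has vastly more human observers.
prior = {"X": 0.5, "Y": 0.5}
total_observers = {"X": 10**6, "Y": 10**3}
human_observers = {"X": 10, "Y": 900}

def normalize(weights):
    s = sum(weights.values())
    return {k: v / s for k, v in weights.items()}

never_update = dict(prior)  # ignore all observations
exist_only = normalize({u: prior[u] * total_observers[u] for u in prior})
all_info = normalize({u: prior[u] * human_observers[u] for u in prior})

print(never_update)  # 0.5 each
print(exist_only)    # X dominates (more observers of any kind)
print(all_info)      # Y dominates (more human observers)
```

The mixed policy (update on existence but not on being human) lands on `exist_only`, which disagrees sharply with the full-information posterior; that disagreement is the point being made above.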