Instead of examining how SIA behaves given that multiverse theories are true, a better approach is to examine what SIA says about the validity of multiverse theories.
And the result is simple: SIA heavily favours multiverse theories, as they greatly inflate the total number of observers in existence. It does not matter what kind of multiverse theory it is. It could be a very, very large universe (and thus many causally independent regions), a plethora of universes with different physical parameters, the many-worlds interpretation of quantum mechanics, or the simulation argument, where a supermajority of observers are computer-generated.
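To make this concrete, here is a minimal sketch of the update SIA performs. The theory names, priors, and observer counts are all invented for illustration; the only point is that the posterior is proportional to prior times predicted observer count.

```python
# Minimal sketch of the SIA update (invented numbers): each theory's prior
# is weighted by the total number of observers that theory predicts.
priors = {"single_universe": 0.5, "multiverse": 0.5}
observers = {"single_universe": 1e10, "multiverse": 1e100}

weights = {t: priors[t] * observers[t] for t in priors}
total = sum(weights.values())
posteriors = {t: w / total for t, w in weights.items()}

print(posteriors)  # the multiverse theory gets essentially all the posterior mass
```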
In my experience, most people are unwilling to bite this bullet and say those theories are true simply because “I exist”. I have seen two common ways of attempting to save SIA. 1: Play with the reference class, arguing that it should not include observers from other universes, e.g. “I could not have been an observer from another universe” or “I reject the assumption that I could be a computer programme completely.” 2: Play with infinity, e.g. “It is difficult to apply probabilistic judgments when infinity is part of the problem, and many multiverse theories imply infinity.” But neither is very convincing.
SIA favours many universes, but only of the sort that could produce you (not ones with different physical parameters such that they can only produce intelligent octopuses, not bipeds, since you are not an octopus).
I don’t see why multiverse theories would necessarily involve infinities. There could be a finite number of universes, each of finite size.
True, SIA favours theories more likely to produce “me”. However, in the context of SIA, “me” is not defined by specific physical parameters. SIA is not concerned with the physical parameters of the observer, but with the subjective state. Take the simulation argument as an example. My existence favours theories on which there are many computer-run simulations of human civilization. My own physical parameters are unknown: I could very well be a programme living in a simulation instead of a real human. Or take your example of intelligent octopuses. If a theory says there are many intelligent octopuses who each think they are a human being (maybe by an octopus-in-a-vat kind of experiment), then SIA would still favour such a theory. And I could very well be such an octopus instead of a human, physically.
In my past experience, SIA supporters who dislike this often resort to limiting the reference class, saying something like “I reject the possibility that I could be a programme” outright.
I agree with you completely on the infinity argument. I don’t think it is a valid defense of SIA. Yet I have seen it used from time to time by its supporters.
Yes, it’s “more likely to produce me” in terms of subjective experience that counts, but if one ignores simulation-style scenarios, octopuses and bipeds will of course have distinct subjective experiences of seeing their own arms.
In simulation scenarios, the simulated world needn’t have the same physical laws as the actual one (limited only by the imagination of the programmers), so people who think they’re intelligent bipeds could exist in an actual universe where only intelligent octopuses can evolve. But there are so many unresolved issues in such scenarios that I’m at a loss of how to think about them. (For example, once the programmer has written the simulation program, for a deterministic computer, is it necessary to actually run the program in order for the simulated people to exist?)
I don’t get the problem with octopuses here. What I am writing here is not causally connected with the number of my hands. Octopus-world-LW could have similar conversations. But as I find myself a biped, it is an argument that biped-world-LWs are more common.
Sure, octopuses could write too. But you are not, in fact, an octopus (assuming reality is what it seems). So the evidence you have for evaluating cosmological theories does not favour universes/multiverses with large numbers of intelligent octopuses, since you have no evidence that intelligent octopuses exist. But you do know that you exist.
If you like, you can bump up the probability of cosmological theories that posit a universe with a large number of intelligent observers, whether octopuses or bipeds (SIA), but then you have to push down the probability of those theories in which most of these observers are octopuses, since you aren’t one (SSA). The net effect is to just favour cosmological theories that make it more likely that you exist.
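A toy calculation of this cancellation, with invented numbers and theory names, may help. The combined weight is prior times total observers (the SIA boost) times the fraction of observers who are bipeds like you (the SSA update):

```python
# Toy illustration: the SIA boost (proportional to total observers) combined
# with the SSA update (the chance a random observer is a biped, like you).
theories = {
    # name: (prior, total observers, fraction of observers that are bipeds)
    "small_biped_world": (0.5, 1e12, 1.0),    # fewer observers, all bipeds
    "huge_octopus_world": (0.5, 1e15, 1e-6),  # far more observers, mostly octopuses
}

weights = {t: p * n * f for t, (p, n, f) in theories.items()}
z = sum(weights.values())
print({t: w / z for t, w in weights.items()})
# The net weight is prior * (number of biped observers), so the huge octopus
# world gains nothing from its extra non-biped observers.
```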
See my paper at http://www.cs.utoronto.ca/~radford/anth.abstract.html for a more extended exposition.
Thanks for the link. I saw your article before, but this explanation helps me to understand FNC better.
Yes, I agree there are unresolved issues. There simply is no widely accepted way of reasoning about subjective experience. Given this, it seems all the more unreasonable to assert that simulation of subjective experience is absolutely impossible (prior probability = 0). Yet even if we give it only a small prior, then due to the overwhelmingly large number of human-like experiences the simulation theory suggests, SIA would push its probability to near unity. That is why many SIA supporters make the above-mentioned assertion.
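To spell out the arithmetic behind “a small prior pushed to near unity” (again, all numbers are invented):

```python
# Even a tiny prior on the simulation theory is overwhelmed once that theory
# predicts vastly more human-like observers. All numbers are invented.
prior_sim, prior_real = 1e-6, 1 - 1e-6
obs_sim, obs_real = 1e20, 1e10  # human-like observers under each theory

w_sim, w_real = prior_sim * obs_sim, prior_real * obs_real
print(w_sim / (w_sim + w_real))  # ~0.9999: near unity despite the tiny prior
```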
Yes, I agree, and in fact I said at the end of the post that “So, SIA proves that universe is infinite and stops here”.
The problem here is that after we use SIA to show that the world is infinite, we can’t use it for anything else. For example, for any world with heads in Sleeping Beauty, there is a world that is exactly the same except that the coin is tails. What do you think about this?
Yes, I saw it and I agree. I was suggesting that SIA=SSA given a multiverse is not surprising, because their difference lies in assessing whether or not the multiverse is true. Once we look past that part, they behave the same.
For the Sleeping Beauty problem, accepting a multiverse theory does not change the conclusion for SIA supporters. It is still 1/3, since there are two awakenings in the tails world and one in the heads world, all of which exist. Applying an SSA type of update over these awakenings gives 1/3.
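The counting behind that 1/3, made explicit (a sketch under the stated assumption that both the heads world and the tails world exist):

```python
# Count awakenings across both worlds, which are assumed to exist together,
# then take the fraction of awakenings that occur in the heads world.
awakenings = {"heads": 1, "tails": 2}  # Monday only vs. Monday and Tuesday
p_heads = awakenings["heads"] / sum(awakenings.values())
print(p_heads)  # 1/3
```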
Of course, one can argue that if the world is infinite then there are infinitely many awakenings for both heads and tails, and how to do a Bayesian update in that case becomes unclear. However, I just don’t think that is a very powerful counter to SIA. Infinity is always a difficult concept, and anthropic paradoxes already exist without it.
My opinion is that while SSA and SIA are both wrong, SIA at least errs consistently. It generates fewer paradoxes than SSA. The “free lunch” of confirming theories with more observers simply because “I exist” is the exception.
I agree with all you said except the conclusion :)
For me, both are right. SIA works by showing that all possible observers exist.
SSA then works by looking at the distribution of observables.
Understandable. Most if not all thinkers treat anthropic reasoning as some sort of observation selection effect. That has been the case since Nick Bostrom laid the foundation of the subject. I might be the only one who thinks that treating first-person concepts (like “myself”, “I”, “now”, and “here”) as random samples is where the whole mistake lies.
There is a theory called “full non-indexical conditioning”.
Do you know about it? Is it close to your view? I am not yet very familiar with it, but I saw some papers on arXiv.
Yes, I know about it. In fact, it was originally proposed by Prof. Radford Neal from the University of Toronto. And he is replying right below you. :)
No, it is very different from my view. In my opinion, FNC is similar to SIA and SSA: it still assumes a particular sampling procedure and considers indexicals (like “I” or “now”) to be outcomes of that sampling process. FNC is more subtle, as it does not explicitly state anything like “consider oneself as randomly selected from such and such”. Nonetheless, it performs probability calculations as if someone with all the specific experiences of “me” (first person) were being sampled and successfully found among existing observers.
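On my reading, the calculation FNC performs looks roughly like the following sketch (this is my own paraphrase, not Neal’s presentation; the matching probability and observer counts are invented):

```python
# FNC-style conditioning: update on "at least one observer with exactly my
# detailed experience exists". With n observers each matching with tiny
# probability eps, that likelihood is 1 - (1 - eps)**n, which grows with n,
# so large worlds are favoured much as under SIA.
eps = 1e-9  # invented chance that a given observer has exactly my experience
priors = {"small_world": 0.5, "big_world": 0.5}
n_observers = {"small_world": 1e6, "big_world": 1e12}

likelihood = {t: 1 - (1 - eps) ** n_observers[t] for t in priors}
weights = {t: priors[t] * likelihood[t] for t in priors}
z = sum(weights.values())
print({t: w / z for t, w in weights.items()})  # the big world is strongly favoured
```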
My opinion is that indexicals cannot be regarded as sampling outcomes, and they cannot be removed from anthropic problems. They should be treated as concepts that are inherently understood. E.g. I don’t need anything to differentiate myself from other people; I just inherently know who “I” is. Anthropic problems are set up using indexicals in this way, and they should be solved that way.
The octopus example helped me grok FNC, but I still don’t have a clear example that would help me better understand your point of view.
Something was lost in the discussion of the octopus example between me and Prof. Neal. What I meant is this: suppose a theory says there are many intelligent octopuses whose subjective experience is human-like (maybe via a Matrix-style octopus-in-a-vat experiment), i.e. each octopus thinks it is a biped human, even though physically it is not. Then, no matter how crazy that sounds, as long as the theory greatly inflates the total number of observers with human-like experience, SIA will endorse it with a high degree of confidence. FNC does so too.
For my position, see this post for a start.
Thanks for the link.