That makes sense. But to be clear, it makes intuitive sense to me that the simulators would want to make their observers as ‘lucky’ as I am, so I assigned 0.5 probability to this hypothesis. Now I realize this is not the same as Pr(I’m distinct | I’m in a simulation), since there’s some weird anthropic reasoning going on: only one side of this probability has billions of observers. But what would be the correct way of approaching this problem? Should I have divided 0.5 by 8 billion? That seems like too much. What is the correct mathematical approach?
Think MMORPGs: what are the chances of the simulation being like that vs a simulation with just a few special beings and the rest NPCs? Even if you say it’s 50/50, then given that MMORPG-style simulations have billions of observers and “observers are special” ones only have a few, an overwhelming majority of simulated observers are actually not that special in their simulations.
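The counting argument here is easy to make concrete. A minimal sketch, assuming equally many simulations of each kind (the 50/50 assumption) and borrowing the figures that come up later in this thread, 8 billion observers per MMORPG-style simulation and 10,000 per “observers are special” simulation:

```python
# Counting sketch: the population figures are illustrative assumptions
# taken from this thread, not established numbers.
mmorpg_sims = 1                        # equally many simulations of each kind
special_sims = 1

observers_per_mmorpg = 8_000_000_000   # every simulated human is an observer
observers_per_special = 10_000         # only a few "special" beings are observers

special_observers = special_sims * observers_per_special
total_observers = mmorpg_sims * observers_per_mmorpg + special_observers

fraction_special = special_observers / total_observers
print(f"Fraction of simulated observers who are special: {fraction_special:.2e}")
```

Even a 50/50 split over simulation types leaves only about one in 800,000 simulated observers in the “special” category, because the weighting is by observer count, not by simulation count.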
Thank you Anon User. I thought a little more about the question and I now think it’s basically the Presumptuous Philosopher problem in disguise. Consider the following two theories that are equally likely:
T1: I’m the only real observer
T2: I’m not the only real observer
For SIA, the odds are 1:(8 billion / 10,000) = 1:800,000 in favour of T2, so indeed, as you said above, most copies of myself are not that special in their simulations.
For SSA, the ratio is instead 10,000:1, so in most universes in the “multiverse of possibilities”, I am the only real observer.
So it’s just a typical Presumptuous Philosopher problem. Does this sound right to you?
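The SIA side of this can be checked in a couple of lines. This is a minimal sketch of the update, assuming (as the stated ratio implies) that T1-style worlds contain about 10,000 observers and T2-style worlds about 8 billion; SIA weights each equally-likely theory by its observer count. The SSA figure is omitted because it depends on the choice of reference class.

```python
# Sketch of the SIA update described above. The observer counts are the
# figures used in this thread; where the 10,000 comes from is an assumption.
prior_t1 = prior_t2 = 0.5          # the two theories start out equally likely
observers_t1 = 10_000              # observers in a T1-style world
observers_t2 = 8_000_000_000       # observers in a T2-style world

# SIA multiplies the prior odds by the ratio of observer counts.
odds_t2_over_t1 = (prior_t2 * observers_t2) / (prior_t1 * observers_t1)
print(odds_t2_over_t1)  # 800000.0, matching the 1:800,000 ratio above
```

The opposite-pointing SSA ratio is exactly what makes this a Presumptuous Philosopher-style case: the two sampling assumptions disagree by many orders of magnitude on the same priors.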
There is no correct mathematical treatment, since this is a disagreement about models of reality. Your prior could be correct if reality is one way, though I think it’s very unlikely.
I will point out though that for your reasoning to be correct, you must literally have Main Character Syndrome, believing that the vast majority of other apparently conscious humans in such worlds as ours are actually NPCs with no consciousness.
I’m not sure why you think that simulators will be sparse with conscious entities. If consciousness is possible at all for simulated beings, it seems likely that it’s not some “special sauce” that they can apply separately to some entities and not to otherwise identical entities, but a property of the structure of the entities themselves. So in my view, an exceptionally tall human won’t be given “special sauce” to make them An Observer, but all sufficiently non-brain-damaged simulated humans will be observers (or none of them).
It might be different if the medically and behaviourally similar (within simulation) “extremest” and “other” humans are not actually structurally similar (in the system underlying the simulation), but are actually very different types of entities that are just designed to appear almost identical from examination within the simulation. There may well be such types of simulations, but that seems like a highly complex additional hypothesis, not the default.
I suspect it’s quite possible to give a mathematical treatment for this question, I just don’t know what that treatment is. I suspect it has to do with anthropics. Can’t anthropics deal with different potential models of reality?
The second part of your answer isn’t convincing to me, because I feel like it assumes we can understand the simulators and their motivations, when in reality we cannot (these may not be the future-human simulators philosophers typically think about, mind you; they could be so radically different that ordinary reasoning about their world doesn’t apply). But anyway, this latter part of your argument, even if valid, only affects the quantitative part of the initial estimates, not the qualitative part, so I’m not particularly concerned with it.
The largest part of my second part is “If consciousness is possible at all for simulated beings, it seems likely that it’s not some “special sauce” that they can apply separately to some entities and not to otherwise identical entities, but a property of the structure of the entities themselves.” This mostly isn’t about simulators and their motivations, but about the nature of consciousness in simulated entities in general.
On the other hand your argument is about simulators and their motivations, in that you believe they largely both can and will apply “special sauce” to simulated entities that are the most extreme in some human-obvious way and almost never to the others.
I don’t think we have any qualitative disagreements, just about what fraction of classes of simulated entities may or may not have consciousness.
Yes, okay, fair enough. I’m not certain about your claim in quotes, but neither am I certain about my claim, which you phrased well in your second paragraph. You have definitely answered this better than anyone else here.
But still, I feel like this problem is somehow similar to the Presumptuous Philosopher problem, and so there should be some anthropic reasoning to deduce which universe I’m likely in / how exactly to update my understanding.