There is no correct mathematical treatment, since this is a disagreement about models of reality. Your prior could be correct if reality is one way, though I think it’s very unlikely.
I will point out, though, that for your reasoning to be correct, you must literally have Main Character Syndrome: you would have to believe that the vast majority of other apparently conscious humans in worlds like ours are actually NPCs with no consciousness.
I’m not sure why you think that simulators will be sparse with conscious entities. If consciousness is possible at all for simulated beings, it seems likely that it’s not some “special sauce” that they can apply separately to some entities and not to otherwise identical entities, but a property of the structure of the entities themselves. So in my view, an exceptionally tall human won’t be given “special sauce” to make them An Observer, but all sufficiently non-brain-damaged simulated humans will be observers (or none of them).
It might be different if the medically and behaviourally similar (within the simulation) “extreme” and “other” humans are not actually structurally similar (in the system underlying the simulation), but are instead very different types of entities merely designed to appear almost identical under examination from within the simulation. There may well be such types of simulations, but that is a highly complex additional hypothesis, not the default.
I suspect it’s quite possible to give a mathematical treatment of this question; I just don’t know what that treatment is. I suspect it has to do with anthropics. Can’t anthropics deal with different potential models of reality?
The second part of your answer isn’t convincing to me, because it seems to assume we can understand the simulators and their motivations, when in reality we cannot (these may not be the future-human simulators philosophers typically think about, mind you; they could be so radically different that ordinary reasoning about their world doesn’t apply). But in any case, this latter part of your argument, even if valid, only affects the quantitative part of the initial estimates, not the qualitative part, so I’m not particularly concerned with it.
The largest part of my second part is “If consciousness is possible at all for simulated beings, it seems likely that it’s not some ‘special sauce’ that they can apply separately to some entities and not to otherwise identical entities, but a property of the structure of the entities themselves.” This mostly isn’t about simulators and their motivations, but about the nature of consciousness in simulated entities in general.
Your argument, on the other hand, is about simulators and their motivations, in that you believe they both can and, for the most part, will apply “special sauce” to the simulated entities that are most extreme in some human-obvious way, and almost never to the others.
I don’t think we have any qualitative disagreements, just quantitative ones about what fraction of each class of simulated entities has consciousness.
Yes, okay, fair enough. I’m not certain about your claim in quotes, but neither am I certain about my own claim, which you phrased well in your second paragraph. You have definitely answered this better than anyone else here.
But still, I feel like this problem is somehow similar to the Presumptuous Philosopher problem, and so there should be some anthropic reasoning that tells me which universe I’m likely in / how exactly to update my understanding.
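For concreteness, here is a toy sketch of the kind of anthropic update I have in mind, using SIA (the Self-Indication Assumption, the very rule the Presumptuous Philosopher problem is designed to stress-test). The priors and observer counts are made-up placeholders, not estimates from this discussion:

```python
# Toy SIA-style anthropic update between two hypotheses about a simulation.
# All numbers are illustrative placeholders, not estimates from this thread.

# H_sparse: only the most "extreme" simulated humans are conscious observers.
# H_dense:  all structurally normal simulated humans are conscious observers.
prior = {"H_sparse": 0.5, "H_dense": 0.5}       # assumed 50/50 prior
observers = {"H_sparse": 1e3, "H_dense": 8e9}   # assumed observer counts

# Under SIA, each hypothesis is weighted by the number of observers it
# predicts: finding yourself to be an observer at all is evidence for
# observer-rich worlds.
weights = {h: prior[h] * observers[h] for h in prior}
total = sum(weights.values())
posterior = {h: w / total for h, w in weights.items()}

print(posterior)
# {'H_sparse': ~1.25e-07, 'H_dense': ~0.9999999}
```

The point is just the shape of the calculation: SIA boosts observer-rich hypotheses in proportion to their observer counts, which is exactly why it gives the “presumptuous” answer in the original problem, while SSA (on the natural reference class) would leave the 50/50 prior roughly untouched. Our disagreement may be that dispute in miniature.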