I suspect it’s quite possible to give a mathematical treatment for this question, I just don’t know what that treatment is. I suspect it has to do with anthropics. Can’t anthropics deal with different potential models of reality?
The second part of your answer isn’t convincing to me, because I feel like it assumes we can understand the simulators and their motivations, when in reality we cannot (these may not be the future-human simulators philosophers typically think about, mind you; they could be so radically different that ordinary reasoning about their world doesn’t apply). But anyway, this latter part of your argument, even if valid, only affects the quantitative part of the initial estimates, not the qualitative part, so I’m not particularly concerned with it.
The largest part of my second part is “If consciousness is possible at all for simulated beings, it seems likely that it’s not some ‘special sauce’ that they can apply separately to some entities and not to otherwise identical entities, but a property of the structure of the entities themselves.” This mostly isn’t about simulators and their motivations, but about the nature of consciousness in simulated entities in general.
On the other hand your argument is about simulators and their motivations, in that you believe they largely both can and will apply “special sauce” to simulated entities that are the most extreme in some human-obvious way and almost never to the others.
I don’t think we have any qualitative disagreements, only a disagreement about what fraction of classes of simulated entities may or may not have consciousness.
Yes okay fair enough. I’m not certain about your claim in quotes, but neither am I certain about my claim which you phrased well in your second paragraph. You have definitely answered this better than anyone else here.
But still, I feel like this problem is somehow similar to the Presumptuous Philosopher problem, and so there should be some anthropic reasoning to deduce which universe I’m likely in / how exactly to update my understanding.