I assume simulated observers are quite likely to be ‘special’ or ‘distinct’ with respect to the class of other entities in their simulated world that appear to be observers. (Though perhaps this assumption is precisely my error.)
Yes, it is your main error. Think about how justified this assumption is given your state of knowledge. How much evidence do you actually have? Have you checked many simulations before generalizing this principle? Or are you just speculating from total ignorance?
Should I be applying SIA here to argue that this latter probability is much smaller, because simulated worlds in which the other observers are real and not ‘illusory’ would have a low probability of distinctiveness and far more observers? I don’t know if this is sound. Or should I be using SSA instead to make an entirely separate argument?
For your own sake, please don’t. Both SIA and SSA are likewise unjustified assumptions pulled out of nowhere, and they lead to even more counterintuitive conclusions.
Instead, consider these two problems.
Problem 1:
There is a grey bag filled in equal proportion with balls of a hundred distinct colors. And there is a blue bag, half of whose balls are blue. Someone has put their hand into one of the bags, picked a random ball from it, and given it to you. The ball happened to be blue. What are the odds that it’s from the blue bag?
Problem 2:
There is a grey bag with some balls. And there is a blue bag with some balls. Someone has put their hand into one of the bags, picked a random ball from it, and given it to you. The ball happened to be blue. What are the odds that it’s from the blue bag?
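To see the contrast concretely, here is a minimal sketch of the Problem 1 calculation, assuming (which the problem doesn’t state outright) that each bag was equally likely to be picked:

```python
# A minimal Bayes check for Problem 1.
# Assumption (mine): the ball-giver chose each bag with probability 1/2.

p_bag = 0.5                  # prior probability for each bag
p_blue_given_blue = 0.5      # half of the blue bag's balls are blue
p_blue_given_grey = 1 / 100  # one of a hundred equally common colors

posterior = (p_bag * p_blue_given_blue) / (
    p_bag * p_blue_given_blue + p_bag * p_blue_given_grey
)
print(posterior)  # ~0.980, i.e. 50:1 odds in favor of the blue bag

# For Problem 2, both likelihoods above are unknown, so the same
# calculation cannot even be written down without assuming them.
```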
Are you justified in believing that Problem 2 has the same answer as Problem 1? That you can simply assume that half of the balls in the blue bag are blue? Not after you have gone and checked a hundred random blue bags and found that in all of them half the balls were blue, but just a priori? And likewise for the grey bag. Where would these assumptions be coming from?
You can come up with some plausible-sounding just-so story: that the people filling the bags felt the urge to put blue balls in the blue bag. But what about the opposite just-so story, where people were disincentivized to put blue balls in the blue bag? Or where people paid no attention to the color of the bag? Or all the other possible just-so stories? Why do you prioritize this one in particular?
Maybe you imagine yourself tasked with filling two bags with balls of different colors, and when you inspect your thinking process in such a situation, you feel the urge to put a lot of blue balls in the blue bag.
But why would the way you’d fill the bags be entangled with the actual causal process that filled these bags in the general case? You don’t know that the bags were filled by people with your sensibilities. You don’t even know that they were filled by people to begin with.
Or spin it the other way. Suppose you could systematically produce correct reasoning by simply assuming things like that. What would be the point of gathering evidence then? Why spend extra energy checking how blue bags and grey bags are organized if you can confidently deduce it a priori?
But, on second thought, why are you confident that the way I’d fill the bags is not “entangled with the actual causal process that filled these bags in the general case”? It seems likely that my sensibilities reflect, at least in some manner, the sensibilities of my creator, if such a creator exists.
Actually, in addition, my argument still works if we only consider simulations in which I’m the only human and I’m distinct (on my aforementioned axis) from other human-seeming entities. In that case the 0.5 probability becomes identically 1, and I sidestep your argument. So if I assign any non-zero prior to this theory whatsoever, the observation that I’m distinct makes this theory way, way, way more likely.
The only part of your comment I still agree with is that SIA and SSA may not be justified. Which means my actual error may have been to set Pr(I’m distinct | I’m not in a sim) = 0.0001 instead of identically 1, since the 0.0001 figure assumes SSA. Does that make sense to you?
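Spelling out the update I’m claiming in odds form (the concrete prior below is just a placeholder for “any non-zero prior”):

```python
# The claimed update, in odds form. "sim*" stands for the restricted
# hypothesis: a simulation in which I'm the only human and I'm distinct
# on my chosen axis. The prior is a placeholder; the argument only
# needs it to be non-zero.

prior = 1e-9
prior_odds = prior / (1 - prior)

# P(I observe distinctness | sim*) = 1 by construction of sim*;
# P(I observe distinctness | not in a sim) = 0.0001 under SSA-style counting.
likelihood_ratio = 1.0 / 0.0001

posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds)  # the odds on sim* rise by a factor of 10,000
```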
But thank you for responding to me; you are clearly an expert in anthropic reasoning, as I can see from your posts.
why are you confident that the way I’d fill the bags is not “entangled with the actual causal process that filled these bags in the general case”?
Most ways of reasoning are not entangled with most causal processes. When we do not have much reason to think that a particular way of reasoning is entangled, we don’t expect it to be. It’s possible to simply guess correctly, but it’s not probable. That’s not the way to systematically arrive at truth.
It seems likely that my sensibilities reflect, at least in some manner, the sensibilities of my creator, if such a creator exists.
Even if it’s true, how could you know that it’s true? Where does this “seeming” come from? Why do you think it’s more likely that a creator would imprint their own sensibilities in you rather than literally any other possibility?
If you are in a simulation, you are trying to speculate about the reality outside the simulation based on information from inside the simulation. None of this information is particularly trustworthy unless you already know for a fact that the properties of the simulation represent the properties of base reality.
my argument still works if we only consider simulations in which I’m the only human and I’m distinct (on my aforementioned axis) from other human-seeming entities.
Have you heard about the Follow-The-Improbability game?
I recommend you read the linked post and think for a couple of minutes about how it applies to your comment before reading my answer further. Try to track the flow of improbability yourself and understand why the total value doesn’t decrease when we consider only a specific type of simulation.
So.
You indeed can consider only a specific type of simulation. But if you don’t have actual evidence that would justify prioritizing this hypothesis over all the others, the overall improbability stays the same; you just pass the buck to other factors.
Consider Problem 2 once again.
You can reason conditionally on the assumption that all the balls in the blue bag are blue while the balls in the grey bag have random colors. That would give you a very strong update in favor of the blue bag… conditional on your assumption being true.
The prior probability of this assumption being true is very low. It is low in exact proportion to how much you updated in favor of the blue bag conditional on it, so that when you calculate the total probability, it stays the same.
Only when you have observed actual evidence in favor of your assumption does the improbability go somewhere. And the more improbable the observation you got, the more improbability is removed.
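Here is that accounting sketched with made-up numbers (the hypotheses and their priors below are mine, purely for illustration):

```python
# Split Problem 2 over two illustrative hypotheses about the bags.
#   H1: every ball in the blue bag is blue; grey-bag colors are random
#       over a hundred colors (prior 0.01)
#   H2: ball colors are unrelated to bag color; say each bag is 10% blue
#       (prior 0.99)

priors = {"H1": 0.01, "H2": 0.99}
# (P(blue ball and blue bag | H), P(blue ball | H)), bags a priori 50/50:
likelihoods = {
    "H1": (0.5 * 1.0, 0.5 * 1.0 + 0.5 * 0.01),
    "H2": (0.5 * 0.1, 0.5 * 0.1 + 0.5 * 0.1),
}

# Conditional on each hypothesis, the update the blue ball gives you:
for h, (joint, total) in likelihoods.items():
    print(h, joint / total)  # H1: ~0.99 (strong update), H2: 0.50 (none)

# The honest answer weighs each conditional answer by P(H | blue ball),
# which is proportional to P(H) * P(blue ball | H):
weights = {h: priors[h] * likelihoods[h][1] for h in priors}
norm = sum(weights.values())
answer = sum(
    weights[h] / norm * likelihoods[h][0] / likelihoods[h][1] for h in priors
)
print(answer)  # ~0.52: H1's strong update is almost entirely cancelled
               # by H1's low prior, so the total barely moves from 0.5
```

The strong conditional update and the low prior offset each other; only actually looking inside the bags would move the total.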
There is no free energy in the engine of cognition.
Thank you, Ape, this sounds right.