But, on second thought, why are you confident that the way I’d fill the bags is not “entangled with the actual causal process that filled these bags in a general case?” It seems likely that my sensibilities reflect at least in some manner the sensibilities of my creator, if such a creator exists.
Actually, in addition, my argument still works if we only consider simulations in which I’m the only human and I’m distinct (on my aforementioned axis) from other human-seeming entities. So the 0.5 probability becomes identically 1, and I sidestep your argument. So if I assign any non-zero prior to this theory whatsoever, the observation that I’m distinct makes this theory way way way more likely.
The only part of your comment I still agree with is that SIA and SSA may not be justified. Which means my actual error may have been to set Pr(I’m distinct | I’m not in a sim)=0.0001 instead of identically 1 — since 0.0001 assumes SSA. Does that make sense to you?
But thank you for responding to me; you are clearly an expert in anthropic reasoning, as I can see from your posts.
why are you confident that the way I’d fill the bags is not “entangled with the actual causal process that filled these bags in a general case?”
Most ways of reasoning are not entangled with most causal processes. When we do not have much reason to think that a particular way of reasoning is entangled, we don’t expect it to be. It’s possible to simply guess correctly, but it’s not probable, and guessing is not a way to systematically arrive at the truth.
It seems likely that my sensibilities reflect at least in some manner the sensibilities of my creator, if such a creator exists.
Even if it’s true, how could you know that it’s true? Where does this “seeming” come from? Why do you think it’s more likely that a creator would imprint their own sensibilities in you rather than literally any other possibility?
If you are in a simulation, you are trying to speculate about the reality outside the simulation based on information from inside the simulation. None of this information is particularly trustworthy unless you already know for a fact that the properties of the simulation represent the properties of base reality.
my argument still works if we only consider simulations in which I’m the only human and I’m distinct (on my aforementioned axis) from other human-seeming entities.
Have you heard about the Follow-The-Improbability game?

I recommend you read the linked post and think for a couple of minutes about how it applies to your comment before reading the rest of my answer. Try to trace the flow of improbability yourself and understand why the total value doesn’t decrease when we consider only a specific type of simulation.
So.
You can indeed consider only a specific type of simulation. But if you don’t have actual evidence that would justify prioritizing this hypothesis over all the others, the overall improbability stays the same; you just pass the buck to other factors.
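To see the buck-passing explicitly, here is a minimal sketch (the prior on being in a simulation and the fraction of the special sub-type are invented assumptions; only the 0.5 and 0.0001 figures come from this thread). Whether you lump all simulations together or carve out the sub-type in which the observation is guaranteed, the posterior on being in a simulation comes out the same: the likelihood the sub-type gains is exactly offset by its smaller prior.

```python
# Invented numbers for illustration; only 0.5 and 0.0001 are from the thread.
p_sim = 0.1      # assumed prior: I'm in some simulation
p_special = 0.2  # assumed fraction of simulations where I'm the only
                 # human and distinct, so P(distinct | special) = 1
p_distinct_other_sim = 0.5   # generic simulations (thread's figure)
p_distinct_no_sim = 0.0001   # Pr(I'm distinct | not in a sim) (thread's figure)

# Lumped: one "simulation" hypothesis with a mixture likelihood.
lik_sim = p_special * 1.0 + (1 - p_special) * p_distinct_other_sim
post_lumped = (p_sim * lik_sim) / (p_sim * lik_sim + (1 - p_sim) * p_distinct_no_sim)

# Carved out: the special sub-type gets likelihood 1,
# but its prior shrinks to p_sim * p_special.
num_special = p_sim * p_special * 1.0
num_other = p_sim * (1 - p_special) * p_distinct_other_sim
num_real = (1 - p_sim) * p_distinct_no_sim
post_carved = (num_special + num_other) / (num_special + num_other + num_real)

print(post_lumped, post_carved)  # same value (up to float rounding):
# the improbability moved into the sub-type's prior, not out of the total
```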
Consider Problem 2 once again.
You can reason conditionally on the assumption that all the balls in the blue bag are blue while the balls in the grey bag have random colors. That would give you a very strong update in favor of the blue bag… conditionally on your assumption being true.

The prior probability of this assumption being true is very low. It is low in exact proportion to how much you updated in favor of the blue bag conditionally on it, so that when you calculate the total probability, it stays the same.

Only when you have observed actual evidence in favor of your assumption does the improbability go somewhere. And the more improbable the observation you got, the more improbability is removed.
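For concreteness, here is a minimal sketch of that bookkeeping. All the numbers (k colors, the prior on the assumption, equal priors on the bags) are invented for illustration rather than taken from the original Problem 2. Conditional on the assumption A, the update toward the blue bag is huge, but A carries a proportionally small posterior weight, and the decomposed total matches the direct calculation.

```python
# Invented parameters, not those of the original Problem 2.
# A = "the blue bag holds only blue balls; the grey bag holds random
# colors". Under not-A, both bags hold random colors.
k = 10           # assumed number of possible colors
p_A = 0.01       # assumed prior of assumption A
prior_bag = 0.5  # each bag equally likely a priori

# Likelihoods of drawing a blue ball from each bag:
lik_blue_bag = p_A * 1.0 + (1 - p_A) * (1 / k)
lik_grey_bag = 1 / k  # random colors whether or not A holds

# Direct posterior that the blue ball came from the blue bag:
evidence = prior_bag * lik_blue_bag + prior_bag * lik_grey_bag
direct = prior_bag * lik_blue_bag / evidence

# Same posterior, decomposed through A:
#   P(blue bag | blue) = P(blue bag | blue, A) * P(A | blue)
#                      + P(blue bag | blue, ~A) * P(~A | blue)
p_A_given_blue = p_A * (prior_bag * 1.0 + prior_bag / k) / evidence
post_given_A = 1.0 / (1.0 + 1 / k)  # strong update: k / (k + 1)
post_given_notA = 0.5               # no update at all
decomposed = post_given_A * p_A_given_blue + post_given_notA * (1 - p_A_given_blue)

print(direct, decomposed)  # equal: the huge conditional update is
# weighted by A's small posterior, so the total barely moves from 0.5
```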
There is no free energy in the engine of cognition.