I’m not sure what you mean by “vanilla anthropics”. Both SSA and SIA are “simple object-level rules for assigning anthropic probabilities”. Vanilla anthropics seems to be vague enough that it doesn’t give an answer to the doomsday argument or the presumptuous philosopher problem.

On another note, if you assume that a nonzero percentage of the multiverse’s computation power is spent simulating arbitrary universes, with computation power in proportion to the probabilities of their laws of physics, then both SSA and SIA will end up giving you very similar predictions to Brian_Tomasik’s proposal, although I think they might be slightly different.
> I’m not sure what you mean by “vanilla anthropics”.
Am working on it—as a placeholder, for many problems, one can use Stuart Armstrong’s proposed algorithm of finding the best strategy according to a non-anthropic viewpoint that adds the utilities of different copies of you, and then doing what that strategy says.
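For concreteness, here is a minimal sketch of that recipe on a toy problem of my own choosing (the coin/copy setup, the payoffs, and N below are illustrative assumptions, not Armstrong’s): enumerate candidate strategies, score each by the outside-view expected utility with all copies’ utilities summed, and have every copy follow the winner.

```python
# Minimal sketch of the "pick the best strategy from a non-anthropic viewpoint,
# summing the utilities of all copies of you" recipe, on a toy problem:
# a fair coin is flipped; heads creates 1 copy of you, tails creates N copies,
# all with identical evidence. Each copy guesses the outcome; a correct guess
# is worth 1 util to that copy.

N = 100                       # copies created on tails (illustrative)
WORLDS = [("heads", 0.5, 1),  # (outcome, non-anthropic prior, number of copies)
          ("tails", 0.5, N)]

def total_expected_utility(policy: str) -> float:
    """Outside-view expected utility, with every copy's payoff added together."""
    eu = 0.0
    for outcome, prior, n_copies in WORLDS:
        payoff_per_copy = 1.0 if policy == outcome else 0.0
        eu += prior * n_copies * payoff_per_copy
    return eu

policies = ["heads", "tails"]
print({p: total_expected_utility(p) for p in policies})   # {'heads': 0.5, 'tails': 50.0}
print("every copy should guess:", max(policies, key=total_expected_utility))  # 'tails'
```

The point of the construction is that no copy ever conditions on “I exist”; any anthropic-looking behavior (favoring the many-copy world) falls out of the summed utilities.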
> Both SSA and SIA are “simple object-level rules for assigning anthropic probabilities”
Yup. Don’t trust them outside their respective ranges of validity.
> if you assume [stuff about the nature of the universe]
You will predict [consequences of those assumptions, including anthropic consequences]. However, before assuming [stuff about the universe], you should have [observational data supporting that stuff].
> Am working on it—as a placeholder, for many problems, one can use Stuart Armstrong’s proposed algorithm of finding the best strategy according to a non-anthropic viewpoint that adds the utilities of different copies of you, and then doing what that strategy says.
I think this essentially leads to SIA. Since you’re adding utilities over different copies of you, it follows that you care more about universes in which there are more copies of you. So your copies should behave as if they assign a higher probability to being in a universe that contains lots of copies.
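To spell out the arithmetic with the toy numbers from the sketch above (my own illustration, not anything from the original comment):

```latex
% Outside-view expected total utility (utilities summed over copies) of each
% policy in the 1-copy-on-heads vs. N-copies-on-tails toy problem:
\mathbb{E}[U(\text{guess tails})] = \tfrac{1}{2}\,N, \qquad
\mathbb{E}[U(\text{guess heads})] = \tfrac{1}{2}.
% More generally, a policy of buying a bet that pays 1 on tails at price c has
% summed expected utility \tfrac{1}{2}N(1-c) - \tfrac{1}{2}c > 0 exactly when
c < \frac{N}{N+1} = \Pr_{\mathrm{SIA}}(\text{tails} \mid \text{I exist}),
% so the copies end up betting at SIA odds: the prior reweighted by how many
% copies of you each world contains.
```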
> However, before assuming [stuff about the universe], you should have [observational data supporting that stuff].
It’s definitely not a completely justified assumption. But we do have evidence that the universe supports arbitrary computations, that it’s extremely large, and that some things are determined randomly, so it ends up running many different computations in parallel. This provides some evidence that, if there is a multiverse, it will have similar properties.
> I think this essentially leads to SIA. Since you’re adding utilities over different copies of you, it follows that you care more about universes in which there are more copies of you.
Of course, it’s slightly different from SIA, because SIA wants more copies of anyone, whether or not they’re you. If the proportion of individuals who are you remains constant across possibilities, then the two are equivalent.
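One way to write down that equivalence condition (my notation, just a sketch):

```latex
% SIA weights a world W by its total observer count N_obs(W); the utility-summing
% argument weights W by the number of copies of you, N_you(W). If you are a fixed
% fraction f of the observers in every world under consideration,
N_{\mathrm{you}}(W) = f\,N_{\mathrm{obs}}(W)
\;\Longrightarrow\;
\frac{P(W_1)\,N_{\mathrm{you}}(W_1)}{P(W_2)\,N_{\mathrm{you}}(W_2)}
= \frac{P(W_1)\,N_{\mathrm{obs}}(W_1)}{P(W_2)\,N_{\mathrm{obs}}(W_2)},
% so the two weightings give the same posterior ratios; they come apart
% when f varies across worlds.
```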
Elsewhere in my essay, I discuss a prudential argument (which I didn’t invent) for assuming there are lots of copies of you. Not sure if that’s the same as Armstrong’s proposal.
PSA essentially favors more copies of you per unit of spacetime / physics / computation / etc., as long as we understand “copy of you” to mean “an instance perceiving all the data you perceive right now” rather than just a copy of your body/brain in a different environment.
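A rough way to formalize that reading of PSA (my notation, and only a sketch of how I understand the rule):

```latex
% d_you(W): number of instances perceiving exactly your current data, per unit of
% spacetime / physics / computation that world W contains or runs.
\Pr(W \mid \text{your current data}) \;\propto\; P(W)\, d_{\mathrm{you}}(W)
% i.e. the prior over worlds is reweighted by the *density* of your exact
% observer-moment, rather than by a raw observer count (SIA) or by a
% reference-class share (SSA).
```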