If I understand correctly, this approach to anthropics strongly favours a simulation hypothesis: the universe is most likely densely packed with computing material (“computronium”) and much of the computational resource is dedicated to simulating beings like us. Further, it also supports a form of Doomsday Hypothesis: simulations mostly get switched off before they start to simulate lots of post-human people (who are not like us) and the resource is then assigned to running new simulations (back at a human level).
Have I misunderstood?
Yes, that’s right. Note that SIA also favors sim hypotheses, but it does so less strongly because it doesn’t care whether the sims are of Earth-like humans or of weirder creatures.
Here’s a note I wrote to myself yesterday:
Like SIA, my PSA anthropics favors the sim arg more strongly than normal anthropics does.
The sim arg works regardless of one’s anthropic theory because it requires only a principle of indifference over indistinguishable experiences. But its conclusion is a trilemma, so the upshot might instead be that humans go extinct or that post-humans don’t run early-seeming sims, rather than that we’re in a sim.
Given the existence of aliens and other universes, the ordinary sim arg pushes more strongly toward our being a sim: even if humans go extinct or never run sims, whichever civilizations out there do run lots of sims should include lots of sims of minds like ours, so we should expect to be in their sims.
PSA doesn’t even need aliens. It directly penalizes hypotheses that predict fewer copies of us in a given region of spacetime. Say we’re deciding between
H1: no sims of us
and
H2: 1 billion sims of us.
H1 would have a billion-fold bigger probability penalty than H2. Even if H2 started out being millions of times less probable than H1, it would end up being hundreds of times more probable.
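To make the arithmetic concrete, here is a toy calculation (my own sketch, not from the note above; the 5-million prior penalty is an assumed figure chosen to match “millions of times less probable”):

```python
# Toy PSA update (illustrative only, with an assumed prior): PSA weights each
# hypothesis by the number of copies of "us" it predicts in the region.

copies_h1 = 1                  # H1: no sims, just the one real copy of us
copies_h2 = 1_000_000_000      # H2: a billion sims of us
prior_penalty_h2 = 5_000_000   # assumption: H2 starts 5 million times less probable

# Posterior odds of H2 over H1 = (copy-count ratio) / (prior penalty).
posterior_odds_h2_vs_h1 = (copies_h2 / copies_h1) / prior_penalty_h2
print(posterior_odds_h2_vs_h1)  # 200.0 -- H2 ends up hundreds of times more probable
```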
Also note that even if we’re not in a sim, PSA, like SIA, yields Katja’s doomsday argument based on the Great Filter.
Either way it looks very unlikely there will be a far future, ignoring model uncertainty and unknown unknowns.
Upvoted for acknowledging a counterintuitive consequence, and “biting the bullet”.
One of the most striking things about anthropics is that (seemingly) whatever approach is taken, there are very weird conclusions. For example: Doomsday arguments, Simulation arguments, Boltzmann brains, or a priori certainties that the universe is infinite. Sometimes all at once.
Yes. :) The first paragraph here identifies at least one problem with every anthropic theory I’m aware of.
I had a look at this: the KCA (Kolmogorov Complexity) approach seems to match my own thoughts best.
I’m not convinced about the “George Washington” objection. It strikes me that a program which extracts George Washington as an observer from inside a wider program “u” (modelling the universe) wouldn’t be significantly shorter than a program which extracts any other human observer living at about the same time. Or indeed, any other animal meeting some crude definition of an observer.
Searching for features of human interest (like “leader of a nation”) is likely to be pretty complicated and require a long program. To reduce the program size as much as possible, it ought to just scan for physical quantities which are easy to specify but very diagnostic of an observer. For example, scan for a physical mass with persistent low entropy compared to its surroundings, persistent matter and energy throughput (low entropy in, high entropy out, maintaining its own low entropy state), a large number of internally structured electrical discharges, and high correlation between said discharges and events surrounding said mass. The program then builds a long list of such “observers” encountered while stepping through u, and simply picks out the nth entry on the list, giving the “nth” observer complexity about K(n). Unless George Washington happened to be a very special n (why would he be?) he would be no simpler to find than anyone else.
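For concreteness, here is a minimal Python sketch of that extraction scheme (my own illustration; `is_observer` is a hypothetical placeholder for the cheap physical tests described above, not a real detector):

```python
# Sketch of the proposed extraction program: step through the output of the
# universe-program u, count anything passing a crude physical observer test,
# and return the nth hit. Only n (about K(n) bits) is specific to the target.

def extract_nth_observer(u_states, is_observer, n):
    """u_states: iterable of regions/states produced by running u.
    is_observer: cheap physical predicate (persistent low entropy, matter and
    energy throughput, structured electrical discharges, etc.).
    n: 1-based index of the observer to pick out."""
    count = 0
    for state in u_states:
        if is_observer(state):
            count += 1
            if count == n:
                return state
    return None  # fewer than n observers encountered
```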
Nice point. :)
That said, your example suggests a different difficulty: people who happen to be special numbers n get higher weight for apparently no reason. Maybe one way to address this is to note that the number n someone gets is relative to (1) how the list is enumerated and (2) which universal Turing machine is used for KC in the first place, and averaging over these arbitrary details might blur the specialness of, say, the 1-billionth observer under any particular coding scheme. Still, I doubt the KCs of different people would be exactly equal even after such adjustments.
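Here is a rough Python illustration of the averaging idea (my own sketch, under heavy assumptions; it uses bit-length as a crude, computable stand-in for K(n), which is of course uncomputable):

```python
# Toy demonstration: the same observer's index n depends on the enumeration
# order, so averaging a K(n)-proxy over many arbitrary enumerations blurs the
# advantage of landing on a "round" number in any one particular scheme.

import random

def k_proxy(n):
    # Crude stand-in for K(n): the number of bits needed to write n down.
    return n.bit_length()

def avg_index_complexity(observer_id, population, trials=200, seed=0):
    """Average the K(n)-proxy of one observer's list index over many
    arbitrary enumeration orders of the same population."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        order = list(range(population))
        rng.shuffle(order)                 # a different arbitrary enumeration
        n = order.index(observer_id) + 1   # this observer's index in that scheme
        total += k_proxy(n)
    return total / trials

# Observer 4096 looks "simple" (n = 2**12) under one fixed enumeration, but
# averaged over enumerations its proxy complexity matches a generic observer's.
print(avg_index_complexity(4096, population=10_000))
print(avg_index_complexity(6789, population=10_000))
```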