The SDO paper ignores Katja Grace’s result about the SIA doomsday argument (https://meteuphoric.com/2010/03/23/sia-doomsday-the-filter-is-ahead/), which tells us, in short, that if there are two types of universes, one where the Rare Earth hypothesis is true and another where civilizations are common but die out because of a Late Great Filter, then we are more likely to be in the second type of universe.
Grace’s argument becomes especially strong if we assume that all the variability comes from purely random variation of some parameters, because then we should expect to find ourselves in a universe where those parameters are optimised for producing many civilizations of our type.
In other words, we should take the parameters of the Drake equation at their maxima, not at the medians that SDO suggests.
Thus the Fermi paradox is far from solved.
I am not a fan of using untestable assumptions like SSA or SIA for anything beyond armchair handwaving, but then again, the SDO paper might be one of those itself.
Still, SDO used some anthropic approaches to improve their best guesses for some of the parameter distributions. I have not looked at what they did in any detail, though.
SSA and SIA aren’t exactly untestable. They both make predictions and can be evaluated against them; e.g. SIA predicts larger universes. It could even be said to predict an infinite universe with probability 1, insofar as it works with infinities at all.
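To make “predicts larger universes” concrete, here is a toy calculation (my own made-up observer counts, not anything from the paper): SIA weights each hypothesis by how many observers it contains.

```python
# Toy SIA update over two candidate universes with equal prior credence.
# The observer counts are arbitrary illustrative numbers.
priors = {"small universe": 0.5, "large universe": 0.5}
observers = {"small universe": 1e9, "large universe": 1e12}

# SIA: posterior is proportional to prior * number of observers.
weights = {h: priors[h] * observers[h] for h in priors}
total = sum(weights.values())
posterior = {h: w / total for h, w in weights.items()}

print(posterior)
# {'small universe': ~0.001, 'large universe': ~0.999}
# Let one hypothesis's observer count grow without bound and its SIA posterior
# goes to 1 -- hence "an infinite universe with probability 1".
```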
The anthropic bits in their paper look like SSA rather than SIA.
I am not sure how one could test SSA or SIA. What kind of experiment would need to be set up, or what data would need to be collected?
Well, SSA and SIA are statements about subjective probabilities. How do you test a statement about subjective probabilities? Let’s try an easier example: “this coin is biased toward heads”. You just flip it a few times and see. The more you flip, the more certain you become. So to test SSA vs SIA, we need to flip an “anthropic coin” repeatedly.
What could such an anthropic coin look like? We could set up an event after which, with 1:N odds, N^2 copies of you exist. Otherwise, with N:1 odds, nothing happens and you stay as one copy. Going through this experiment once is guaranteed to give you an N:1 update in favor of either SSA (if you didn’t get copied) or SIA (if you got copied). Then we can have everyone coming out of this experiment go through it again and again, keeping all memory of previous iterations. The population of copies will grow fast, but that’s okay.
Imagine yourself after a million iterations, finding out that the percentage of times you got copied agrees closely with SIA. You try a thousand more iterations and it still checks out. Would that be evidence for you that SIA is “true” in some sense? For me it would! It’s the same as with a regular coin: after seeing a lot of heads, you believe that it’s either biased toward heads or you’re having one hell of a coincidence.
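Here is a minimal simulation sketch of that setup (my own toy code; it assumes every copy flips its own independent 1:N coin each round, and it keeps N and the number of rounds small because the population of copies explodes):

```python
import random

N = 3          # copy event has prior odds 1:N, i.e. probability 1/(N+1) per round
ROUNDS = 7     # keep this small: the population grows roughly N-fold per round
TRIALS = 200   # repeat the whole multi-round experiment to smooth out the noise

p_copy = 1 / (N + 1)

def run_experiment():
    """Run ROUNDS iterations and return how often a typical surviving copy
    remembers being copied, averaged over all surviving copies."""
    population = [[]]  # each copy is just the list of outcomes it remembers
    for _ in range(ROUNDS):
        next_gen = []
        for history in population:
            if random.random() < p_copy:
                # copy event: N^2 copies exist, all remembering this round as True
                next_gen.extend(history + [True] for _ in range(N * N))
            else:
                next_gen.append(history + [False])  # nothing happened
        population = next_gen
    return sum(sum(h) for h in population) / (len(population) * ROUNDS)

average = sum(run_experiment() for _ in range(TRIALS)) / TRIALS
print("physical rate of the copy event, 1/(N+1):", 1 / (N + 1))
print("SIA's prediction for the remembered rate, N/(N+1):", N / (N + 1))
print("what a typical surviving copy actually remembers:", round(average, 3))
```

The physical coin only comes up “copy” about 1/(N+1) of the time, yet a typical surviving copy remembers being copied close to N/(N+1) of the time, which is the frequency SIA predicts.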
That way of thinking about anthropic updates can be formalized in UDT: after a few iterations it learns to act as if SSA or SIA were “true”. So I’m pretty sure it’s right.
So that’s a pure thought experiment then. There is no actual way to test those assumptions. Besides, in a universe where we are able to copy humans, SSA vs SIA would be the least interesting question to talk about :) I am more interested in “testing” that applies to this universe.
“For me”? I don’t understand. Presumably you mean some kind of objective truth? Not a personal truth? Or do you mean adhering to one of the two is useful for, I don’t know, navigating the world?
It would be nice to have a realistic example one could point at and say “Thinking in this way pays rent.”
I don’t know, do you like chocolate? If yes, does that fact pay rent? Our preferences about the happiness of observers vs. the number of observers are part of what needs to be encoded into an FAI’s utility function. So we need to figure them out, with thought experiments if we have to.
As to objective vs personal truth, I think anthropic probabilities aren’t much different from regular probabilities in that sense. Seeing a quantum coin come up heads half the time is the same kind of “personal truth” as getting anthropic evidence in the game I described. Either way there will be many copies of you seeing different things and you need to figure out the weighting.
When you repeat this experiment a bunch of times, I think an SSA advocate can choose their reference class to include all iterations of the experiment. This will result in them assigning credences similar to SIA’s, since a randomly chosen awakening from all iterations of the experiment is likely to be one of the new copies. So the update towards SIA won’t be that strong.
This way of choosing the reference class lets SSA avoid a lot of unintuitive results. But it’s kind of a symmetric way of avoiding unintuitive results, in that it might work even if the theory is false.
(Which I think it is.)
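For what it’s worth, a quick expected-awakenings count (my own arithmetic, reusing the toy N from the sketch above) shows why the broad reference class lands near SIA:

```python
# Expected awakenings produced per pre-round individual in a single round:
N = 3
p_copy = 1 / (N + 1)

just_copied = p_copy * N**2       # copy branch wakes up N^2 copies
not_copied = (1 - p_copy) * 1     # no-copy branch wakes up the single original

print(just_copied / (just_copied + not_copied))   # = N/(N+1) = 0.75
# A randomly sampled awakening from this broad reference class has just been
# copied with probability N/(N+1) -- the same number SIA gives.
```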
Wouldn’t this line of reasoning also make you convinced of the simulation hypothesis?
Sure, SIA assigns very high probability to us being in a simulation. That conclusion isn’t necessarily absurd, though I think anthropic decision theory (https://arxiv.org/abs/1110.6437) with aggregative ethics is a better way to think about it, and it yields similar conclusions. Brian Tomasik has an excellent article about the implications: https://foundational-research.org/how-the-simulation-argument-dampens-future-fanaticism
Wouldn’t you *also* fall prey to Pascal’s Mugging, assigning ever larger weight to more complicated hypotheses that posit absurd numbers of copies of you? If you are just trying to calculate your best action, you still go for whatever the most optimized mugger asks for. Which needn’t converge.
I’m not sure. The simplest way that more copies of me could exist is that the universe is larger, which doesn’t imply any crazy actions, except possibly betting that the universe is large/infinite. That isn’t a huge bullet to bite. From there you could probably get even more weight if you thought that copies of you were more densely distributed, or something like that, but I’m not sure what actions that would imply.
Speculation: the hypothesis that future civilisations spend all their resources simulating copies of you gets a large anthropic update. However, if you contrast it with the hypothesis that they simulate all possible humans, and your prior probability that they would single out you specifically is inversely proportional to the number of possible humans (by some principle of indifference), then the anthropic boost is cancelled by the prior penalty, and the comparison comes down to the fact that it seems more interesting to simulate all humans than to simulate one of them over and over again.
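A rough sanity check of that cancellation, with made-up numbers just to show the structure of the update:

```python
# H1 = "they simulate only you, over and over";
# H2 = "they simulate every possible human once, with the same total resources".
HUMANS = 10**6          # made-up number of possible humans
SIMS = 10**6            # made-up total number of simulations run either way

# Indifference prior: the chance they would single *you* out is ~1/HUMANS,
# times a factor for how interesting each project seems a priori.
plausibility_h1 = 0.01  # simulating one person repeatedly seems less interesting
plausibility_h2 = 1.0
prior_h1 = plausibility_h1 / HUMANS
prior_h2 = plausibility_h2

# SIA-style update: proportional to the number of copies of *you* that exist.
copies_of_you_h1 = SIMS
copies_of_you_h2 = SIMS / HUMANS

odds_h1_vs_h2 = (prior_h1 * copies_of_you_h1) / (prior_h2 * copies_of_you_h2)
print(odds_h1_vs_h2)    # = plausibility_h1 / plausibility_h2 = 0.01
# The 1/HUMANS prior penalty and the HUMANS-fold anthropic boost cancel,
# leaving only the "which project is more interesting" factor.
```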
Do you have any ideas for weird hypotheses that imply some specific actions?
Posit that physics allows a perpetuum mobile, and the resulting infinities make the expected-value calculation break down and cry, as is common. If we disregard unbounded hypotheses by fiat: also posit a doomsday clock on a timescale beyond the Towers of Hanoi, specified by when some Turing machine halts. This breaks the calculation unless your complexity-penalty assigner is uncomputable, perhaps even unspecifiable by any possible laws of physics.
Sure, there are lots of ways to break calculations. That’s true for any theory that’s trying to calculate expected value, though, so I can’t see how it’s particularly relevant for anthropics, unless we have reason to believe that any of these situations should warrant some special action. Using anthropic decision theory, you’re not even updating your probabilities based on the number of copies, so it really is only calculating expected value.
That isn’t true if potential value is bounded, which makes me sceptical that we should include a potentially unbounded term in how we weight hypotheses when we pick actions.