A,B are worlds where the filter happens before life, and X,Y are where it happens before intelligence. You aren’t including any worlds where the filter happens after where we are, so of course you don’t see the main effect: concluding that the filter is more likely to happen after now than before now. You say you are introducing inference based on time and not just development level, but I don’t see you using that in your example.
That’s why you don’t see any worlds where the filter happens after where we are—these worlds are not in our reference class (to use outdated SSA terminology). We can’t use SIA on them.
There still is a way of combining SIA with the filter argument; it goes something like:
1) Use SIA on the present time to show there are lots of civilizations at our level around now.
2) Use a distribution on possible universes to argue that 1) implies there were also lots of civilizations at our level earlier on.
3) From 2), argue that the filter is in our future.
The problem is 2). There are universes in which there is no great filter, but whose probability is boosted by SIA—say, slow-start simultaneous worlds, where it takes several billion years for life to get going, but life is never filtered at all, and now the galaxy is filled with civilizations at approximately our level. This world is very unlikely—but SIA boosts its probability!
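To make the reweighting concrete, here is a toy sketch of how SIA operates on step 1). All world-labels, priors, and civilization counts below are made up for illustration: SIA multiplies each world's prior by its number of observers in our situation (civilizations at our level, now) and renormalizes, so even a low-prior "slow-start, no filter" world gains probability.

```python
# Toy SIA update over hypothetical worlds (all numbers illustrative).
# SIA weights each world's prior by the number of civilizations at our
# level existing now, then renormalizes.

priors = {
    "early_filter": 0.45,          # filter before life: few civilizations now
    "late_filter": 0.45,           # filter ahead of us: many civilizations now
    "slow_start_no_filter": 0.10,  # unlikely world, but packed with peers
}
civs_now = {
    "early_filter": 1,
    "late_filter": 1000,
    "slow_start_no_filter": 1000,
}

weights = {w: priors[w] * civs_now[w] for w in priors}
total = sum(weights.values())
posterior = {w: weights[w] / total for w in weights}

for w, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{w}: prior {priors[w]:.2f} -> posterior {p:.3f}")
```

With these (invented) numbers the no-filter world roughly doubles in probability while the early-filter world collapses, which is the boost described above; whether the late-filter worlds still dominate depends entirely on the prior, which is exactly the problem with step 2).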
So until we have some sensible distributions over possible worlds with filters, we can’t assert SIA+great filter ⇒ DOOM. I feel it’s intuitively likely that SIA does increase doom somewhat, but that’s not a proof.
This dispute about 2) seems a little desperate to me as a way out of doom.
Surely there is high prior probability for universes whose density of civilizations does NOT rise dramatically at a crucial time close to our own (a dramatic rise would mean that at around our time t0 ~ 13 billion years the density of civilizations at our level is high, whereas at times only very slightly before t0, in cosmological terms, it is very low)? If you assume that, with high probability, lots of civilizations now implies lots of civilizations a million years ago (but still none of them expanded), then we do get a Doomish conclusion.
Incidentally, another approach is to argue that SIA favours “Big Worlds” (ones containing a spatially-infinite universe, or infinitely many finite universes). But then, among the Big Worlds, SIA doesn’t further favour a high density of civilizations at our level (since all such Big Worlds have infinitely many civilizations anyway, SIA doesn’t “care” whether they appear on the order of once per star system, or once per Galaxy, or less than once per Hubble volume). This approach removes Katja’s particular argument to a “late” filter, but unfortunately it creates another argument instead, since when we now apply SSA we get the usual Doomsday shift—see http://lesswrong.com/lw/9ma/selfindication_assumption_still_doomed/
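For concreteness, the SSA Doomsday shift mentioned at the end can be sketched with made-up figures (the hypothesis names and numbers below are illustrative, not from the discussion): under SSA, the likelihood of having your particular birth rank r is 1/N in a world with N observers total (for r ≤ N), so equal priors shift toward the smaller-N world.

```python
# Toy SSA / Doomsday shift (hypothetical numbers).
# Under SSA, the likelihood of observing birth rank r is 1/N in a world
# containing N observers total (for r <= N), so equal priors shift
# toward the smaller-N ("doom soon") hypothesis.

r = 100e9  # rough birth rank among humans so far (illustrative)
hypotheses = {"doom_soon": 200e9, "doom_late": 200e12}  # total humans ever
prior = {h: 0.5 for h in hypotheses}

likelihood = {h: (1.0 / n if r <= n else 0.0) for h, n in hypotheses.items()}
unnorm = {h: prior[h] * likelihood[h] for h in hypotheses}
z = sum(unnorm.values())
post = {h: unnorm[h] / z for h in hypotheses}
print(post)  # "doom_soon" ends up 1000x more probable than "doom_late"
```

The factor-of-1000 shift is just the ratio of the two hypothetical N values; the point is only the direction of the update, which is what makes the Big Worlds route end up Doomish as well.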
Broadly I’ve now looked at a number of versions of anthropic reasoning: SSA with and without SIA, variations of reference class à la Bostrom, and attempts to avoid anthropic reasoning completely (such as “full non-indexical conditioning”). Whichever way I cut it, I’m getting a “Doom” conclusion. I’m thinking of putting together a main post on this at some point.
Katja’s example clearly included worlds with a filter past our level, and I see nothing wrong with her example.
Her example takes no account of the time period at which each civilization reaches each stage. For her example to work, she’d have to come up with a model where civilizations appear and mature at different time intervals, then apply SIA to civilizations at our level and in our current time period, and then show that this implies a late great filter.
It can be done, quite easily, but there are also models where SIA implies an early filter, or none at all. SIA boosts some worlds, and some of the worlds boosted by SIA have late filters, others have early filters.
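A minimal sketch of the last point, with entirely hypothetical worlds and numbers: the SIA boost tracks the number of present-day civilizations at our level, which depends on universe size as well as filter location, so a large universe with an early filter can out-boost a small universe with a late filter.

```python
# Toy model (all numbers hypothetical): SIA boosts worlds in proportion
# to how many civilizations at our level exist now. That count depends
# on universe size as well as on where the filter sits, so an
# early-filter world can receive the larger boost.

worlds = {
    # name: (prior, number of civilizations at our level right now)
    "small_late_filter": (0.5, 100),     # late filter, modest universe
    "large_early_filter": (0.5, 10000),  # early filter, but vastly more stars
}

unnorm = {w: p * n for w, (p, n) in worlds.items()}
z = sum(unnorm.values())
posterior = {w: unnorm[w] / z for w in unnorm}
print(posterior)
```

Here the early-filter world ends up overwhelmingly favoured despite equal priors, which is why SIA alone, without a defended distribution over worlds, does not settle where the filter lies.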
The argument is not yet complete.