I’m not sure which part of the argument you are referring to. Are you talking about estimates that most of the Great Filter is in front of us? If so, I’d be inclined to tentatively agree. (Although I’ve been updating in the direction of more filtration in front for a variety of reasons.) I was talking about the observation that we shouldn’t expect AI to be a substantial fraction of the Great Filter. Katja’s observation in that context is simply a comment about what our light cone looks like.
Are you talking about estimates that most of the Great Filter is in front of us? If so, I’d be inclined to tentatively agree.
OK.
I was talking about the observation that we shouldn’t expect AI to be a substantial fraction of the Great Filter.
Sure. I was saying that this alone (sans SIA) is much less powerful if we assign much weight to early filters. Say (assuming we’re not in a simulation) you assigned 20% probability to intelligence being common and visible (this inevitably invokes observation selection problems, since colonization could preempt human evolution), 5% to intelligence being common but invisible (environmentalist Von Neumann probes enforce low visibility, or maybe the interstellar medium shreds even slow starships), 5% to intelligence arising often and self-destructing, and 70% to intelligence being rare. Then you look outside, rule out “common and visible,” and update to 6.25% probability of invisible aliens, 6.25% probability of convergent self-destruction in a fertile universe, and 87.5% probability that intelligence is rare. With the SIA (assuming we’re not in a simulation, even though the SIA would make us confident that we were) we would also chop off the “intelligence is rare” possibility, and wind up with 50% probability of invisible aliens and 50% probability of convergent self-destruction.
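A minimal sketch of the arithmetic behind those numbers, in Python. The hypothesis labels and the renormalize helper are just illustrative names for this toy calculation, not anything from the original discussion; the priors are the ones assumed above.

```python
# Toy Bayesian update using the illustrative priors from the comment above.
priors = {
    "common_and_visible": 0.20,
    "common_but_invisible": 0.05,
    "self_destructing": 0.05,
    "rare": 0.70,
}

def renormalize(hypotheses, ruled_out):
    """Drop ruled-out hypotheses and renormalize the remaining probabilities."""
    kept = {h: p for h, p in hypotheses.items() if h not in ruled_out}
    total = sum(kept.values())
    return {h: p / total for h, p in kept.items()}

# Looking outside and seeing no one rules out "common and visible":
posterior = renormalize(priors, {"common_and_visible"})
print(posterior)  # invisible 0.0625, self-destructing 0.0625, rare 0.875

# On this toy treatment, adding SIA also chops off "rare":
posterior_sia = renormalize(posterior, {"rare"})
print(posterior_sia)  # invisible 0.5, self-destructing 0.5
```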
And, as Katja agrees, SIA would make us very confident that AI or similar technologies will allow the production of vast numbers of simulations with our experiences, i.e. if we bought SIA we should think that we were simulations and that in the “outside world” AI was feasible, but we should not draw strong conclusions about late or early filters (within many orders of magnitude) in the outside world.
I agree with most of this. The relevant point is about AI in particular. More specifically, if an AGI is likely to start expanding to control its light cone at a substantial fraction of the speed of light, and this is a major part of the Filter, then we’d expect to see it. In contrast, something like nanotech that destroys a civilization on its home planet would be hard for distant observers to notice. Anthropic approaches (both SIA and SSA) argue for large amounts of filtration in front. The point is that, if that’s correct, observation suggests that AGI isn’t a major part of that filtration.
An example might help illustrate the point. Imagine that someone is worried that civilizations are generally filtered out by running some sort of physics experiment that causes a false vacuum collapse expanding at well below the speed of light (say c/10,000). We can discount the likelihood of such an event because basic astronomy would show us the results: the expanding regions left by civilizations that wiped themselves out would visibly affect the stars near them.
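A rough back-of-the-envelope sketch of why even such a slow bubble would be conspicuous. The one-billion-year head start is an assumed figure chosen purely for illustration, not something from the discussion.

```python
# Scale check for the slow false-vacuum-collapse example above.
# Assumes a hypothetical head start of one billion years for the dead civilization.
speed_fraction_of_c = 1 / 10_000       # bubble expands at c/10,000
head_start_years = 1_000_000_000       # illustrative elapsed time

radius_light_years = speed_fraction_of_c * head_start_years
print(radius_light_years)  # 100,000 light-years -- roughly the diameter of the Milky Way
```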