My personal guess would be that the great filter isn't a filter at all, but a great scatterer: different types of optimizers do not recognize each other as such, because their goals and appearances differ so widely and they are sparse in the vast space of possibilities.
See James Miller here. Sure, the space of possible value systems is vast, but I doubt that much less than (say) 0.1% of them would lead agents to try to take over the future light cone, so this could at most explain a small fraction of the filter: a factor of 1,000 is only a few orders of magnitude, a small share of the filter's total improbability on a log scale.