If one takes the Fermi paradox seriously and adopts even a not-too-strong Copernican principle, one concludes that other species were likely to come up with the notion of the Fermi paradox themselves, and that this did not help them at all.
How is this different from reasoning more generally? I.e., “one concludes that we will come up with generally the same ideas as other species, and since we infer those ideas didn’t help most or all of the other species, nothing we do is likely to help us either.” Or in simpler words: we infer that the Great Filter is really Great.
Different potentially spacefaring, expansionist lifeforms, arising from completely different evolutionary histories, will differ in an awful lot of ways on average. The subset of them that use observation and rational deduction will observe the Fermi paradox and predict a Great Filter just as we do, and on natural-selection grounds at least some would try to avoid it; yet we see none around who have succeeded. That’s my reading of your argument.
But if we allow that they use observation and rational deduction to plan actions, i.e. that they are intelligent in a way comparable to ours, then they are also likely to be similar to us in the other consequences of such intelligence. Should we conclude that no product of a generalized capacity for intelligence is likely to save us from the Great Filter, and that we should instead rely on uniquely human advantages less likely to evolve twice, such as our social-political behaviors?
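To make the selection effect explicit, here is a minimal sketch of the inference in probability notation (the formalization and the symbols $S$, $R$, $U$ are my own, not anything established in this thread). Let $S$ mean a civilization passes the Great Filter, and $R$ mean it reasons by observation and rational deduction as we do. The silence of the sky suggests that among $R$-civilizations, essentially none have $S$:

$$P(S \mid R) \approx \frac{\#\{\text{civilizations with both } R \text{ and } S\}}{\#\{\text{civilizations with } R\}} \approx 0.$$

Since we ourselves instantiate $R$, our estimate of our own survival should track $P(S \mid R)$ unless we can condition on some trait $U$ that is rare among $R$-civilizations and hope that

$$P(S \mid R, U) \gg P(S \mid R).$$

That is the formal content of the suggestion to lean on uniquely human advantages unlikely to evolve twice.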
I’m not sure how to respond. Your comment is potentially the most enlightening and disturbing thing I’ve seen on LW for a while.