If x-risks have destroyed fewer than 100 civilizations in our galaxy, you are probably right. If they have destroyed more than, say, 10,000, my suggestion probably is right.
While I agree with your logic, my point should be stated as follows:
100 risks all happen together in a short time period and interact non-linearly with each other. This makes the task of preventing them incalculably complex, and no civilization is able to solve that complexity in such a short time. If the risks were separated in time and not interacting, your point would be more valid.
This complexity results in trade-offs like: we need safe AI as soon as possible to prevent nano- and bio-risks, but to create really safe AI we need as much time as possible. There are many such trade-offs, and there are also higher levels of problem complexity beyond trade-offs.
For civilizations with no singleton I agree with your logic. But there might be, or have been, civilizations where one king is the absolute leader and is able to block research into whatever technologies he wants. What kind of great filter destroys an industrialized planet run by King Lee Kuan Yew who has a 1000 (Earth) year lifespan?
I am not sure that exactly this is the Fermi paradox solution. Personally I am more inclined to Rare Earth solutions.
But I could suggest that maybe the complexity-of-risks problem is so hard that no simple measures like banning most technologies will work. The weak point is that one von Neumann probe is enough to colonise the entire visible universe. So there should be something which prevents civilizations from creating von Neumann probes.
While I hope that Rare Earth is right, it implies that we are special. It seems far more likely that we are common.
The Doomsday argument applied to the Fermi paradox, developed by Katja Grace, claims that the Great Filter is more likely to be ahead of us.
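For readers unfamiliar with that argument, here is a minimal sketch of the anthropic (SIA-style) update it relies on. The two-hypothesis setup and the ratio N_L/N_E = 1000 are illustrative assumptions for this sketch, not figures from Grace's actual analysis:

```latex
% Sketch of an SIA-style update (illustrative setup, not Grace's exact model).
% E: the Great Filter lies behind us (early); L: it lies ahead of us (late).
% N_E, N_L: expected number of observers at our current stage under each hypothesis.
% SIA weights each hypothesis by how many observers like us it predicts:
\begin{align*}
  P(L \mid \text{observers like us exist})
    &= \frac{P(L)\, N_L}{P(L)\, N_L + P(E)\, N_E} \\
  \intertext{With equal priors $P(L)=P(E)=\tfrac{1}{2}$ and an assumed ratio $N_L/N_E = 1000$:}
    &= \frac{1000}{1000 + 1} \approx 0.999.
\end{align*}
% Almost all posterior probability mass shifts onto a late (future) Great Filter.
```

The point is only that a late filter implies many more civilizations reaching our stage, so observer-weighted reasoning pushes the posterior toward the filter being ahead of us.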