Really, it seems like any kind of superintelligent AI, friendly or unfriendly, would result in expanding intelligence throughout the universe. So perhaps a good statement would be: “If you believe the Great Filter is ahead of us, that implies that most civilizations get wiped out before achieving any kind of superintelligent AI, meaning that either superintelligent AI is very hard, or wiping out generally comes relatively early.” (It seems possible that we already got lucky with the Cold War… http://www.guardian.co.uk/commentisfree/2012/oct/27/vasili-arkhipov-stopped-nuclear-war)
Unless intelligent life is already extremely rare to begin with, that’s not nearly enough ‘luck’ to explain why everyone else is dead, including aliens who happen to be better at solving coordination problems (imagine science-fiction insectoid aliens).
Yeah, of course.