Let’s consider a few propositions:
1. There is enough cumulative early filtration that very few civilizations develop, with less than 1 in expectation in a region like our past light-cone.
2. Interstellar travel is impossible.
3. Some civilizations have expanded, but have not engaged in mega-scale engineering that we could see or in colonization that would have pre-empted our existence, and they enforce these rules on dissenters.
4. Civilizations very reliably wipe themselves out before they can colonize.
5. Civilizations very reliably choose not to expand at all.
#1-3 account for the Great Filter directly, and whether biological beings make AI they are happy with is irrelevant. For #4 and #5, what difference does it make whether biological beings make ‘FAI’ that helps them or ‘UFAI’ that kills them before going about its business? Either way the civilization (biological, machine, or both) could still wipe itself out or not (AIs could nuke each other out of existence too), and send out colonizers or not.
Unless there is some argument that ‘UFAI’ is much less likely to wipe out civilization (including itself), or much more likely to send out colonizers, how do the odds of alien ‘FAI’ vs ‘UFAI’ matter for explaining the Great Filter any more than whether aliens have scales or feathers? Either way they could produce visible signs or colonize Earth.
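(A minimal numerical sketch of that point, with invented numbers rather than anything from the thread: if the chance that an expanding alien civilization ends up visible to us is about the same whether its builders got ‘FAI’ or ‘UFAI’, then the silence we observe is equally likely under both, and the FAI/UFAI question does no work in explaining the Filter.)

```python
# Toy numbers, assumed purely for illustration.
p_visible_given_fai = 0.9    # an FAI-led civilization expands and becomes visible
p_visible_given_ufai = 0.9   # a paperclipper-led civilization does too

# Likelihood of what we actually observe (silence) under each branch:
p_silence_given_fai = 1.0 - p_visible_given_fai
p_silence_given_ufai = 1.0 - p_visible_given_ufai

# Bayes factor for "aliens build FAI" vs "aliens build UFAI" given silence.
# 1.0 means the silence tells us nothing about which kind they build, and
# neither branch explains the silence on its own.
print(p_silence_given_fai / p_silence_given_ufai)  # -> 1.0
```

The distinction would only matter if those two visibility probabilities came apart, which is exactly the “unless there is some argument” clause above.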
Yes, #1 is equivalent to an early filter.
#2 would be somewhat surprising, since there’s no physical law that disallows it.
#3 comes close to theology, and would imply low AI risk, since such entities would probably not allow a potentially dangerous AI to exist within any area they control.
#4 is sort of a re-phrasing of #1.
#5 is possible, but implies some strong reason why many different civilizations would all reliably make the same choice.
Do you mean that an alien FAI may look very much like a UFAI to us? If so, I agree.
#1 is an early filter, meaning one before our current state; #4 would be around or after our current state.
Not in the sense of harming us. For the Fermi paradox visible benevolent aliens are as inconsistent with our observations as murderous Berserkers.
I’m trying to get you to explain why you think a belief that “AI is a significant risk” would change our credence in any of #1-5, compared to not believing that.
Ah, I see.
OK, combinations. I’m treating #1-5 as mutually exclusive, because I don’t want to mess around with too many scenarios, and pairing each with either high or low AI risk.
For the high-AI-risk case I’m assuming a paperclipper as a reasonable example of a doomsday AI scenario.
1-high: We’d expect nothing visible.
1-low: We’d expect nothing visible.
2-high: This comes down to “how impossible?” Impossible for squishy meatbags, or impossible even for an AI with a primary goal that implies spreading? We’d still expect to see something weird as entire solar systems are engineered.
2-low: We’d expect nothing visible.
3-high: We’d expect nothing visible.
3-low: We’d expect nothing visible.
4-high: Implies something much more immediately deadly than AI risk, and that is what we should be devoting our resources to avoiding.
4-low: We’d expect nothing visible.
5-high: We’d still expect to see the universe being converted into paperclips by someone who screwed up.
5-low: We’d expect nothing visible.
OK, fair point; there are a couple more options implied:
a: an early filter,
b: low AI risk,
c: wizards already in charge who enforce low AI risk,
d: AI risk being far less important than some other really horrible, soon-to-come risk.
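(To make the enumeration above concrete, here is a small illustrative sketch: a flat prior over the ten scenario/AI-risk combinations, updated on the observation “nothing visible”. The scenario labels, the flat prior, and the zero/one likelihoods are all assumptions made for illustration; the only inputs taken from the list are which combinations predict visible signs, namely 2-high and 5-high.)

```python
# Flat prior over the ten (scenario, AI risk) combinations, updated on
# the observation "we see nothing visible". All numbers are toy values.

scenarios = ["1 early filter", "2 no travel", "3 quiet rulers",
             "4 self-destruction", "5 no expansion"]
ai_risk_levels = ["high", "low"]

# Combinations the list above says would produce visible signs.
predicts_visible = {("2 no travel", "high"), ("5 no expansion", "high")}

prior = 1.0 / (len(scenarios) * len(ai_risk_levels))  # 0.1 each

posterior = {}
for s in scenarios:
    for r in ai_risk_levels:
        # Likelihood of "nothing visible": 0 if the combination predicts visibility.
        likelihood = 0.0 if (s, r) in predicts_visible else 1.0
        posterior[(s, r)] = prior * likelihood

total = sum(posterior.values())
posterior = {combo: p / total for combo, p in posterior.items()}

p_high = sum(p for (s, r), p in posterior.items() if r == "high")
print(f"P(high AI risk | nothing visible) = {p_high:.3f}")  # 3/8 = 0.375
```

Under these toy numbers the silence only rules out 2-high and 5-high, nudging credence for high AI risk from 0.5 down to 0.375; it doesn’t, by itself, settle which of options a-d is doing the work.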