The anthropic argument seems to make sense.
The more general version would be: we’re observing from what would be a very early point in history if sentience were successful at spreading sentience; therefore, it probably isn’t. The remainder of history might have very few observers, such as the singleton misaligned superintelligences we and others will spawn. This form doesn’t seem to depend on FTL.
Yuck. But I wouldn’t want to remain willfully ignorant of the arguments, so thanks!
Hopefully I’m misunderstanding something about the existing thought on this issue. Corrections are more than welcome.
Your scenario does not depend on FTL.
However, its interaction with the Doomsday Argument is more complicated and potentially weaker (assuming you accept the Doomsday Argument at all). In this scenario, P(we live in a Kardashev ~0.85 civilisation) depends strongly on the per-civilisation P(Doom before Kardashev 2). If the latter is importantly different from 1 (even 0.9999), then the vast majority of people still live in K2 civilisations, and our being in a Kardashev ~0.85 civilisation remains very unlikely (though less unlikely than it would be in No Doom scenarios where those K2+ civilisations last X trillion years and spread further).
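To put rough numbers on the shape of that claim (the per-civilisation observer counts below are made-up assumptions, purely to illustrate the arithmetic):

```python
# Toy observer-count arithmetic for the point above.
# All population figures are illustrative assumptions, not estimates.

def fraction_of_observers_pre_k2(p_doom_before_k2, pre_k2_lives, k2_lives):
    """Expected share of observer-lives spent in pre-K2 civilisations,
    assuming every civilisation has a pre-K2 phase and reaches a K2 phase
    only with probability (1 - p_doom_before_k2)."""
    expected_k2_lives = (1 - p_doom_before_k2) * k2_lives
    return pre_k2_lives / (pre_k2_lives + expected_k2_lives)

PRE_K2_LIVES = 1e11  # assumed observer-lives per civilisation before K2 (order of humans-so-far)
K2_LIVES = 1e20      # assumed observer-lives for a civilisation that reaches K2 and persists a while

for p_doom in (0.9, 0.999, 0.9999):
    share = fraction_of_observers_pre_k2(p_doom, PRE_K2_LIVES, K2_LIVES)
    print(f"P(Doom before K2) = {p_doom}: share of observers who are pre-K2 ~ {share:.1e}")

# Even at 0.9999, pre-K2 observers are only ~1 in 100,000 under these numbers,
# so finding ourselves pre-K2 stays surprising.
```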
I’m not sure how sane it is for me to be talking about P(P(Doom)), even in this sense (and frankly my entire original argument stinks of Lovecraft, so I’m not sure how sane I am in general), but in my estimation P(P(Doom before Kardashev 2) > 0.9999) < P(FTL is possible). For the former, AI would have to be really easy to invent and co-ordination to not build it would have to be essentially impossible. Whether or not a Butlerian Jihad can work for real-life humanity, it doesn’t seem like it would take much difference in a civilisation’s risk curves for one to definitely happen; and while we happen to have reached the point of being able to build AI before being able to build a Dyson Sphere, that ordering doesn’t seem to be a necessary path. I can buy that P(AI Doom before Kardashev 3) could be extremely high in no-FTL worlds; that would only require that alignment is impossible, since reaching Kardashev 3 at STL takes millennia, and co-ordination among chaotic beings is very hard at interstellar scales in a way it’s not within a star system. But assured doom before K2 seems very weird. And FTL doesn’t seem that unlikely to me: time travel is P = ϵ, since we don’t see time travellers, but I know of one proposed mechanism (the quantum vacuum misbehaving upon creation of a closed-timelike-curve system) that might ban time travel specifically and thus break the “FTL implies time travel” implication.
It also gets weird when you start talking about the chance that a given observer will observe the Fermi Paradox or not. My intuitions might be failing me, but it seems like a lot, possibly most, of the people in the “P(Doom before K2) < 0.9999, fate of universe is STL paperclip nebulae” world would see aliens, because K2 civilisations can be seen from further away and can themselves see much further: an Oort Cloud interferometer could detect 2000BC humanity anywhere in the Local Group via the Pyramids and land-use patterns, and could detect 2000AD humanity even further away via anomalous night-time illumination.
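As a quick sanity check on that interferometer claim, here’s a back-of-envelope diffraction-limit calculation (the baseline, wavelength, and distance are rough assumed numbers):

```python
# Back-of-envelope diffraction limit for an optical interferometer with an
# Oort-Cloud-scale baseline observing a target at Local Group distances.
# All numbers are rough assumptions for illustration.

LIGHT_YEAR_M = 9.46e15
WAVELENGTH_M = 5e-7                      # visible light, ~500 nm
BASELINE_M = 1.0 * LIGHT_YEAR_M          # assumed baseline roughly spanning the Oort Cloud
TARGET_DISTANCE_M = 3e6 * LIGHT_YEAR_M   # a few million light years, well within the Local Group

theta_rad = WAVELENGTH_M / BASELINE_M            # diffraction-limited angular resolution
resolution_m = theta_rad * TARGET_DISTANCE_M     # smallest resolvable feature at the target

print(f"angular resolution ~ {theta_rad:.1e} rad")
print(f"spatial resolution at target ~ {resolution_m:.1f} m")
# Metre-scale resolution, so pyramid-sized structures and field boundaries are
# at least resolvable in principle; collecting enough photons is another story.
```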
Note also that among “P(Doom before K2) < 0.9999, fate of universe is STL paperclip nebulae” worlds, there’s not much Outside View evidence that P(Human doom before K2) is high as opposed to low: P(random observer is Us) is not substantially affected by whether there are N or N+1 K2 civilisations the way it is by whether there are 0 or 1 such civilisations (this is what I was talking about with aliens breaking the conventional Doomsday Argument). So this would be substantially more optimistic than my proposal. The “P(Doom before K2) < 0.9999, fate of universe is STL paperclip nebulae” scenario means we get wiped out eventually, but we (and aliens) could still have astronomically-positive utility before then, as opposed to being Doomed Right Now (though we could still be Doomed Right Now for Inside View reasons).
To your first point:
You’re saying it seems more likely that FTL is possible than that every single civilization wipes itself out. Intuitively, I agree, but it’s hard to be sure.
I’d say it’s not that unlikely that P(doom before K2) > 0.9999. I know more about AI and alignment than I do physics, and I’d say it’s looking a lot like AGI is surprisingly easy to build once you’ve got the compute (and less of that than we thought), and that coordination is quite difficult. Long-term stable AGI alignment in a selfish and shortsighted species doesn’t seem impossible, but it might be really hard (and I think it’s likely that any species creating AGI will have barely graduated from being animals, like we have, so that could well be universal). On the other hand, I haven’t kept up on physics, much less debates on how likely FTL is.
I think there’s another, more likely possibility: other solutions to the Fermi paradox. I don’t remember the author, but there’s an astrophysicist arguing that it’s quite possible we’re the first in our galaxy, based on the frequency of sterilizing nova events, particularly nearer the galactic center. There are a bunch of other galaxies 100,000-1m light years away, which isn’t that far on the timescale of the universe’s ~14-billion-year lifespan. But this interacts with the timelines for forming habitable planets, and with the timelines of nova and supernova events sterilizing most planets frequently enough to prevent intelligent life. Whew.
Hooray, LessWrong for revealing that I don’t understand the Fermi Paradox at all!
Let me just mention my preferred solution, even though I can’t make an argument for its likelihood:
Aliens have visited. And they’re still here, keeping an eye on things. Probably not any of the ones they talk about on Ancient Mysteries (although current reports from the US military indicate that they believe they’ve observed vehicles we can’t remotely build, and it’s highly unlikely to be a secret program of the US or any other world power, so maybe there are some oddly careless aliens buzzing around...)
My proposal is that a civilization that achieves aligned AGI might easily elect to stay dark: no Dyson spheres that can be seen by monkeys, and perhaps more elaborate means to conceal their (largely virtual) civilization. They may fear encountering either a hostile species with its own aligned AGI, or an unaligned AGI. One possible response is to stay hidden, possibly while preparing to fight. It does seem odd that hiding would work, because an unaligned AGI should be expanding its paperclipping projects at near light speed anyway, but there are plenty of possible twists to the logic that I haven’t thought through.
That interacts with your premise that K2 civilizations should be easy to spot. I guess it’s a claim that advanced civilizations don’t hit K2, because they prefer to live in virtual worlds, and have little interest in expanding as fast as possible.
Anyway, I should drag my head out of this fun space and go do something more pragmatically useful. I intend to help our odds of survival, even if we’re ultimately doomed based on this anthropic reasoning.
I guess it’s a claim that advanced civilizations don’t hit K2, because they prefer to live in virtual worlds, and have little interest in expanding as fast as possible.

This would be hard. You would need active regulations against designer babies and/or reproduction.
Because, well, suppose 99.9% of your population wants to veg out in the Land of Infinite Fun. The other 0.1% thinks a good use of its time is popping out as many babies as possible. Maybe they can’t make sure their offspring agree with this (hence the mention of regulations against designer babies, although even then natural selection will be selecting at full power for any genes producing a tendency to do this), but they can brute-force through that by having ten thousand babies each—you’ve presumably got immortality if you’ve gotten to this point, so there’s not a lot stopping them. Heck, they could even flee civilisation to escape the persecution and start their own civilisation which rapidly eclipses the original in population and (if the original’s not making maximum use of resources) power.
Giving up on expansion is an exclusive Filter, at the level of civilisations (they all need to do this, because any proportion of expanders will wind up dominating the end-state) but also at the level of individuals (individuals who decide to raise the birth rate of their civilisations can do it unilaterally unless suppressed). Shub-Niggurath always wins by default—it’s possible to subdue her, but you are not going to do it by accident.
(The obvious examples of this in the human case are the Amish and Quiverfulls. The Amish population grows rapidly because it has high fertility and high retention. The Quiverfulls are not currently self-sustaining because they have such low retention that 12 kids/woman isn’t enough to break even, but that will very predictably yield to technology. Unless these are forcibly suppressed, birth rate collapse is not going to make the human race peter out.)
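A toy version of the expander arithmetic (the growth numbers are arbitrary assumptions, far gentler than ten thousand babies each):

```python
# Toy model: a tiny high-fertility subpopulation versus a static majority.
# The growth multiplier is an arbitrary assumption, far below "ten thousand babies each".

static_pop = 999_000_000   # the 99.9% in the Land of Infinite Fun, no net growth
expander_pop = 1_000_000   # the 0.1% who prioritise having as many children as possible
GROWTH_PER_GENERATION = 3  # net multiplier on the expander subpopulation per generation

generations = 0
while expander_pop < static_pop:
    expander_pop *= GROWTH_PER_GENERATION
    generations += 1

print(f"expanders outnumber the rest after {generations} generations")
# Prints 7 here; with immortal parents and designer-baby-scale fertility it
# would be far faster, and the expanders set the long-run population either way.
```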
Anyway, I should drag my head out of this fun space and go do something more pragmatically useful. I intend to help our odds of survival, even if we’re ultimately doomed based on this anthropic reasoning.

Yes! Please do! I’m not at all trying to discourage people from fighting the good fight. It’s just, y’know, I noticed it and so I figured I’d mention it.
I think this depends on whether you use SIA or SSA or some other theory of anthropics.
Pardon my ignorance; I don’t actually know what SIA and SSA stand for.
Expansions to google: self-indication assumption and self-sampling assumption. These are terrible names and I can never remember which one’s which without a lookup; one of them is a halfer on the Sleeping Beauty problem and the other is a thirder.
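For what it’s worth, here’s the standard toy calculation on Sleeping Beauty under each assumption, as I understand it (see the links below):

```python
# Sleeping Beauty: fair coin; Heads -> 1 awakening, Tails -> 2 awakenings.
# Credence in Heads on awakening under each assumption (standard textbook treatment).

P_HEADS = 0.5
AWAKENINGS = {"heads": 1, "tails": 2}

# SIA: weight hypotheses by how many observer-moments like yours they contain.
sia_heads = (P_HEADS * AWAKENINGS["heads"]) / (
    P_HEADS * AWAKENINGS["heads"] + (1 - P_HEADS) * AWAKENINGS["tails"]
)

# SSA: sample yourself only from observers within each possible world; since you
# are awakened in both worlds, waking up tells you nothing about the coin.
ssa_heads = P_HEADS

print(f"SIA credence in Heads: {sia_heads:.3f}")  # 0.333 -> the 'thirder' answer
print(f"SSA credence in Heads: {ssa_heads:.3f}")  # 0.500 -> the 'halfer' answer
```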
https://www.lesswrong.com/tag/self-sampling-assumption
https://www.lesswrong.com/tag/self-indication-assumption
https://en.wikipedia.org/wiki/Sleeping_Beauty_problem
https://en.wikipedia.org/wiki/Anthropic_Bias_(book)
and here’s some random paper that came up when I googled that:
http://philsci-archive.pitt.edu/16088/1/anthropic.pdf
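Thanks, I hate it.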