“Already populated” is a red herring. What’s the probability that >50% of the universe will ever be populated? I don’t see any reason for it to be sensitive to how well things go on Earth in the next 100 years.
I think it is likely that we are the only spontaneously-created intelligent species in the entire 4-manifold that is the universe, space and time included (excluding species which we might create in the future, of course).
I’m curious to know how likely, and why. But do you agree that aliens are relevant to evaluating astronomical waste?
That seems contrary to the http://en.wikipedia.org/wiki/Self-Indication_Assumption
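Roughly, SIA says that, given you exist, you should weight hypotheses in proportion to how many observers they predict, which is why "we are probably the only intelligent species" takes such a large hit under it. A minimal sketch, with hypotheses and numbers invented purely for illustration:

```python
# Toy illustration of the Self-Indication Assumption (SIA): given that you
# exist, favour hypotheses on which more observers exist, in proportion to
# how many observers they predict. Hypotheses and numbers are invented.

hypotheses = {
    # name: (prior probability, predicted number of intelligent species)
    "we are alone":        (0.5, 1),
    "life is fairly rare": (0.3, 1_000),
    "life is common":      (0.2, 1_000_000),
}

# SIA posterior weight is proportional to prior * predicted observer count.
weights = {name: prior * n for name, (prior, n) in hypotheses.items()}
total = sum(weights.values())

for name, w in weights.items():
    print(f"{name}: {w / total:.6f}")
# "we are alone" ends up with a posterior of a few parts in a million, which is
# why claiming we are probably the only intelligent species looks contrary to SIA.
```

On those toy numbers, almost all of the posterior lands on the observer-rich hypotheses, however modest their priors.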
Do you have a critique—or a supporting argument?
Yes, I have a critique. Most of anthropics is gibberish. Until someone makes anthropics work, I refuse to update on any of it. (Apart from the bits that are commonsensical enough to derive without knowing about “anthropics”, e.g. that if your fishing net has holes 2 inches big, don’t expect to catch fish smaller than 2 inches wide.)
I don’t think you can really avoid anthropic ideas—or the universe stops making sense. Some anthropic ideas can be challenging—but I think we have got to try.
Anyway, you gave the critique—but didn’t offer a supporting argument. I can’t think of very much that you could say. We don’t have very much idea yet about what’s out there—and claims to know such things just seem over-confident.
Basically Rare Earth seems to me to be the only tenable solution to Fermi’s paradox.
Fermi’s paradox, as an argument that there are no aliens, surely applies only within our own galaxy. Many galaxies are distant, and intelligent life forming there concurrently (or long before us) is quite compatible with it not having arrived on our doorsteps yet—due to the speed-of-light limitation.
If you think we should be able to at least see life in distant galaxies, then, in short, not really—or at least we don’t know enough to say yea or nay on that issue with any confidence yet.
The Andromeda Galaxy is 2.5 million light-years away. The universe is about 13,750 million years old. Therefore that’s not far enough away to protect us from colonizing aliens travelling at 0.5c or above.
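A quick back-of-the-envelope check of that claim (a sketch; the 0.5c figure is the one assumed above, and all numbers are rough):

```python
# Back-of-the-envelope check of the claim above; all figures are rough.
distance_mly = 2.5       # Andromeda distance, millions of light years
speed_c = 0.5            # assumed colonization speed, fraction of lightspeed
age_myr = 13_750         # age of the universe, millions of years

travel_time_myr = distance_mly / speed_c
print(travel_time_myr)               # 5.0 million years for the crossing
print(travel_time_myr / age_myr)     # ~0.0004 of the time available
```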
The universe is about 13,750 million years old. The Fermi argument suggests that—if there were intelligent aliens in this galaxy, they should probably have filled it by now—unless they originated very close to us in time, which seems unlikely. The argument applies much more weakly to other galaxies, because they are much further away, and they are separated from each other by huge regions of empty space. Also, the Andromeda Galaxy is just one galaxy. Say only one galaxy in 100 has intelligent life—and the Andromeda Galaxy isn’t among them. That bumps the required travel distance up to 10 million light-years or so.
Even within this galaxy, the Fermi argument is not that strong. Maybe intelligent aliens formed in the last billion years, and haven’t made it here yet—because space travel is tricky, and 0.1c is about the limit. The universe is only about 14 billion years old, and for some of that time there were not many second-generation stars. The odds are against there being aliens nearby—but they are not that heavily stacked. For other galaxies, the argument is much, much less compelling.
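Rough numbers for both versions of the argument (a sketch; the 1-in-100 frequency, the 0.1c limit and the galaxy size are the illustrative assumptions used above, not measurements):

```python
# Rough numbers for both versions of the argument; order-of-magnitude only.

# Between galaxies: if only 1 galaxy in 100 hosts intelligent life, the typical
# distance to the nearest such galaxy scales with the cube root of 100.
andromeda_distance_mly = 2.5
nearest_inhabited_mly = andromeda_distance_mly * 100 ** (1 / 3)
print(round(nearest_inhabited_mly, 1))   # ~11.6, i.e. "10 million light-years or so"

# Within this galaxy: raw crossing time at the assumed 0.1c limit.
galaxy_diameter_ly = 100_000             # roughly the Milky Way's diameter
crossing_time_yr = galaxy_diameter_ly / 0.1
print(crossing_time_yr)                  # about 1e6 years of pure travel time

# The raw crossing time is small next to a billion years, so the within-galaxy
# argument hinges on how recently nearby aliens could have arisen and on how
# much slower a real colonization wave is than its peak travel speed.
```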
There are strained applications of anthropics, like the doomsday argument. “What happened here might happen elsewhere” is much more innocuous.
There are some more practical and harmless applications as well. In Nick Bostrom’s Anthropic Bias, for example, there is an application of the Self-Sampling Assumption to traffic analysis.
Bostrom says: “Cars in the next lane really do go faster”
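That result is less paradoxical than it sounds: slower traffic is denser, so a randomly sampled driver is disproportionately likely to be in the slower lane. A toy simulation, with invented lane speeds and flows, makes the point:

```python
import random

# Toy model of Bostrom's traffic example: two lanes carrying the same flow
# (cars per hour), one moving slower. Slower traffic is denser, so a randomly
# sampled driver is more likely to be found in the slow lane -- and hence to
# see the other lane going faster. Numbers are invented for illustration.

flow_cars_per_hour = 1800
speeds_kmh = {"lane A": 40, "lane B": 80}

# Density (cars per km) = flow / speed, so the slow lane holds more cars.
densities = {lane: flow_cars_per_hour / v for lane, v in speeds_kmh.items()}

# Sample many random drivers in proportion to density (self-sampling style).
lanes = list(densities)
weights = [densities[lane] for lane in lanes]
samples = random.choices(lanes, weights=weights, k=100_000)

sees_other_lane_faster = sum(
    1 for lane in samples
    if speeds_kmh[lane] < max(v for other, v in speeds_kmh.items() if other != lane)
) / len(samples)

print(sees_other_lane_faster)  # ~0.67: two thirds of sampled drivers are in the slow lane
```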
I agree.
Even Nick Bostrom, who is arguably the leading expert on anthropic problems, rejects SIA for a number of reasons (see his book Anthropic Bias). That alone is a pretty big blow to its credibility.
That is curious. Anyway, the self-indication assumption seems fairly straightforward (as much as any anthropic reasoning is, anyway). The critical material from Bostrom that I have read on the topic seems unpersuasive. He doesn’t seem to “get” the motivation for the idea in the first place.
If you think there is a significant probability that an intelligence explosion is possible or likely, then that question is sensitive to how well things go on Earth in the next 100 years.
However likely they are, I expect intelligence explosions to be evenly distributed through space and time. If 100 years from now Earth loses by a hair, there are still plenty of folks around the universe who will win or have won by a hair. They’ll make whatever use of the 80 billion galaxies that they can—will they be wasting them?
If Earth wins by a hair, or by a lot, we’ll be competing with those folks. This also significantly reduces the opportunity cost Roko was referring to.