First, in the interest of full disclosure, the reason I’m here on LW is to maximize my contribution to promoting intelligent life. It currently appears that maximizing the number of Quality Adjusted Life Years, integrated over the period from now until the heat death of the universe, can only be achieved through spaceflight and spreading life/AI through the solar system, and then the galaxy. This can be done either through directed panspermia or by spreading intelligent life/AI directly. I have spent the last year or so trying to find any flaws in my understanding, and so I’m about to do everything I can to tear your initial argument to shreds. That’s not necessarily because I don’t agree with you (although my reasoning diverges about halfway through), but rather a concerted effort to avoid confirmation bias. I don’t want to devote my entire life to something sub-optimal just because I’m afraid to put my views under scrutiny.
So, if there is a filter, it probably lies in the future (or at least the new evidence tilts us in that direction).
You mentioned several possibilities for a great filter in the past, but that was by no means a comprehensive list. Here’s a longer list, off the top of my head:
Habitable stars are rare. (roughly sun-sized, minimal solar flares, etc) Poor candidate, as you point out.
Habitable planets are rare. (Orbit within the habitable zone, liquid H2O, ingredients for life) You touched on this, but our understanding of the source of Earth’s water is poor, so I don’t think we can discard this as a possibility. We have an oddly large moon, which may have played a role. First, its gravity ensured that the Earth’s rotational axis stays roughly perpendicular to its orbital plane most of the time. This means that the planet is baked roughly evenly, rather than spending millions of years with the north pole facing the sun. Tidal forces also affect the mantle, which helps sustain our magnetosphere, which in turn prevents atmospheric loss to space. There are a surprising number of other theories linking the moon to life on Earth.
Panspermia / Abiogenesis is rare. (transport may be limited by radiation/mutations, while genesis of new life may require rare environments or energy sources) We have reasonable evidence that life could survive within rocks blasted off of a planet’s surface long enough to seed nearby planets, but not necessarily that life could survive the long voyage between nearby stars. We’ve demonstrated that most, but not all, essential amino acids can be generated under conditions similar to those of early Earth. Also, there’s a weird coincidence where the formation of the first life on earth seems to coincide well with the end of the late heavy bombardment, which might have created conditions conducive to the formation of life late enough after planetary formation that geological activity could settle down a bit. There doesn’t seem to be any reason why there should have been a second heavy bombardment period, though, so that may be unique to our solar system.
Either photosynthesis is rare, or the Oxygen Catastrophe generally kills off all species. (High concentrations of oxygen are highly poisonous, which caused a massive extinction event. Additionally, losing all that CO2 from the atmosphere cooled Earth tremendously, since the sun wasn’t so bright back then. This caused the longest Snowball Earth episode in the planet’s history, in which the planet’s oceans froze solid and all the land was covered in one massive glacier.) It seems plausible that life would usually fail to recover from an event like this.
Prokaryotic life is common, but Eukaryotic life is rare. (It’s really hard to evolve a cell nucleus.) Eukaryotes only appeared about 2 billion years after Prokaryotes; halfway through the chain of evolution from the first life until today.
Eukaryotic life is common, but multicellular life is rare. We’ve only had it for ~500 million years.
Multicellular life is common, but complex life on land is rare. It’s possible that we could never have developed spines or crawled onto land, or that animal life itself might be rare. This seems much less plausible, since it seems to have sprung directly from the evolution of multicellular life, in a fairly spectacular explosion of complexity.
Complex life is common, but is regularly wiped out before it can become intelligent. There have been 5 big extinction events in Earth’s history, most recently the meteor that killed the dinosaurs. Although these weren’t enough to wipe out all life on Earth, there are several cosmic threats that could. These include collision with another planet or other sufficiently large object, which might be caused by orbital periods syncing up with Jupiter or by passing stars or black holes. Additionally, Gamma Ray Bursts are extremely common, and might regularly wipe out all life in the inner galaxy, where the stars are closer together. This would explain why we evolved out on the edge of a spiral arm of the Milky Way, and not closer to the galactic center.
Complex life is common, but intelligent life is rare. There seem to be a lot of somewhat intelligent creatures that aren’t closely related to us (parrots, octopuses, dolphins, etc.). There are even several animals that make limited use of tools. What is rare, however, appears to be the capacity for abstract thought. Chimps can learn from each other by copying, but have a hard time learning or teaching without demonstration. We too are much better at learning by copying others, but we can also learn from abstract symbols written on a piece of paper. This appears to be a result of runaway evolution, where humans selected for mates with a high capacity for abstract thought, perhaps via a high capacity to predict others’ actions and plot accordingly.
Intelligent life is common, but technological civilizations are rare. We have had several steady-state conditions over our species’ history. We used the first simple stone tools ~2.3 million years ago, and then stood upright and mastered fire ~1.5 million years ago. We haven’t evolved noticeably over the past 200,000 years, and yet we only developed agriculture and colonized the planet 10,000 years ago. Some of that may be due to the most recent ice age, but not all of it. We didn’t invent bronze or written language until 5,000 years ago. All the great advanced civilizations made relatively small advances in technology, and put most of their effort into infrastructure rather than R&D. The only thing the Romans invented was concrete; everything else was an adaptation of ideas from other cultures. Western civilization is really the first culture to invest heavily in R&D, and we are generally still bad at it. Places like Silicon Valley are the exception to the rule.
Given all this, I wouldn’t be so quick to assume that the great filter is in front of us. All this must be weighed against the various existential risks. Nuclear war was a close call in the Cold War, and the risk is an order of magnitude lower now, but is by no means gone. AI gets discussed a lot on here, but I don’t think biological warfare gets the attention it deserves. Our understanding of biology is growing rapidly, and I think it may one day be relatively easy for anyone to genetically engineer an unusually dangerous pandemic virus. Additionally, advanced civilizations in general tend to last only on the order of a hundred years, according to this paper. That’s more or less in line with the Future of Humanity Institute’s informal Global Catastrophic Risk Survey. (The mean estimate for humanity’s chance of going extinct this century was on the order of 20%.) That said, Nick Bostrom himself appears to think that the great filter is more likely to lie behind us than ahead of us. To me, it seems like it could easily go either way, but since Bostrom has been researching this much longer than I have, I’m inclined to shift my probability estimate a bit further toward the great filter being behind us.
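To get a feel for what a ~20%-per-century extinction estimate implies over longer horizons, the survival probability simply compounds. A minimal sketch (the 20% figure is the survey estimate quoted above; the multi-century horizons are my arbitrary illustration):

```python
# Compound a per-century extinction risk over several centuries.
# 0.20 is the FHI survey's mean estimate quoted above; the horizons
# shown are arbitrary illustrative choices.
p_extinct_per_century = 0.20

for centuries in (1, 5, 10):
    survival = (1 - p_extinct_per_century) ** centuries
    print(f"{centuries:2d} centuries: P(survival) = {survival:.3f}")
```

Even a modest per-century risk, if it stayed constant, would make long-run survival unlikely, which is part of why a convergent future filter is at least plausible.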
The above dealt primarily with the first half of your post, but let me also address the 2nd half. You’ve assigned several probability estimates to various outcomes of our civilization:
Collapse/Extinction: “in the 1% to 50% range.” I’m inclined to agree with you on this one, as described in the last paragraph of my above post.
Biological/Mixed Civilization: “This scenario is almost not worth mentioning: prior < 1%” I think you’ve defined this a bit too narrowly. We don’t yet see any limiting factor for AI advancement besides physics, but that doesn’t mean that one won’t make itself apparent. Maybe this factor will turn out to be teraFLOPS (i.e., limited by Moore’s law) or energy (limited by our energy production capacity) or even matter (limited by the amount of rare earth elements necessary to make computronium). But it could also happen that we fail to make a superintelligence at all, or that AI eventually achieves most, but not all, of human mental abilities. The likelihood of developing a general intelligence increases asymptotically with time, but I think it would be a mistake to assume that it is increasing asymptotically toward 1. It could easily be approaching 0.8 or some other value which is hard to calculate. The existence of the human mind shows that consciousness can be built out of atoms, but not necessarily that it can be built out of a string of transistors, or that it is simple enough that we can ever understand it well enough to reproduce it in code. There’s also the existential risk of developing a flawed AI. We only have one shot at it, and the evidence seems to be against getting it right on the first try. I suspect that the supermajority of civilizations that develop AIs develop flawed AIs. Even if 90% develop an AI before going to the stars, perhaps >99.9999% are wiped out by a poorly designed AI. This would lead to many more “Biological/Mixed Civilizations” than AI civilizations, if the flawed AIs tend to wipe themselves out or fail to spread out into the universe.
PostBiological Warm-tech AI Civilization: “I assign a prior to the warm-tech scenario that is about the same as my estimate of the probability that the more advanced cold-tech (reversible quantum computing, described next) is impossible: < 10%.” This seems slightly low to me, but not by much. “This particular scenario is based on the assumption that energy is a key constraint, and that civilizations are essentially stellavores which harvest the energy of stars.” Although this state doesn’t follow from energy being a limiting factor (i.e., biological/mixed civilizations may also be energy limited), I agree that such a civilization would eventually become energy limited. I see two ways of solving this: better harvesting (i.e., Dyson swarms, since Dyson spheres are likely mass-limited) or a broader civilization (if it takes less energy to send a colony to the nearest star, then you do that before you start building a Dyson swarm).
From Warm-tech to Cold-tech: This seems to be where you are putting the majority of your probability mass. I’d probably put less, but that’s not actually my main contention. I don’t buy that this is sufficient reason to travel to the interstellar medium, away from such a ready energy and matter source as a solar system. You list 3 reasons: lower energy bit erasures, superconductivity, and quantum computer efficiency. Bit erasure costs seem like they would be more than made up for by a surplus of energy available from plentiful solar power, materials for fusion plants, etc. Only a few superconductors require temperatures below ~50 Kelvin, and you can get that anywhere perpetually shaded from the sun, such as the craters at the north and south poles of the moon (~30 Kelvin). If you want it somewhere else, stop an asteroid from spinning and build a computer on the permanently shaded side. I’m not sure that quantum computers need to be below that either. Anywhere you go, you’ll still be heated by cosmic microwave background radiation to ~4 K. Is an order of magnitude decrease in temperature really worth several orders of magnitude decrease in energy/matter harvesting ability? In order to expand exponentially, such a system would still need huge amounts of matter for superconductors and whatever else.
I’m inclined to agree with you on this one, as described in the last paragraph of my above post.
I should have pointed out that even a high probability of collapse is unlikely to act as a filter, because collapse has to be convergent across all civilizations, and a single surviving civ can colonize.
From Warm-tech to Cold-tech: This seems to be where you are putting the majority of your probability mass.
It is where I am putting most of my prior probability mass. There are three considerations:
Engineering considerations—the configurations which maximize computation are those where the computational mass is far from heat sources such as stars which limit computation. With reversible computing, energy is unlikely to be a constraint at all, and the best use of available mass probably involves ejecting the most valuable mass out of the system.
Stealth considerations—given no radical new physics, it appears that stealth is the only reliable way to protect a civ’s computational brains. Any civ hanging out near a star would be a sitting duck.
Simulation argument selection effects—discussed elsewhere, but basically the coldtech scenario tends to maximize the creation of simulations which produce observers such as ourselves.
After conditioning on observations of the galaxy to date, the coldtech scenario contains essentially all of the remaining probability mass. Of course, our understanding of physics is incomplete, and I didn’t have time to list all of the plausible models for future civs. There is the transcension scenario, which is related to my model of coldtech civs migrating away from the galactic disk.
One other little thing I may have forgot to mention in the article: the distribution of dark matter is that of a halo, which is suspiciously close to what one would expect in the expulsion scenario, where elder civs are leaving the galaxy in directions away from the galactic disk. Of course, that effect is only relevant if a good chunk of the dark matter is usable for computation.
Bit erasure costs seem like they would be more than made up for by a surplus of energy available from plentiful solar power, materials for fusion plants, etc.
No—I should have elaborated on the model more, but the article was already long.
Given some planemo (asteroid, moon, planet, whatever) of mass M, we are concerned with maximizing the total quantity of computation in ops over the future that we can extract from that mass M.
If high tech reversible/quantum computing is possible, then the designs which maximize the total computation are all temperature limited, due to Landauer’s limit.
Now there are actually many constraints to consider. There is a structural constraint that even if your device creates no heat, there is a limit to the ops/s achievable by one molecular transistor—and this is actually also related to Landauer’s principle. Whether the computer is reversible or not, it still requires about 100 kT joules per reliable bitop—the difference is that the irreversible computer converts that energy into heat, whereas the reversible design recycles it.
If reversible/quantum computing is possible, then there is no competition—the reversible designs will scale to enormously higher computational densities (that would result in the equivalent of nuclear explosions if all of those bits were erased).
Temperature then becomes the last key thing you can optimize for, as the background temperature limits your effective cooling capability.
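The temperature dependence above can be made concrete with the Landauer bound itself (kT ln 2 per erased bit; reliable operations cost roughly 100× that, per the discussion above). A minimal sketch comparing the temperatures mentioned in this thread:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_j_per_bit(temp_k: float) -> float:
    """Minimum energy to erase one bit at temperature T (kT ln 2)."""
    return K_B * temp_k * math.log(2)

# The erasure floor scales linearly with background temperature:
for label, t in [("room temperature (300 K)", 300.0),
                 ("shaded lunar crater (~30 K)", 30.0),
                 ("near the CMB floor (~3 K)", 3.0)]:
    print(f"{label}: {landauer_j_per_bit(t):.2e} J/bit erased")
```

Since the bound is linear in T, moving from ~30 K to ~3 K only buys one order of magnitude per erased bit, which is the trade-off being debated here.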
Anywhere you go, you’ll still be heated by cosmic microwave background radiation to ~4 K. Is an order of magnitude decrease in temperature really worth several orders of magnitude decrease in energy/matter harvesting ability?
Well—assuming that really powerful reversible computing is possible, then the answer—rather obviously—is yes.
But again energy harvesting is only necessary if energy is a constraint, which it isn’t in the coldtech model.
Why not just build an inferior computer design that only achieves 10% of the maximum capacity? Intelligence requires computation. As long as there exists some reasonably low energy technique for ejecting from the solar system, it results in a large payoff multiplier. Of course you can still leave a bunch of stuff in the system, and perhaps even have a form of a supply line—although that could reduce stealth and add risk.
There is admittedly a lot of hand waving going on in this model. If I had more time I would develop a more accurate model focusing on some of the key unknowns.
One key variable is the maximum practical reversibility ratio, which is the ratio of bitops of computation per bitop erased. This determines the maximum efficiency gain from reversible computing. Physics doesn’t appear to have a hard limit for this variable, but there will probably be engineering limits.
For example, an advanced civ will at the very least want to store its observational data from its sensors in a compressed form, which implies erasing some minimal number of bits. But if you think about a big civ occupying a sphere, the input bits/s coming in from a few sparse sensor ports on the surface is going to be incredibly tiny compared to the bitop/s rate across the whole volume.
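The surface-vs-volume point can be sketched numerically: sensor input scales with the square of the civ’s radius while internal computation scales with the cube, so the achievable reversibility ratio grows linearly with size. A toy model (the per-area and per-volume constants are arbitrary placeholders, not claims about real hardware):

```python
# Toy scaling of reversibility ratio for a spherical civ:
# bits/s entering through surface sensors (must eventually be
# compressed/erased) vs. bitops/s across the whole volume.
# Both rate constants are arbitrary illustrative placeholders.
import math

def reversibility_ratio(radius: float,
                        sensor_bits_per_area: float = 1.0,
                        ops_per_volume: float = 1.0) -> float:
    surface_input = sensor_bits_per_area * 4 * math.pi * radius ** 2
    volume_ops = ops_per_volume * (4 / 3) * math.pi * radius ** 3
    return volume_ops / surface_input  # bitops per erased bit

for r in (1.0, 10.0, 100.0):
    print(f"R = {r:6.1f}: ratio = {reversibility_ratio(r):.1f}")
```

The ratio works out to R/3 in these units, so a larger civ needs to erase a vanishing fraction of its bitops just to keep up with its sensors.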
First, let me try to summarize your position formally. Please let me know if I’m misrepresenting anything. We seem to be talking past each other on a couple subtopics, and I thought this might help clear things up.
1 p(type III civilization in milky way) ≈ 1
1.1 p(reversible computing | type III civilization in milky way) ≈ .9
1.1.1 p(¬energy or mass limited | reversible computing) ≈ 1
1.1.1.1 p(interstellar space | ¬ energy or mass limited) is large
1.1.1.2 p(intergalactic space | ¬ energy or mass limited) is very large
1.1.1.3 p( (interstellar space ↓ intergalactic space) | ¬ energy or mass limited) ≈ 0
1.1.2 p(energy or mass limited | reversible computing) ≈ 0
1.2 p(¬reversible computing | type III civilization in milky way) ≈ .1
2 p(¬type III civilizations in milky way) ≈ 0
Note that 1.1.1.1 and 1.1.1.2 are not mutually exclusive, and that ↓ is the joint denial / NOR boolean logic operator. Personally, after talking with you about this and reading through the reversible computing Wikipedia article (which I found quite helpful), my estimates have shifted up significantly. I originally started to build my own sort of probability tree similar to the one above, but it quickly became quite complex. I think the two of us are starting out with radically different structures in our probability trees. I tend to presume that the future has many more unknown factors than known ones, and so is fundamentally extremely difficult to predict with any certainty, especially in the far future.
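As a quick sanity check, the top branches of the tree can be multiplied out (point values standing in for the “≈” estimates listed above; this is just the arithmetic, not a new claim):

```python
# Multiply out the branch estimates from the probability tree above.
# Point values stand in for the "≈" estimates.
p_type3 = 1.0        # 1: type III civilization in Milky Way
p_reversible = 0.9   # 1.1: reversible computing, given a type III civ
p_not_limited = 1.0  # 1.1.1: not energy/mass limited, given reversible

p_coldtech_path = p_type3 * p_reversible * p_not_limited
print(f"P(unconstrained reversible type III civ) ≈ {p_coldtech_path:.2f}")
```

Even with generous point estimates, the joint probability is bounded by the weakest branch, which is why the structure of the tree matters more than any single node.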
The only thing we know for sure is the laws of physics, so we can make some headway by presuming that one specific barrier is the primary limiting factor of an advanced civilization, and see what logical conclusions we can draw from there. That’s why I like your approach so much; before reading it I hadn’t really given much thought to civilizations limited primarily by things like Landauer’s limit rather than energy or raw materials. However, without knowing their utility function, it is difficult to know for sure what limits will be their biggest concern. It’s not even certain that such a civilization would have one single unified utility function, although it’s certainly likely.
If I were in the 18th century trying to predict what the 21st century would be like, even if I were a near-perfect rationalist, I would almost certainly get almost everything wrong. I would see limiting factors like transportation and food. From this, I might presume that massive numbers of canals, rather than the automobile, would address the need for trade. I would also presume that food limited population growth, and might hypothesize that once we ran out of land to grow food we would colonize the oceans with floating gardens. The 18th century notion of a type I civilization would probably be one that farmed the entire surface of a planet, rather than one that harvested all solar energy. The need for electricity was not apparent, and it wasn’t clear that the industrial revolution would radically increase crop yields. Perhaps fusion power will make electricity use a non-issue, or perhaps ColdTech will decrease demand to the point where it is a non-issue. These are both reasonably likely hypotheses in a huge, mostly unexplored hypothesis space.
But let’s get to the substance of the matter.
1 and 2: I tried to argue for a substantially lower p value here, and I see that you responded, so I’ll answer on that fork instead. This comment is likely to be long enough as is. :)
1.1 and 1.2: I definitely agree with you that a sufficiently advanced civilization would probably have ColdTech, but among many, many other technologies. It’s likely to be a large fraction of the mass of all their infrastructure, but I’m not sure if it would be a super-majority. This would depend to a large degree on unknown unknowns.
1.1.1 and 1.1.2: I’m inclined to agree with you that ColdTech technology itself isn’t particularly mass or energy limited. You had this to say:
Engineering considerations—the configurations which maximize computation are those where the computational mass is far from heat sources such as stars which limit computation. With reversible computing, energy is unlikely to be a constraint at all, and the best use of available mass probably involves ejecting the most valuable mass out of the system.
I would still think that manufacturing and ejecting ColdTech is likely to be extremely mass and energy intensive. If the civilization expands exponentially limited only by their available resources, the observable effects would look much like other forms of advanced civilizations. Are you arguing that they would stay quite small for the sake of stealth? If so, wouldn’t it still make sense to spread out as much as possible, via as many independent production sites as possible? You touch on this briefly:
As long as there exists some reasonably low energy technique for ejecting from the solar system, it results in a large payoff multiplier. Of course you can still leave a bunch of stuff in the system, and perhaps even have a form of a supply line—although that could reduce stealth and add risk.
I don’t see any reason not to just keep sending material out in different directions. Perhaps this is the underlying assumption that caused us to disagree, since I didn’t make the distinction between manufacturing being mass/energy limited and the actual computation being mass/energy limited. When you say that such a civilization isn’t mass/energy limited, are you referring to just the ColdTech, or the production too?
It seems like you could just have the ejected raw materials/ColdTech perform a course correction and a series of gravity assists based on the output from a random number generator, once they were out of observational distance from the origin system. This would ensure that no hostile forces could determine their location by finding the production facility still active. Instead of a handful of hidden colonies, you could turn a sizable fraction of a solar system’s mass, or even a galaxy’s mass, into computronium.
Hmm, I’m not sure what to make of your probability tree yet… but in general I don’t assign such high probabilities to any of these models/propositions. Also, I’m not sure what a type III civilization is supposed to translate to in the cold dark models that are temperature constrained rather than energy constrained. I guess you are using that to indicate how much of the galaxy’s usable computronium mass is colonized?
It is probably unlikely that even a fully colonized galaxy would have a very high computronium ratio: most of the mass is probably low value and not worth bothering with.
That’s why I like your approach so much; before reading it I hadn’t really given much thought to civilizations limited primarily by things like Landauer’s limit rather than energy or raw materials
Thanks. I like your analogies with food and other early resources. Energy is so fundamental that it will probably always constrain many actions (construction still requires energy, for example), but it isn’t the only constraint, and not necessarily the key constraint for computation.
I would still think that manufacturing and ejecting ColdTech is likely to be extremely mass and energy intensive.
Yes—agreed. (I am now realizing ColdTech really needs a better name)
If the civilization expands exponentially limited only by their available resources, the observable effects would look much like other forms of advanced civilizations.
No, the observable effects vary considerably based on the assumed technology. Let’s compare three models: stellavore, BHE (black hole entity) transcension, and CD (cold dark) arcilects.
The stellavore model predicts that civs will create Dyson spheres, which should be observable during the long construction period and may be observable afterwards. John Smart’s transcension model predicts black hole entities arising in or near stellar systems (although we could combine that with ejection, I suppose). The CD arcilect model predicts that civs will cool down some of the planemos in their systems, possibly eject some of those planemos, and then also colonize any suitable nomads.
Each theory predicts a different set of observables. The stellavore model doesn’t appear to match our observations all that well. The other two seem to match, though they are also just harder to detect; still, there are some key things we could look for.
For my CD arcilect model, we already have some evidence for a large number of nomads. Perhaps there is a way to distinguish between artificial and natural ejections. Perhaps the natural pattern is that ejections tend to occur early in system formation, whereas artificial ejections occur much later. Perhaps we could even get lucky and detect an unusually cold planemo with microlensing. Better modelling of the dark matter halos may reveal a match between ejection models and at least a baryonic component of the halo.
In the CDA model, stars become somewhat wasteful, which suggests that civs may favour artificial supernovas if such a thing is practical. At the moment I don’t see how one could get the energy/mass to do such a thing.
Those are just some quick ideas, I haven’t really looked into it all that much.
Are you arguing that they would stay quite small for the sake of stealth? If so, wouldn’t it still make sense to spread out as much as possible, via as many independent production sites as possible?
No, I agree that civilizations will tend to expand and colonize, and yes stealth considerations shouldn’t prevent this.
I don’t see any reason not to just keep sending material out in different directions…
Thinking about it a little more, I agree. And yes when I mention not being energy constrained, that was in reference only to computation, not construction. I assume efficient construction is typically in place, using solar or fusion or whatever.
It seems like you could just have the ejected raw materials/ColdTech perform a course correction and series of gravity assists based on the output from a random number generator, once they were out of observational distance from the origin system. This would ensure that no hostile forces could determine their location by finding the production facility still active.
Yes, this seems to be on the right track. However, the orbits of planetary bodies are very predictable and gravity assists are reversible operations (I think), which seems to imply that the remaining objects in the system will contain history sufficient for predicting the ejection trajectory (for a rival superintelligence). You can erase the history only by creating heat … so maybe you end up sending some objects into the sun? :) Yes actually that seems pretty doable.
Thanks for writing this up, I’ll add a direct link from the main article under the historical model/early filter section.
So, if there is a filter, it probably lies in the future (or at least the new evidence tilts us in that direction).
You mentioned several possibilities for a great filter in the past, but that was by no means a comprehensive list.
Yes. The article was already probably too long, and I wanted to focus on the future predictive parts of the model.
Before responding to some of your specific points, I will focus on a couple of key big picture insights that favor “lots of aliens” over any filter at all.
Bayesian Model Selection.
Any model/hypothesis which explains our observations as very rare events is intrinsically less likely than other models that explain our observations as typical events. This is just a simple consequence of Bayesian inference/Solomonoff Induction. A very rare event model is one which has a low P(E|H), which it must overcome with a high prior P(H) to defeat other hypothesis classes which explain the observations as typical (high probability) outcomes.
This is not quite a knockdown argument against the entire class of rare earth models, but it is close.
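The model-selection point above can be written as a two-hypothesis Bayes update. A minimal sketch (all the priors and likelihoods here are made-up placeholders, chosen only to show the structure of the argument):

```python
# Two-model Bayes update: a "rare earth" model with low P(E|H)
# vs. a "typical" model with high P(E|H). All numbers are
# illustrative placeholders, not estimates from the discussion.
prior_rare, prior_typical = 0.5, 0.5
lik_rare, lik_typical = 1e-3, 0.5  # P(observations | model)

evidence = prior_rare * lik_rare + prior_typical * lik_typical
post_rare = prior_rare * lik_rare / evidence
post_typical = prior_typical * lik_typical / evidence

print(f"P(rare earth | E) = {post_rare:.4f}")
print(f"P(typical   | E) = {post_typical:.4f}")
```

With equal priors, the posterior odds just equal the likelihood ratio, so a model that treats our observations as a fluke needs an enormous prior advantage to survive the update.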
Observational Selection Effects due to the Simulation Argument
Some physical universes tend to produce tons of simulated universes containing observers such as ourselves. This acts as a very large probability multiplier that strongly favors models which produce tons of simulations. The class of models I propose, where there are 1) lots of aliens and 2) strong motivations to simulate the history of other alien civs, describes exactly the types of conditions that maximize the creation of simulations and observers.
Now on to the potential early filter stages:
(1. Habitable stars are abundant (20 to 40 billion suitable candidates in the GHZ of our galaxy)
(2. Habitable planets are abundant. Water is common—Mars and many other bodies in our system have significant amounts of water.
We have an oddly large moon,
This is true. Our moon is unusual compared to the moons of other planets we can see. However, from the evidence in our system we can only conclude that our moon is roughly a 1 in 100 or 1 in 1000 event, not a 1 in a billion event. Even so, it is not at all clear that a moon like ours is necessary for life. There are many other means to the same end.
Even if our planet is a typical draw, it is likely to be an outlier in at least a few dimensions.
(3. Panspermia / Abiogenesis
Recent evidence seems to favor panspermia. For example—see the “Life Before Earth” paper and related.
Also, there’s a weird coincidence where the formation of the first life on earth seems to coincide well with the end of the late heavy bombardment,
That’s only a weird coincidence if one assumes abiogenesis on earth. Panspermia explains that ‘coincidence’ perfectly.
(5. Prokaryotic → Eukaryotic
(6. Multicellular
(7. “Complex Land Life”
Again any model that explains these evolutionary developments as rare events is intrinsically less likely than models which explain the developments as likely events. Systemic evolutionary theory—especially its computational and complexity theory variants—explains how variation and selection over time inevitably and automatically explores the genetic search space and moves through a series of attractors of escalating complexity. The events you describe are not rare—they are the equivalent of the main sequence for biology.
(8. Complex life is common, but is regularly wiped out before it can become intelligent.
Of all your points, I think this one is perhaps the most important. Large extinctions have also acted as key evolutionary catalysts, so the issue is somewhat more complex. To understand this issue in more detail, we should build galaxy simulations which model the distribution of these events. This would give us a better understanding of the variance in evolutionary timescales, which could give us a better idea concerning the predicted distribution over the age of civilizations. On worlds that have too many extinction events, life is wiped out. On worlds that have too few, life gets stuck. We can observe only that on our world the exact sequence of extinction events resulted in a path from bacteria to humans that took about 5 billion years. It is intrinsically unlikely that our exact sequence was somehow optimal for the speed of evolution, and other worlds could have evolved faster.
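A toy version of the galaxy simulation suggested above: model sterilizing events as a Poisson process and ask what fraction of worlds go long enough without one for intelligence to evolve. The event rates and the ~5 Gyr timescale are placeholders (the latter echoing the bacteria-to-humans figure in the paragraph above):

```python
import random

def fraction_surviving(rate_per_gyr: float, needed_gyr: float = 5.0,
                       trials: int = 100_000, seed: int = 0) -> float:
    """Fraction of worlds with no sterilizing event in `needed_gyr`.

    Sterilizing events are modeled as a Poisson process, so the time
    to the first event is exponentially distributed with `rate_per_gyr`.
    """
    rng = random.Random(seed)
    survived = sum(rng.expovariate(rate_per_gyr) > needed_gyr
                   for _ in range(trials))
    return survived / trials

# Placeholder sterilization rates, in events per Gyr:
for rate in (0.05, 0.2, 0.5):
    frac = fraction_surviving(rate)
    print(f"rate = {rate}: ~{frac:.3f} of worlds survive 5 Gyr unscathed")
```

Even this crude model makes the point in the paragraph: the distribution of event rates across the galaxy, not just the mean, determines how many worlds get a long enough quiet window.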
(9. Complex life is common, but intelligent life is rare.
I addressed this point specifically. Chimpanzees have about 5 billion cortical neurons, elephants have a little more, and some whales/dolphins are comparable. All three display comparably high levels of intelligence. Chimpanzees are very similar to the last common ancestor between ourselves and other primates—essentially they are right on the cusp of evolving into techno-cultural intelligence. So complex intelligence evolved in parallel in three widely separated lineages.
This is actually some of the strongest evidence against an early filter—as it indicates that the trajectory towards high intelligence is a strong attractor.
What is rare, however, appears to be the capacity for abstract thought.
This is basically nonsense unless you define ‘abstract thought’ as ‘human language’. Yes language (and more specifically complex lengthy cultural education—as feral humans do not have abstract thought in the way we do) is the key to human ‘abstract thought’. However, elephants and chimpanzees (and perhaps some cetaceans) are right on the cusp of being able to learn language. The upper range of their language learning ability comes close to the lower range of our language learning ability.
If you haven’t seen it yet, I highly recommend the movie “Project Nim”, which concerns an experiment in the 1970′s with attempting to raise a chimp like a human, using sign language.
In short, chimpanzee brains are very much like our own, but with a few differences in some basic key variables (tweaks). Our brains are both larger and tuned for slower development (neoteny). A chimpanzee actually becomes socially intelligent much faster than a human child, but the chimp’s intelligence also peaks much earlier. Chimps need to be able to survive on their own much earlier than humans. Our intelligence is deeper and develops much more slowly, tuned for a longer lifespan in a more complex social environment.
The reason that we are the only species to evolve language/technology is simple. Language leads to technology which quickly leads to civilization and planetary dominance. It is a winner take all effect.
(10. Technological civilization
Once you have language, technology and civilization follows with high likelihood.
We haven’t evolved noticeably over the past 200,000 years, and yet we only developed agriculture and colonized the planet 10,000 years ago.
Hunter-gatherers expanded across the globe and lived an easy life, hunting big dumb game until such game became rare, extinct, or adapted defenses. This led to a large extinction of the megafauna about 10,000 years ago, and agriculture followed naturally once the easy hunting life became too hard.
We didn’t invent bronze or written language until 5,000 years ago
Follows directly from agriculture leading to larger populations and warring city-states.
Given all this, I wouldn’t be so quick to assume that the great filter is in front of us.
I wouldn’t be so quick to assume that there is a filter at all—that is the much larger assumption.
It should be noted the “life before earth” paper is INFAMOUS amongst bioinformaticists for cherrypicking data to fit an exponential trend, having an incoherent conception of biological complexity, and generally not having anything to do with how evolution actually works. Reading it is PAINFUL.
I agree with that—I mean its main graph has only 5 datapoints.
Still—the general idea (even if poorly executed) is interesting and could be roughly correct—but showing it in the way they intend to will require much more sophisticated computable measures of biological complexity. Machine learning techniques—acting as general compressors—could eventually help with that.
But any measure of biological complexity you could care to generate can increase or decrease over evolutionary time. Looking at modern organisms doesn’t help you.
But any measure of biological complexity you could care to generate can increase or decrease over evolutionary time.
Only at high frequencies. But at a more general level we have strong reasons to believe that the basic form of the argument is correct—that the overall complexity of the terrestrial biome has generally increased over the course of history from the origin of life up to today. Computational models of evolution more than suggest this—it is almost a given.
The problem of course is in actually quantifying the biome complexity—using, say, Kolmogorov complexity (KC) type measures, which require sophisticated compression. In fact, one’s ability to compute the true KC measure is only achieved in the limit of perfect compression—which incidentally corresponds to perfect understanding of the data. But with more sophisticated compression we could perhaps approach or estimate that limit.
A useful approximate measure would need to consider the full set of DNA in existence across the biome at a certain point in time. Duplications and related transformations are obviously compressible, whereas handling noise-like variation is more of a challenge. One way to handle it is to consider random draws from the implied species-defining distribution. For a species with lots of high variance/noisy (junk) sequences, the high variance sections then become highly compressible because one only has to specify the aggregate distribution (such that draws from that distribution would implement the phenotype). At the limit a sequence which is completely unused and under no selection pressure wouldn’t contribute anything to the K-complexity.
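The compression idea can be illustrated with an off-the-shelf compressor standing in for the (much stronger) compressors the argument actually requires. The `complexity_proxy` helper below is a hypothetical name, and zlib is of course nowhere near a true KC estimator; it merely shows that duplicated sequences contribute far less to the measure than noise-like ones, as described above.

```python
import random
import zlib

def complexity_proxy(seq: str) -> int:
    # Compressed size in bytes as a crude upper bound on Kolmogorov complexity.
    # Real biome-scale measures would need far stronger compressors.
    return len(zlib.compress(seq.encode("ascii"), 9))

rng = random.Random(0)
duplicated = "ACGT" * 250                                   # 1,000 bases, highly redundant
noisy = "".join(rng.choice("ACGT") for _ in range(1000))    # 1,000 bases, noise-like
```

A duplicated sequence compresses to a small fraction of the size of a noise-like sequence of the same length, matching the point that duplications and related transformations are "obviously compressible" while variance-heavy sequences are the hard case.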
Any model/hypothesis which explains our observations as very rare events is intrinsically less likely than other models that explain our observations as typical events.
This is true for all cases where the observer is not noticeably entangled in a causal manner with the event they are trying to observe. Otherwise, the Observation Selection Effect can contribute false evidence. If we presumed that earth is typical, then there should also be life on Mars, and in most other solar systems. However, we wouldn’t ever have asked the question if we hadn’t evolved into intelligent life. The same thing that caused us to ask the question also caused the one blue-green data point that we have.
To illustrate: If you came across an island in the middle of the ocean, you might do well to speculate that such islands must be extremely common for you to come across one in the middle of the ocean. However, if you see smoke rising from beyond the horizon, and sail for days until finally reaching a volcanic island, you could not assign the same density to such volcanic islands as to ordinary islands. The same thing that caused you to observe the volcanic island also caused you to search for it in the first place. In the case of observable life, the Observation Selection Effect is much, much stronger because there’s no way we could conceivably have asked the question if we hadn’t come into existence somehow. P(life is common|life on earth)=P(life is common), because knowing that life did evolve on earth can’t give us Bayesian evidence for or against the hypothesis that life is common.
Observational Selection Effects due to the Simulation Argument
Some physical universes tend to produce tons of simulated universes containing observers such as ourselves.
This changes things, potentially. Everything I’ve said in previous posts has been conditional on the assumption that we don’t live in a simulation. If we do, it is likely that our universe roughly resembles the real universe in some aspects. Perhaps they are running a precise simulation based on reality, or perhaps they are running a simulation based on a small change to reality, as an experiment. However, the motives of such a civilization are difficult to predict with any accuracy, so I suspect that the vast majority of possible hypotheses are things we haven’t even thought of yet. (unknown unknowns.) So, although your specific hypothesis becomes more likely if we are in a simulation, so do all other possible hypotheses predicting large numbers of simulations.
Now on to the potential early filter stages:
(2) Oops. I should have specified huge amounts of liquid water in the inner solar system. Mars has icecaps, and some of Jupiter’s moons are ice-balls, possibly with a liquid center. Earth has rather a lot of water, despite being well inside the frost line. When the planets were forming from an accretion disc, the material close to the sun would have caused any available water to evaporate, for the same reason there isn’t much water on the moon (at least outside a couple craters on the poles, which are in continuous shadow). Far enough out, though, the sun’s heat is diffuse enough that ice is stable; hence the icy moons of Jupiter. The best hypothesis we have is that some mechanism transported a large amount of water to Earth after it formed, perhaps via comets or asteroids. It just occurred to me that this might have been during the late heavy bombardment, or it might be just another coincidence. As you point out regarding our large moon, complex systems can be expected to have many, many 1-in-100 coincidences, simply because of statistics.
(3) Panspermia / Abiogenesis: it sounds like “Life Before Earth” isn’t a mainstream consensus, based on a couple comments below. I do know, however, that mainstream biology does teach Panspermia alongside Abiogenesis, so neither of them appears to be a clear winner on the merits of the scientific evidence. I’m not even sure how to practically estimate their respective complexities, in order to use Occam’s Razor or Solomonoff Complexity to posit a reasonable prior. It would be nice to bound the problem enough to estimate the probabilities of both with sufficient accuracy to determine which is more likely. Until then, though, I guess we’ll have to leave it at 50/50.
Also, there’s a weird coincidence where the formation of the first life on earth seems to coincide well with the end of the late heavy bombardment,
That’s only a weird coincidence if one assumes abiogenesis on earth. Panspermia explains that ‘coincidence’ perfectly.
The late heavy bombardment coinciding with the start of life is only explained by panspermia if (1) the rocks came from outside the solar system, which is unlikely given the huge amount of material, or (2) the rocks brought life from another source within our own solar system. This could also be explained if life required the large influx of matter/energy/climate disturbance/heating or whatever, or if life was continuously wiped out by the harsh environment until it finally started flourishing when it ended.
(8) Good point about extinction events being an evolutionary catalyst. Aside from possibly generating the primordial soup for Abiogenesis, snowball earths may have catalyzed early advancements, and mammals wouldn’t have been able to supersede dinosaurs without a certain meteor.
(9) Perhaps “abstract thought” isn’t the perfect term to use, since it is common enough to have become vague instead of precise. The stress should be on the word “abstract”, not on the word “thought”. Chimps and many other animals do have simple language, although no complex grammar structures. They can’t abstract an arbitrary series of motions necessary to make or use a tool into language, and communicate it without showing it. Abstract language is most of what I’m referring to, but not all of it.
Language leads to technology which quickly leads to civilization and planetary dominance. It is a winner take all effect.
This is likely why Neanderthals went extinct, although we coexisted for quite a while. It still doesn’t explain why there aren’t octopus civilizations, since we haven’t changed that environment much until extremely recently. We haven’t evolved noticeably in hundreds of thousands of years, yet we only colonized the planet in the last ~16,000 years. If our colonization is the only thing holding back other potential intelligent life, we’d expect to see elephants and parrots at least at the stone-tool or fire level of technology. Why don’t octopuses hunt with spears or lobster traps?
I skipped over a lot of your good points, largely because I see them as correct. I still don’t buy the argument that life is common, though, although I’d be less confident in any such assertion in either direction if we were in a simulation, just because of the huge amount of uncertainty that adds to things.
The origin of life on earth being coincident with the end of the late heavy bombardment could entirely be an artifact of the fact that no rock from before that time survives to this day. It could well be older on Earth. The reworking of the crust was not complete at any given time, it took hundreds of megayears and at any given time most of the crust would be undisturbed.
Water in the inner system has the complication that you not only need to get water, you need to hold onto water. Small objects will not hold onto light molecules, out of sheer gravity issues. Mars is not holding onto water or other atmospheric gases well at all, because of both gravitational issues and solar radiation sputtering the upper atmosphere off into space. Venus has all the gravity it could need, but A—no geomagnetic field (at least at this point in its history), and B—it got so hot that all the water went into the atmosphere, where it gets cracked by radiation in the upper atmosphere and the hydrogen leaves (allowing the oxygen to react with volcanic gases, giving you sulfuric acid and phosphoric acid and the like). The same process happens on Earth, but the pool of atmospheric water is SO MUCH SMALLER—due to all the liquid volume, and the fact that we have this lovely temperature trap in our atmosphere that makes water condense out before it gets too high up—that the rate is extremely small.
I would say the only time you can call humans ‘dominant’ is after the widespread adoption of agriculture, which was much more gradual than many people think—people were probably propagating seedless figs 20k years ago and much longer ago were altering the composition of plants and animals in various biomes just via their actions. Since agriculture got big we have become ecosystem engineers in the vein of bears, but rather larger in our effects. We have been creating new large-scale-symbiotic biomes where plants and animals flow matter and energy into each other and where we take care of dispersal rather than the plants themselves doing as much of it, for example. That’s the unique aspect of humanity. Since then we have also started breaking into non-biological forms of energy—raw sunlight, water flow, the black rocks that are basically 500 megayears of stored sunlight—and have been using those for our purposes too in addition to the biological energy flow that all other biomes deal with.
It will be very interesting to see how human ecology continues to change after the extremely concentrated energy sources that represent most of the power we have used over the last 200 years go away. The end result might be very big but might not have the sheer flux of inefficient extractive growth—think weeds colonizing a freshly plowed field, versus an old-growth forest.
That is a very interesting question and one which there’s constant research going into.
A few initial points. First, its becoming clearer and clearer that ‘prokaryotes’ is a very poor grouping to use for much of anything. The bacteria most of us think of are the smaller, faster-replicating members of the eubacteria. There’s also the archaebacteria, which are deeply and fundamentally different from the eubacteria in their membrane composition, cell wall structure, DNA organization, and transcription machinery.
Second, it’s becoming more and more clear that the eukaryotes are indeed the result of an early union of eubacteria and archaebacteria. I saw some very cool research at a conference last December bolstering the “eocyte hypothesis”—the idea that the Eukaryotic nuclear genome roots in one particular spot of the archaebacterial tree, plus loads of horizontal gene transfer from the eubacteria that became the mitochondria. You can’t root it there just by aligning things—this was long enough ago that base sequence is effectively randomized. You need to look at what sorts of proteins exist—characters that change very rarely, as opposed to mere sequence—and it’s a very hard question that has required a LOT of sequence data from a LOT of organisms. Most of our DNA structure and transcription and some of our protein processing looks like the archaebacteria, but basically all of our metabolism looks like the eubacteria. This is interesting in the light of recent discoveries of symbiotic pairings of archaebacteria and eubacteria in nature in which they exchange metabolic products.
Anyways, the eubacteria and archaebacteria have deeply different transcription machinery and make their membranes in fundamentally different ways. Central carbon metabolism is all but identical, though, as are a lot of other pathways and the core biochemistry. I’ve seen work proposing that the eubacteria and archaebacteria may have diverged before living things managed to synthesize their own membrane components rather than scavenging them from the environment. I’ve also seen interesting work to the effect that certain clay minerals can assemble fatty acids and other such membrane-building substances from acetate under the proper energetic conditions.
There’s also a lot of diversity in DNA and RNA processing methods that isn’t in any of the cellular life – there are truly bizarre ways of doing this that you only find in viruses. Viruses mutate incredibly rapidly, so you cannot try to root them anywhere—they change too fast. That being said, there are proposals that they may be primordial: elements of the very wide range of possible nucleic acid processing mechanisms that existed before the current forms of cellular life really were established and took off. The eubacterial and archaebacterial models may have taken off, with remnants of the rest winding up parasitizing them.
Rampant horizontal transfer of genes, especially early when cell identity might not have been so strong, makes all this very complicated.
There’s a school of thought in origin of life research that autocatalytic metabolism was important, and another that replicating polymers were important. The former posits that metal-ion driven cyclical reactions like the citric acid cycle can take off and take over, and wind up producing lots of interesting chemical byproducts that can then capture it and become discrete self-replicating units. The latter points out that elongating polymers in membrane bubbles speed the growth and splitting of these bubbles. They’re both probably important. It should be noted too that these ideas intersect – one of the popular metabolic ideas, polyphosphate, is actually represented in our nucleic acids. Polyphosphate is an interesting substance that can be built up by the right chemical reactions, and can drive other ones when it breaks down. Every ATP, GTP, etc is a nice chemical handle on the end of a chain of three phosphates – a short polyphosphate. By breaking down those polyphosphates you build polymers.
Proteins obviously came very early and gave a huge advantage, and the genetic code is damn near universal, with all deviations from the standard one obviously coming in after the fact. Whatever could make proteins probably took over quickly. The initial frenzy, whatever it was, probably eventually led to a diverse population of compartments processing their nucleic acids in diverse ways and sending pieces of their codes back and forth, which eventually gained advantages by building their own membranes, and eventually cell walls, in different ways. Some of these populations probably took off like mad, making the eubacteria and archaebacteria, and others remained only as horizontally transferred elements like viruses or transposons or the like.
Written in a hurry, may be edited or clarified/extended later.
A clade of archaebacteria found via metagenomics at an undersea vent (uncultured). Contains huge numbers of eukaryotic characteristic genes that are important for formerly eukaryotic specific functions. The eukaryotes cluster within this clade rather than as a sister clade.
P(life is common|life on earth)=P(life is common), because knowing that life did evolve on earth can’t give us Bayesian evidence for or against the hypothesis that life is common.
That math is rather obviously wrong. You are so close here—just use Bayes.
We have 2 mutually exclusive models: life is common, and life is rare. To be more specific, lets say that the life is common theory posits that life is a 1 in 10 event, the life is rare theory posits that life is a 1 in a billion event.
Let’s say that our priors are P(life is common) = 0.09, and P(life is rare) = 0.91
Now, our observation history over this solar system tells us that life evolved on earth—and probably only complex life on earth, although there may be simple life on mars or some of the watery moons.
As we are just comparing two models, we can compare likelihoods
P(life is common | life on earth) ∝ P(life on earth | life is common) × P(life is common) = 0.1 × 0.09 = 0.009 ~ 10^-2
P(life is rare | life on earth) ∝ P(life on earth | life is rare) × P(life is rare) = 10^-9 × 0.91 ~ 10^-9
To convert to actual probabilities we would need to divide by P(life on earth), but that doesn’t really matter because it is a constant normalizing factor.
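The two-model comparison above is short enough to check directly. The numbers below are exactly the illustrative priors and likelihoods from the comment; the function name is just a label for this sketch.

```python
def posterior_odds(prior_common=0.09, prior_rare=0.91,
                   like_common=0.1, like_rare=1e-9):
    # Unnormalized posteriors for the two rival models given "life on earth",
    # using the illustrative numbers from the comment above.
    post_common = like_common * prior_common   # 0.1 * 0.09 = 0.009 ~ 10^-2
    post_rare = like_rare * prior_rare         # 1e-9 * 0.91  ~ 10^-9
    # Dividing by the common normalizer P(life on earth) cancels,
    # so the ratio of unnormalized posteriors is the posterior odds.
    return post_common / post_rare
```

The result is odds of roughly 10^7 in favor of "life is common", despite the heavy prior the other way, because the likelihood ratio of the observation is so lopsided.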
However, the motives of such a civilization are difficult to predict with any accuracy, so I suspect that the vast majority of possible hypotheses are things we haven’t even thought of yet. (unknown unknowns.) So, although your specific hypothesis becomes more likely if we are in a simulation, so do all other possible hypotheses predicting large numbers of simulations.
I agree with your general analysis here, although it is important to remember that the full hypothesis space is always infinite. For tractable inference, we focus on a small subset of the most promising theories/models.
When considering the wide space of potential simulators, we must focus on key abstractions. For example, we can focus on models in which advanced civs have convergent instrumental reasons for creating large numbers of simulations. I am currently aware of a couple of wide classes of models that predict lots of sims. Besides aliens simulating other aliens, our descendants could have strong motivations to simulate us—as a form of resurrection for example, in addition to the common motivator for improving world models. There is also the possibility of creating new artificial universes, in which case there may be interesting strong motivators to create lots of universes and lots of simulations as a precursor step.
(3) Panspermia / Abiogenesis: it sounds like “Life Before Earth” isn’t a mainstream consensus, based on a couple comments below.
No—that paper is not even really mainstream. I mentioned it as an example of the panspermia model and the resulting potentially expanded timeframe for the history of life. If life is really that old, then it becomes less likely that a single early elder civ colonized the galaxy early and dominated.
P(life is common|life on earth)=P(life is common), because knowing that life did evolve on earth can’t give us Bayesian evidence for or against the hypothesis that life is common.
That math is rather obviously wrong. You are so close here—just use Bayes.
Perhaps I should have used an approximately equal to symbol instead of an equals sign, to avoid confusion. And thanks for the detailed writeup. I would agree 100% if you substituted “planet X” for “earth”. Basically, I’m arguing that using ourselves as a data point is a form of the observational selection effect, just like survivorship bias.
Similarly, let’s suppose that we have a less discriminating test, mammography, that still has a 20% rate of false negatives, as in the original case. However, mammography has an 80% rate of false positives. In other words, a patient without breast cancer has an 80% chance of getting a false positive result on her mammography test. If we suppose the same 1% prior probability that a patient presenting herself for screening has breast cancer, what is the chance that a patient with positive mammography has cancer?
Group 1: 100 patients with breast cancer.
Group 2: 9,900 patients without breast cancer.
After mammography screening:
Group A: 80 patients with breast cancer and a “positive” mammography.
Group B: 20 patients with breast cancer and a “negative” mammography.
Group C: 7,920 patients without breast cancer and a “positive” mammography.
Group D: 1,980 patients without breast cancer and a “negative” mammography.
The result works out to 80 / 8,000, or 0.01. This is exactly the same as the 1% prior probability that a patient has breast cancer! A “positive” result on mammography doesn’t change the probability that a woman has breast cancer at all. You can similarly verify that a “negative” mammography also counts for nothing. And in fact it must be this way, because if mammography has an 80% hit rate for patients with breast cancer, and also an 80% rate of false positives for patients without breast cancer, then mammography is completely uncorrelated with breast cancer.
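The counting in the quoted example can be reproduced in a few lines. This mirrors the groups above exactly; `posterior_cancer` is just a name for this sketch.

```python
def posterior_cancer(prior=0.01, hit_rate=0.8, false_pos_rate=0.8, n=10_000):
    # Direct counting, mirroring the groups in the quoted example.
    with_cancer = prior * n                       # 100 patients
    without_cancer = n - with_cancer              # 9,900 patients
    true_pos = hit_rate * with_cancer             # 80    (group A)
    false_pos = false_pos_rate * without_cancer   # 7,920 (group C)
    return true_pos / (true_pos + false_pos)      # 80 / 8,000 = 0.01
```

Because the hit rate and false positive rate are equal, the posterior collapses back to the prior: the test is uncorrelated with the disease, which is precisely the point being quoted.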
In that example, the reason the posterior probability equals the prior probability is that the “test” isn’t causally linked with the cancer. You have to assume the same sort of thing for cases in which you are personally entangled. For example, if I watched my friend survive 100 rounds of solo Russian Roulette, then Bayes’ theorem would lead me to believe that there was a high probability that the gun was empty or only had 1 bullet. However, if I myself survived 100 rounds, I couldn’t afterward conclude a low probability, because there would be no conceivable way for me to observe anything but 100 wins. I can’t observe anything if I’m dead.
Does what I’m saying make sense? I’m not sure how else to put it. Are you arguing that Bayes’ theorem can still output good data even if you feed it skewed evidence? Or are you arguing that the evidence isn’t actually the result of survivorship bias/observation selection effect?
For example, if I watched my friend survive 100 rounds of solo Russian Roulette, then Bayes’ theorem would lead me to believe that there was a high probability that the gun was empty or only had 1 bullet. However, if I myself survived 100 rounds, I couldn’t afterward conclude a low probability, because there would be no conceivable way for me to observe anything but 100 wins. I can’t observe anything if I’m dead.
Obviously you can’t observe anything if you are dead, but that isn’t interesting. What matters is comparing the various hypotheses that could explain the events.
The case where you yourself survive 100 rounds is somewhat special only in that you presumably remember whether you put bullets in or not and thus already know the answer.
Pretend, however, that you suddenly wake up with total amnesia. There is a gun next to you, and a TV then shows a video of you playing 100 rounds of roulette and surviving—but doesn’t show anything before that (where the gun was either loaded or not).
What is the most likely explanation?
1. the gun was empty in the beginning
2. the gun had 1 bullet in the beginning
With high odds, option 1 is more likely. This survivorship bias/observation selection effect issue you keep bringing up is completely irrelevant when comparing two rival hypotheses that both explain the data!
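The amnesia version of the roulette scenario is an ordinary likelihood comparison. The 50/50 prior over "empty" vs "one bullet" is an assumption made for illustration, as is the function name.

```python
def p_gun_was_empty(prior_empty=0.5, rounds=100, chambers=6):
    # Posterior that the revolver was empty, given 100 observed survivals.
    # The 50/50 prior over "empty" vs "one bullet" is an assumption.
    p_survive_if_empty = 1.0
    p_survive_if_one = ((chambers - 1) / chambers) ** rounds   # (5/6)^100
    numerator = p_survive_if_empty * prior_empty
    denominator = numerator + p_survive_if_one * (1 - prior_empty)
    return numerator / denominator
```

Since (5/6)^100 is on the order of 10^-8, the posterior on "empty" is overwhelmingly close to 1—the "empty gun" hypothesis simply explains the observed survivals far better.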
Here is another, cleaner and simpler example:
Omega rolls a fair die which has N sides. Omega informs you the roll comes up as a ‘2’. Assume Omega is honest. Assume that dice can be either 10-sided or 100-sided, in about the same ratio.
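Worked out, the die example gives the same structure as the roulette one. Reading "in about the same ratio" as a 50/50 prior (an assumption for this sketch):

```python
def p_die_is_ten_sided(prior_ten=0.5):
    # Likelihood of rolling a '2' is 1/10 on a 10-sided die and 1/100 on a
    # 100-sided die; "about the same ratio" is read as a 50/50 prior.
    like_ten, like_hundred = 1 / 10, 1 / 100
    numerator = like_ten * prior_ten
    return numerator / (numerator + like_hundred * (1 - prior_ten))
```

The posterior is 10/11 ≈ 0.91 in favor of the 10-sided die: the hypothesis under which the observation was 10× more probable picks up the corresponding 10:1 odds.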
Panspermia / Abiogenesis is rare. (transport may be limited by radiation/mutations, while genesis of new life may require rare environments or energy sources) We have reasonable evidence that life could survive within rocks blasted off of a planet’s surface long enough to seed nearby planets, but not necessarily that life could survive the long voyage between nearby stars. We’ve demonstrated that most, but not all, essential amino acids can be generated under conditions similar to those of early Earth. Also, there’s a weird coincidence where the formation of the first life on earth seems to coincide well with the end of the late heavy bombardment, which might have created conditions conducive to the formation of life late enough after planetary formation that geological activity could settle down a bit. There doesn’t seem to be any reason why there should have been a second heavy bombardment period, though, so that may be unique to our solar system.
Either photosynthesis is rare, or the Oxygen Catastrophe generally kills off all species. (High concentrations of oxygen are highly poisonous, which caused a massive extinction event. Additionally, losing all that CO2 from the atmosphere cooled earth tremendously since the sun wasn’t so bright. This caused the longest Snowball Earth episode in the planet’s history, in which all the planet’s oceans froze solid and all the land was covered in one massive glacier.) It seems plausible that life on most worlds could never recover from this.
Prokaryotic life is common, but Eukaryotic life is rare. (It’s really hard to evolve a cell nucleus.) Eukaryotes only appeared about 2 billion years after Prokaryotes; halfway through the chain of evolution from the first life until today.
Eukaryotic life is common, but multicellular life is rare. We’ve only had it for ~500 million years.
Multicellular life is common, but complex life on land is rare. It’s possible that we could never have developed spines or crawled onto land, or that animal life itself might be rare. This seems much less plausible, since it seems to have sprung directly from the evolution of multicellular life, in a fairly spectacular explosion of complexity.
Complex life is common, but is regularly wiped out before it can become intelligent. There have been 5 big extinction events in Earth’s history, most recently the meteor that killed the dinosaurs. Although these weren’t enough to wipe out all life on Earth, there are several cosmic threats that could. These include collision with another planet or other sufficiently large object, which might be caused by orbital periods syncing up with Jupiter’s, or by passing stars or black holes. Additionally, gamma-ray bursts are extremely common, and might regularly wipe out all life in the inner galaxy, where stars are packed closer together. This would explain why we evolved out on the edge of a spiral arm of the Milky Way, and not closer to the galactic center.
Complex life is common, but intelligent life is rare. There seem to be a lot of somewhat intelligent creatures that aren’t closely related to us (parrots, octopuses, dolphins, etc.), and several animals even make limited use of tools. What does appear to be rare is the capacity for abstract thought. Chimps can learn from each other by copying, but have a hard time learning or teaching without demonstration. We’re also much better at learning by copying others, but we can additionally learn from abstract symbols written on a piece of paper. This appears to be a result of runaway evolution, where humans selected for mates with a high capacity for abstract thought, perhaps via a high capacity to predict others’ actions and plot accordingly.
Intelligent life is common, but technological civilizations are rare. We have had several steady-state conditions over our species’ history. We used the first simple stone tools ~2.3 million years ago, and tamed fire perhaps 1.5 million years ago. We haven’t evolved noticeably over the past 200,000 years, and yet we only developed agriculture and colonized the planet 10,000 years ago. Some of that may be due to the most recent ice age, but not all of it. We didn’t invent bronze or written language until 5,000 years ago. All the great advanced civilizations made relatively small advances in technology, and put their efforts into infrastructure rather than R&D. The only thing the Romans invented was concrete; everything else was an adaptation of ideas from other cultures. Western civilization is really the first culture to invest heavily in R&D, and we generally suck at it. Places like Silicon Valley are the exception to the rule.
Given all this, I wouldn’t be so quick to assume that the great filter is in front of us. All this must be weighed against the various existential risks. Nuclear war was a close call in the cold war, and the risk is an order of magnitude lower now, but is by no means gone. AI gets discussed a lot on here, but I don’t think biological warfare gets the attention it deserves. Our understanding of biology is growing rapidly, and I think it may one day be relatively easy for anyone to genetically engineer an unusually dangerous virus or pandemic. Additionally, advanced civilizations in general tend to only last on the order of a hundred years, according to this paper. That’s more or less in line with the Future of Humanity Institute’s informal Global Catastrophic Risk Survey. (The mean estimate for humanity’s chance of going extinct this century was on the order of 20%.) That said, Nick Bostrom himself appears to think that the great filter is more likely to lie behind us than ahead of us. To me, it seems like it could easily go either way, but since Bostrom has been researching this much longer than I have, I’m inclined to shift my probability estimate a bit further toward the great filter being behind us.
The above dealt primarily with the first half of your post, but let me also address the second half. You’ve assigned several probability estimates to various outcomes of our civilization:
Collapse/Extinction: “in the 1% to 50% range.” I’m inclined to agree with you on this one, as described in the last paragraph of my above post.
Biological/Mixed Civilization: “This scenario is almost not worth mentioning: prior < 1%” I think you’ve defined this a bit too narrowly. We don’t yet see any limiting factor for AI advancement besides physics, but that doesn’t mean that one won’t make itself apparent. Maybe this factor will turn out to be teraFLOPS (i.e., limited by Moore’s law) or energy (limited by our energy production capacity) or even matter (limited by the amount of rare earth elements necessary to make computronium). But it could also happen that we fail to make a superintelligence at all, or that AI eventually achieves most, but not all, of human mental abilities. The likelihood of a general intelligence increases asymptotically with time, but I think it would be a mistake to assume that it is increasing asymptotically toward 1. It could easily be approaching 0.8 or some other value which is hard to calculate. The existence of the human mind shows that consciousness can be built out of atoms, but not necessarily that it can be built out of a string of transistors, or that it is simple enough that we can ever understand it well enough to reproduce it in code. There’s also the existential risk of developing a flawed AI. We only have one shot at it, and the evidence seems to be against getting it right on the first try. I suspect that the supermajority of civilizations that develop AIs develop flawed ones. Even if 90% develop an AI before going to the stars, perhaps >99.9999% are wiped out by a poorly designed AI. This would lead to many more “Biological/Mixed Civilizations” than AI civilizations, if the flawed AIs tend to wipe themselves out or not to spread out into the universe.
PostBiological Warm-tech AI Civilization: “I assign a prior to the warm-tech scenario that is about the same as my estimate of the probability that the more advanced cold-tech (reversible quantum computing, described next) is impossible: < 10%.” This seems slightly low to me, but not by much. “This particular scenario is based on the assumption that energy is a key constraint, and that civilizations are essentially stellavores which harvest the energy of stars.” Although this state doesn’t follow from energy being a limiting factor (biological/mixed civilizations may also be energy limited), I agree that such a civilization would eventually become energy limited. I see two ways of solving this: better harvesting (Dyson swarms, since Dyson spheres are likely mass-limited) or a broader civilization (if it takes less energy to send a colony to the nearest star, then you do that before you start building a Dyson swarm).
From Warm-tech to Cold-tech: This seems to be where you are putting the majority of your probability mass. I’d probably put less, but that’s not actually my main contention. I don’t buy that this is sufficient reason to travel to the interstellar medium, away from such a ready energy and matter source as a solar system. You list 3 reasons: lower energy bit erasures, superconductivity, and quantum computer efficiency. Bit erasure costs seem like they would be more than made up for by a surplus of energy available from plentiful solar power, materials for fusion plants, etc. Only a few superconductors require temperatures below ~50 Kelvin, and you can get that anywhere perpetually shaded from the sun, such as the craters at the north and south poles of the moon (~30 Kelvin). If you want it somewhere else, stop an asteroid from spinning and build a computer on the dark side. I’m not sure that quantum computers need to be below that either. Anywhere you go, you’ll still be heated by cosmic microwave background radiation to ~4 K. Is an order of magnitude decrease in temperature really worth several orders of magnitude decrease in energy/matter harvesting ability? In order to expand exponentially, such a system would still need huge amounts of matter for superconductors and whatever else.
I should have pointed out that even a high probability of collapse is unlikely to act as a filter, because it has to be convergent and a single civ can colonize.
It is where I am putting most of my prior probability mass. There are three considerations:
Engineering considerations—the configurations which maximize computation are those where the computational mass is far from heat sources such as stars which limit computation. With reversible computing, energy is unlikely to be a constraint at all, and the best use of available mass probably involves ejecting the most valuable mass out of the system.
Stealth considerations—given no radical new physics, it appears that stealth is the only reliable way to protect a civ’s computational brains. Any civ hanging out near a star would be a sitting duck.
Simulation argument selection effects—discussed elsewhere, but basically the coldtech scenario tends to maximize the creation of simulations which produce observers such as ourselves.
After conditioning on observations of the galaxy to date, the coldtech scenario contains essentially all of the remaining probability mass. Of course, our understanding of physics is incomplete, and I didn’t have time to list all of the plausible models for future civs. There is the transcension scenario, which is related to my model of coldtech civs migrating away from the galactic disk.
One other little thing I may have forgotten to mention in the article: the distribution of dark matter is that of a halo, which is suspiciously close to what one would expect in the expulsion scenario, where elder civs are leaving the galaxy in directions away from the galactic disk. Of course, that effect is only relevant if a good chunk of the dark matter is usable for computation.
No—I should have elaborated on the model more, but the article was already long.
Given some planemo (asteroid, moon, planet, whatever) of mass M, we are concerned with maximizing the total quantity of computation in ops over the future that we can extract from that mass M.
If high tech reversible/quantum computing is possible, then the designs which maximize the total computation are all temperature limited, due to Landauer’s limit.
Now there are actually many constraints to consider. There is a structural constraint that even if your device creates no heat, there is a limit to the ops/s achievable by one molecular transistor—and this actually is also related to Landauer’s principle. Whether the computer is reversible or not, it still requires about 100 kT joules per reliable bitop—the difference is that the irreversible computer converts that energy into heat, whereas the reversible design recycles it.
If reversible/quantum computing is possible, then there is no competition—the reversible designs will scale to enormously higher computational densities (that would result in the equivalent of nuclear explosions if all of those bits were erased).
Temperature then becomes the last key thing you can optimize for, as the background temperature limits your effective cooling capability.
Well—assuming that really powerful reversible computing is possible, then the answer—rather obviously—is yes.
But again energy harvesting is only necessary if energy is a constraint, which it isn’t in the coldtech model.
Why not just build an inferior computer design that only achieves 10% of the maximum capacity? Intelligence requires computation. As long as there exists some reasonably low energy technique for ejecting from the solar system, it results in a large payoff multiplier. Of course you can still leave a bunch of stuff in the system, and perhaps even have a form of a supply line—although that could reduce stealth and add risk.
There is admittedly a lot of hand waving going on in this model. If I had more time I would develop a more accurate model focusing on some of the key unknowns.
One key variable is the maximum practical reversibility ratio, which is the ratio of bitops of computation per bitop erased. This determines the maximum efficiency gain from reversible computing. Physics doesn’t appear to have a hard limit for this variable, but there will probably be engineering limits.
For example, an advanced civ will at the very least want to store its observational data from its sensors in a compressed form, which implies erasing some minimal number of bits. But if you think about a big civ occupying a sphere, the input bits/s coming in from a few sparse sensor ports on the surface is going to be incredibly tiny compared to the bitop/s rate across the whole volume.
First, let me try to summarize your position formally. Please let me know if I’m misrepresenting anything. We seem to be talking past each other on a couple subtopics, and I thought this might help clear things up.
1 p(type III civilization in milky way) ≈ 1
1.1 p(reversible computing | type III civilization in milky way) ≈ .9
1.1.1 p(¬energy or mass limited | reversible computing) ≈ 1
1.1.1.1 p(interstellar space | ¬ energy or mass limited) is large
1.1.1.2 p(intergalactic space | ¬ energy or mass limited) is very large
1.1.1.3 p( (interstellar space ↓ intergalactic space) | ¬ energy or mass limited) ≈ 0
1.1.2 p(energy or mass limited | reversible computing) ≈ 0
1.2 p(¬reversible computing | type III civilization in milky way) ≈ .1
2 p(¬type III civilizations in milky way) ≈ 0
Note that 1.1.1.1 and 1.1.1.2 are not mutually exclusive, and that ↓ is the joint denial / NOR boolean logic operator. Personally, after talking with you about this and reading through the reversible computing Wikipedia article (which I found quite helpful), my estimates have shifted up significantly. I originally started to build my own sort of probability tree similar to the one above, but it quickly became quite complex. I think the two of us are starting out with radically different structures in our probability trees. I tend to presume that the future has many more unknown factors than known ones, and so is fundamentally extremely difficult to predict with any certainty, especially in the far future.
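For concreteness, here is how the tree above multiplies out under some placeholder point estimates. The numeric readings of “large” and “very large” are my assumptions for illustration, not values either of us committed to.

```python
# Rough numerical reading of the probability tree above.
p_type3 = 0.99          # 1: a type III civ exists in the Milky Way
p_rev = 0.9             # 1.1: reversible computing, given type III
p_unlimited = 0.99      # 1.1.1: not energy/mass limited, given reversible
p_interstellar = 0.8    # 1.1.1.1 (assumed reading of "large")
p_intergalactic = 0.95  # 1.1.1.2 (assumed reading of "very large")

# Joint probability of a coldtech civ hiding in interstellar space:
p_hidden_interstellar = p_type3 * p_rev * p_unlimited * p_interstellar
print(f"P(coldtech civ in interstellar space) ~ {p_hidden_interstellar:.2f}")

# 1.1.1.3: probability of NEITHER location (the NOR branch). The two
# location branches are not mutually exclusive, so under independence:
p_nor = (1 - p_interstellar) * (1 - p_intergalactic)
print(f"P(neither location | unconstrained) ~ {p_nor:.3f}")
```

Even with these deliberately generous placeholders, the NOR branch (1.1.1.3) comes out near zero, which matches the tree’s ≈ 0 entry.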
The only thing we know for sure is the laws of physics, so we can make some headway by presuming that one specific barrier is the primary limiting factor of an advanced civilization, and see what logical conclusions we can draw from there. That’s why I like your approach so much; before reading it I hadn’t really given much thought to civilizations limited primarily by things like Landauer’s limit rather than energy or raw materials. However, without knowing their utility function, it is difficult to know for sure what limits will be their biggest concern. It’s not even certain that such a civilization would have one single unified utility function, although it’s certainly likely.
If I were in the 18th century trying to predict what the 21st century would be like, even if I were a near-perfect rationalist, I would almost certainly get almost everything wrong. I would see limiting factors like transportation and food. From this, I might presume that massive numbers of canals, rather than the automobile, would address the need for trade. I would also presume that food limited population growth, and might hypothesize that once we ran out of land to grow food we would colonize the oceans with floating gardens. The 18th century notion of a type I civilization would probably be one that farmed the entire surface of a planet, rather than one that harvested all solar energy. The need for electricity was not apparent, and it wasn’t clear that the industrial revolution would radically increase crop yields. Perhaps fusion power will make electricity use a non-issue, or perhaps ColdTech will decrease demand to the point where it is a non-issue. These are both reasonably likely hypotheses in a huge, mostly unexplored hypothesis space.
But let’s get to the substance of the matter.
1 and 2: I tried to argue for a substantially lower p value here, and I see that you responded, so I’ll answer on that fork instead. This comment is likely to be long enough as is. :)
1.1 and 1.2: I definitely agree with you that a sufficiently advanced civilization would probably have ColdTech, but among many, many other technologies. It’s likely to be a large fraction of the mass of all their infrastructure, but I’m not sure if it would be a super-majority. This would depend to a large degree on unknown unknowns.
1.1.1 and 1.1.2: I’m inclined to agree with you that ColdTech technology itself isn’t particularly mass or energy limited. You had this to say:
I would still think that manufacturing and ejecting ColdTech is likely to be extremely mass and energy intensive. If the civilization expands exponentially limited only by their available resources, the observable effects would look much like other forms of advanced civilizations. Are you arguing that they would stay quite small for the sake of stealth? If so, wouldn’t it still make sense to spread out as much as possible, via as many independent production sites as possible? You touch on this briefly:
I don’t see any reason not to just keep sending material out in different directions. Perhaps this is the underlying assumption that caused us to disagree, since I didn’t make the distinction between manufacturing being mass/energy limited and the actual computation being mass/energy limited. When you say that such a civilization isn’t mass/energy limited, are you referring to just the ColdTech, or the production too?
It seems like you could just have the ejected raw materials/ColdTech perform a course correction and a series of gravity assists based on the output from a random number generator, once they were out of observational distance from the origin system. This would ensure that no hostile forces could determine their location by finding the production facility still active. Instead of a handful of hidden colonies, you could turn a sizable fraction of a solar system’s mass, or even a galaxy’s mass, into computronium.
Hmm, I’m not sure what to make of your probability tree yet... but in general I don’t assign such high probabilities to any of these models/propositions. Also, I’m not sure what a type III civilization is supposed to translate to in the cold dark models that are temperature constrained rather than energy constrained. I guess you are using that to indicate how much of the galaxy’s usable computronium mass is colonized?
It is probably unlikely that even a fully colonized galaxy would have a very high computronium ratio: most of the mass is probably low value and not worth bothering with.
Thanks. I like your analogies with food and other early resources. Energy is so fundamental that it will probably always constrain many actions (construction still requires energy, for example), but it isn’t the only constraint, and not necessarily the key constraint for computation.
Yes—agreed. (I am now realizing ColdTech really needs a better name)
No, the observable effects vary considerably based on the assumed technology. Let’s compare three models: stellavore, BHE (black hole entity) transcension, and CD (cold dark) arcilects.
The stellavore model predicts that civs will create Dyson spheres, which should be observable during the long construction period and may be observable afterwards. John Smart’s transcension model predicts black hole entities arising in or near stellar systems (although we could combine that with ejection, I suppose). The CD arcilect model predicts that civs will cool down some of the planemos in their systems, possibly eject some of those planemos, and then also colonize any suitable nomads.
Each theory predicts a different set of observables. The stellavore model doesn’t appear to match our observations all that well. The other two seem to match, although they are also just harder to detect, but there are some key things we could look for.
For my CD arcilect model, we already have some evidence for a large number of nomads. Perhaps there is a way to distinguish between artificial and natural ejections. Perhaps the natural pattern is that ejections tend to occur early in system formation, whereas artificial ejections occur much later. Perhaps we could even get lucky and detect an unusually cold planemo with microlensing. Better modelling of the dark matter halos may reveal a match between ejection models for at least a baryonic component of the halo.
For the CDA model stars become somewhat wasteful, which suggests that civs may favour artificial supernovas if such a thing is practical. At the moment I don’t see how one could get the energy/mass to do such a thing.
Those are just some quick ideas, I haven’t really looked into it all that much.
No, I agree that civilizations will tend to expand and colonize, and yes stealth considerations shouldn’t prevent this.
Thinking about it a little more, I agree. And yes when I mention not being energy constrained, that was in reference only to computation, not construction. I assume efficient construction is typically in place, using solar or fusion or whatever.
Yes, this seems to be on the right track. However, the orbits of planetary bodies are very predictable and gravity assists are reversible operations (I think), which seems to imply that the remaining objects in the system will contain history sufficient for predicting the ejection trajectory (for a rival superintelligence). You can erase the history only by creating heat … so maybe you end up sending some objects into the sun? :) Yes actually that seems pretty doable.
Thanks for writing this up, I’ll add a direct link from the main article under the historical model/early filter section.
Yes. The article was already probably too long, and I wanted to focus on the future predictive parts of the model.
Before responding to some of your specific points, I will focus on a couple of key big picture insights that favor “lots of aliens” over any filter at all.
Bayesian Model Selection.
Any model/hypothesis which explains our observations as very rare events is intrinsically less likely than other models that explain our observations as typical events. This is just a simple consequence of Bayesian inference/Solomonoff induction. A very rare event model is one which has a low P(E|H), which it must overcome with a high prior P(H) to defeat other hypothesis classes which explain the observations as typical (high probability) outcomes.
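A toy numerical illustration of this point (all numbers invented): a rare-event model whose likelihood for our observations is 10^-6 needs a prior roughly half a million times larger than its rival just to break even.

```python
# Posterior odds of a "rare event" model vs. a "typical event" model.
def posterior_odds(prior_rare, prior_typical, lik_rare, lik_typical):
    """Bayes: posterior odds = prior odds * likelihood ratio."""
    return (prior_rare * lik_rare) / (prior_typical * lik_typical)

# Rare-earth model: treats life like ours as a 1-in-a-million fluke.
# Lots-of-aliens model: treats life like ours as typical (p ~ 0.5).
odds = posterior_odds(prior_rare=0.5, prior_typical=0.5,
                      lik_rare=1e-6, lik_typical=0.5)
print(f"Posterior odds (rare : typical) = {odds:.1e}")
```

With equal priors the typical-event model comes out 500,000 times more probable, which is the sense in which this is “close to” a knockdown argument.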
This is not quite a knockdown argument against the entire class of rare earth models, but it is close.
Observational Selection Effects due to the Simulation Argument
Some physical universes tend to produce tons of simulated universes containing observers such as ourselves. This acts as a very large probability multiplier that strongly favors models which produce tons of simulations. The class of models I propose, where there are 1) lots of aliens and 2) strong motivations to simulate the history of other alien civs, are exactly the types of conditions that maximize the creation of simulations and observers.
Now on to the potential early filter stages:
(1. Habitable stars are abundant (20 to 40 billion suitable candidates in the GHZ of our galaxy)
(2. Habitable planets are rare/abundant. Water is common—Mars and many other bodies in our system have significant amounts of water.
This is true. Our moon is unusual compared to the moons of other planets we can see. However, from the evidence in our system we can only conclude that our moon is roughly a 1 in 100 or 1 in 1000 event, not a 1 in a billion event. Even so, it is not at all clear that a moon like ours is necessary for life. There are many other means to the same end.
Even if our planet is a typical draw, it is likely to be an outlier in at least a few dimensions.
(3. Panspermia / Abiogenesis
Recent evidence seems to favor panspermia. For example—see the “Life Before Earth” paper and related.
That’s only a weird coincidence if one assumes abiogenesis on earth. Panspermia explains that ‘coincidence’ perfectly.
(5. Prokaryotic → Eukaryotic
(6. Multicellular
(7. “Complex Land Life”
Again any model that explains these evolutionary developments as rare events is intrinsically less likely than models which explain the developments as likely events. Systemic evolutionary theory—especially its computational and complexity theory variants—explains how variation and selection over time inevitably and automatically explores the genetic search space and moves through a series of attractors of escalating complexity. The events you describe are not rare—they are the equivalent of the main sequence for biology.
(8. Complex life is common, but is regularly wiped out before it can become intelligent.
Of all your points, I think this one is perhaps the most important. Large extinctions have also acted as key evolutionary catalysts, so the issue is somewhat more complex. To understand this issue in more detail, we should build galaxy simulations which model the distribution of these events. This would give us a better understanding of the variance in evolutionary timescales, which could give us a better idea concerning the predicted distribution over the age of civilizations. On worlds that have too many extinction events, life is wiped out. On worlds that have too few, life gets stuck. We can observe only that on our world the exact sequence of extinction events resulted in a path from bacteria to humans that took about 4 billion years. It is intrinsically unlikely that our exact sequence was somehow optimal for the speed of evolution, and other worlds could have evolved faster.
(9. Complex life is common, but intelligent life is rare.
I addressed this point specifically. Chimpanzees have about 5 billion cortical neurons, elephants have a little more, some whales/dolphins are comparable. All 3 creatures display comparable very high levels of intelligence. Chimpanzees are very similar to the last common ancestor between ourselves and other primates—essentially they are right on the cusp of evolving into techno-cultural intelligence. So complex intelligence evolved in parallel in 3 widely separated lineages.
This is actually some of the strongest evidence against an early filter—as it indicates that the trajectory towards high intelligence is a strong attractor.
This is basically nonsense unless you define ‘abstract thought’ as ‘human language’. Yes language (and more specifically complex lengthy cultural education—as feral humans do not have abstract thought in the way we do) is the key to human ‘abstract thought’. However, elephants and chimpanzees (and perhaps some cetaceans) are right on the cusp of being able to learn language. The upper range of their language learning ability comes close to the lower range of our language learning ability.
If you haven’t seen it yet, I highly recommend the movie “Project Nim”, which concerns an experiment in the 1970s with attempting to raise a chimp like a human, using sign language.
In short, chimpanzee brains are very much like our own, but with a few differences in some basic key variables (tweaks). Our brains are both larger and tuned for slower development (neoteny). A chimpanzee actually becomes socially intelligent much faster than a human child, but the chimp’s intelligence also peaks much earlier. Chimps need to be able to survive on their own much earlier than humans. Our intelligence is deeper and develops much more slowly, tuned for a longer lifespan in a more complex social environment.
The reason that we are the only species to evolve language/technology is simple. Language leads to technology which quickly leads to civilization and planetary dominance. It is a winner take all effect.
(10. Technological civilization
Once you have language, technology and civilization follows with high likelihood.
Hunter-gatherers expanded across the globe and lived an easy life, hunting big dumb game until such game became rare, went extinct, or adapted defenses. This led to a large extinction of the megafauna about 10,000 years ago, and agriculture then followed naturally once the easy hunting life became too hard.
Follows directly from agriculture leading to larger populations and warring city-states.
I wouldn’t be so quick to assume that there is a filter at all—that is the much larger assumption.
It should be noted the “life before earth” paper is INFAMOUS amongst bioinformaticists for cherrypicking data to fit an exponential trend, having an incoherent conception of biological complexity, and generally not having anything to do with how evolution actually works. Reading it is PAINFUL.
It truly is ‘not even wrong’.
I agree with that—I mean its main graph has only 5 datapoints.
Still—the general idea (even if poorly executed) is interesting and could be roughly correct—but showing it in the way they intend to will require much more sophisticated computable measures of biological complexity. Machine learning techniques—acting as general compressors—could eventually help with that.
But any measure of biological complexity you could care to generate can increase or decrease over evolutionary time. Looking at modern organisms doesn’t help you.
Only at high frequencies. But at a more general level we have strong reasons to believe that the basic form of the argument is correct—that the overall complexity of the terrestrial biome has generally increased over the course of history from the origin of life up to today. Computational models of evolution more than suggest this—it is almost a given.
The problem of course is in actually quantifying the biome complexity—using, say, KC-type measures, which require sophisticated compression. In fact, one’s ability to compute the true KC measure is only achieved in the limit of perfect compression—which incidentally corresponds to perfect understanding of the data! But with more sophisticated compression we could perhaps approach or estimate that limit.
A useful approximate measure would need to consider the full set of DNA in existence across the biome at a certain point in time. Duplications and related transformations are obviously compressible, whereas handling noise-like variation is more of a challenge. One way to handle it is to consider random draws from the implied species-defining distribution. For a species with lots of high variance/noisy (junk) sequences, the high variance sections then become highly compressible because one only has to specify the aggregate distribution (such that draws from that distribution would implement the phenotype). At the limit a sequence which is completely unused and under no selection pressure wouldn’t contribute anything to the K-complexity.
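As a proof of concept for the compression approach, here is a minimal sketch using zlib as a crude stand-in for the sophisticated compressor a real KC estimate would need. The toy “genomes” are invented for illustration.

```python
import random
import zlib

def approx_complexity(data: bytes) -> int:
    """Crude upper bound on Kolmogorov complexity: compressed size in bytes."""
    return len(zlib.compress(data, 9))

random.seed(0)
bases = b"ACGT"
gene = bytes(random.choice(bases) for _ in range(1000))

# Duplications and related transformations are obviously compressible...
duplicated = gene * 10
# ...whereas noise-like variation of the same length is much less so.
noisy = bytes(random.choice(bases) for _ in range(10000))

print(approx_complexity(duplicated), approx_complexity(noisy))
```

The duplicated sequence collapses to nearly the size of the single original gene, while the noise-like sequence stays close to its entropy limit; that gap is the qualitative behavior the aggregate-distribution trick above is meant to formalize for real variation.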
This is true for all cases where the observer is not noticeably entangled in a causal manner with the event they are trying to observe. Otherwise, the Observation Selection Effect can contribute false evidence. If we presumed that earth is typical, then there should also be life on Mars, and in most other solar systems. However, we wouldn’t ever have asked the question if we hadn’t evolved into intelligent life. The same thing that caused us to ask the question also caused the one blue-green data point that we have.
To illustrate: If you came across an island in the middle of the ocean, you might do well to speculate that such islands must be extremely common for you to come across one in the middle of the ocean. However, if you see smoke rising from beyond the horizon, and sail for days until finally reaching a volcanic island, you could not assign the same density to such volcanic islands as to ordinary islands. The same thing that caused you to observe the volcanic island also caused you to search for it in the first place. In the case of observable life, the Observation Selection Effect is much, much stronger because there’s no way we could conceivably have asked the question if we hadn’t come into existence somehow. P(life is common|life on earth)=P(life is common), because knowing that life did evolve on earth can’t give us Bayesian evidence for or against the hypothesis that life is common.
This changes things, potentially. Everything I’ve said in previous posts has been conditional on the assumption that we don’t live in a simulation. If we do, it is likely that our universe roughly resembles the real universe in some respects. Perhaps they are running a precise simulation based on reality, or perhaps they are running a simulation based on a small change to reality, as an experiment. However, the motives of such a civilization are difficult to predict with any accuracy, so I suspect that the vast majority of possible hypotheses are things we haven’t even thought of yet (unknown unknowns). So, although your specific hypothesis becomes more likely if we are in a simulation, so do all other possible hypotheses predicting large numbers of simulations.
(2) Oops. I should have specified huge amounts of liquid water in the inner solar system. Mars has icecaps, and some of Jupiter’s moons are ice-balls, possibly with a liquid center. Earth has rather a lot of water, despite being well inside the frost line. When the planets were forming from the accretion disc, the heat close to the sun would have caused any available water to evaporate, for the same reason there isn’t much water on the moon (at least outside a couple of craters at the poles, which are in continuous shadow). Far enough out, though, the sun’s heat is diffuse enough that ice is stable; hence the icy moons of Jupiter. The best hypothesis we have is that some mechanism transported a large amount of water to Earth after it formed, perhaps via comets or asteroids. It just occurred to me that this might have happened during the late heavy bombardment, or it might be just another coincidence. As you point out regarding our large moon, complex systems can be expected to have many, many 1-in-100 coincidences, simply because of statistics.
(3) Panspermia / Abiogenesis: it sounds like “Life Before Earth” isn’t a mainstream consensus, based on a couple comments below. I do know, however, that mainstream biology does teach panspermia alongside abiogenesis, so neither of them appears to be a clear winner on the merits of the scientific evidence. I’m not even sure how to practically estimate their respective complexities, in order to use Occam’s Razor or Solomonoff induction to posit a reasonable prior. It would be nice to bound the problem enough to estimate the probabilities of both with sufficient accuracy to determine which is more likely. Until then, though, I guess we’ll have to leave it at 50/50.
The late heavy bombardment coinciding with the start of life is only explained by panspermia if (1) the rocks came from outside the solar system, which is unlikely given the huge amount of material, or (2) the rocks brought life from another source within our own solar system. This could also be explained if life required the large influx of matter/energy/climate disturbance/heating or whatever, or if life was continuously wiped out by the harsh environment until it finally started flourishing when it ended.
(8) Good point about extinction events being an evolutionary catalyst. Aside from possibly generating the primordial soup for Abiogenesis, snowball earths may have catalyzed early advancements, and mammals wouldn’t have been able to supersede dinosaurs without a certain meteor.
(9) Perhaps “abstract thought” isn’t the perfect term to use, since it is common enough to have become vague instead of precise. The stress should be on the word “abstract”, not on the word “thought”. Chimps and many other animals do have simple language, although no complex grammar structures. They can’t abstract an arbitrary series of motions necessary to make or use a tool into language, and communicate it without showing it. Abstract language is most of what I’m referring to, but not all of it.
This is likely why neanderthals went extinct, although we coexisted for quite a while. It still doesn’t explain why there aren’t octopus civilizations, since we haven’t changed that environment much until extremely recently. We haven’t evolved noticeably in hundreds of thousands of years, but didn’t colonize the planet until the last ~16,000 years. If our colonization is the only thing holding back other potential intelligent life, we’d expect to see elephants and parrots at least at the stone-tool or fire level of technology. Why don’t octopuses hunt with spears or lobster traps?
I skipped over a lot of your good points, largely because I see them as correct. I still don’t buy the argument that life is common, though, although I’d be less confident in any assertion in either direction if we were in a simulation, just because of the huge amount of uncertainty that adds to things.
The origin of life on earth being coincident with the end of the late heavy bombardment could entirely be an artifact of the fact that no rock from before that time survives to this day. Life could well be older on Earth. The reworking of the crust was not complete at any given time; it took hundreds of megayears, and at any given moment most of the crust would be undisturbed.
Water in the inner system has the complication that not only do you need to get water, you need to hold onto water. Small objects will not hold onto light molecules, out of sheer gravity issues. Mars is not holding onto water or other atmospheric gases well at all, because of both gravitational issues and solar radiation sputtering the upper atmosphere off into space. Venus has all the gravity it could need, but A—it has no geomagnetic field (at least at this point in its history) and B—it got so hot that all the water went into the atmosphere, where it gets cracked by radiation in the upper atmosphere and the hydrogen leaves (allowing the oxygen to react with volcanic gases, giving you sulfuric acid and phosphoric acid and the like). The same process happens on Earth, but the pool of atmospheric water is SO MUCH SMALLER (thanks to all the liquid volume, and the lovely temperature trap in our atmosphere that makes water condense out before it gets too high up) that the rate of loss is extremely small.
I would say the only time you can call humans ‘dominant’ is after the widespread adoption of agriculture, which was much more gradual than many people think—people were probably propagating seedless figs 20k years ago, and much longer ago were altering the composition of plants and animals in various biomes just via their actions. Since agriculture got big we have become ecosystem engineers in the vein of bears, but rather larger in our effects. We have been creating new large-scale symbiotic biomes where plants and animals flow matter and energy into each other, and where we take care of dispersal rather than the plants doing as much of it themselves, for example. That’s the unique aspect of humanity. Since then we have also started breaking into non-biological forms of energy—raw sunlight, water flow, the black rocks that are basically 500 megayears of stored sunlight—and have been using those for our purposes too, in addition to the biological energy flow that all other biomes deal with.
It will be very interesting to see how human ecology continues to change after the extremely concentrated energy sources that represent most of the power we have used over the last 200 years go away. The end result might be very big but might not have the sheer flux of inefficient extractive growth—think weeds colonizing a freshly plowed field, versus an old-growth forest.
Dear CellBioGuy, what is your intuition on what preceded prokaryotes?
That is a very interesting question and one which there’s constant research going into.
A few initial points. First, it’s becoming clearer and clearer that ‘prokaryotes’ is a very poor grouping to use for much of anything. The bacteria most of us think of are the smaller, faster-replicating members of the eubacteria. There’s also the archaebacteria, which are deeply and fundamentally different from the eubacteria in their membrane composition, cell wall structure, DNA organization, and transcription machinery.
Second, it’s becoming more and more clear that the eukaryotes are indeed the result of an early union of eubacteria and archaebacteria. I saw some very cool research at a conference last December bolstering the “eocyte hypothesis”—the idea that the eukaryotic nuclear genome roots in one particular spot of the archaebacterial tree, plus loads of horizontal gene transfer from the eubacteria that became the mitochondria. You can’t root it there just by aligning things; this was long enough ago that base sequence is effectively randomized. You need to look at what sorts of proteins exist (characters that change very rarely, as opposed to mere sequence), and it’s a very hard question that has required a LOT of sequence data from a LOT of organisms. Most of our DNA structure and transcription, and some of our protein processing, looks like the archaebacteria, but basically all of our metabolism looks like the eubacteria. This is interesting in the light of recent discoveries of symbiotic pairings of archaebacteria and eubacteria in nature, in which they exchange metabolic products.
Anyways, the eubacteria and archaebacteria have deeply different transcription machinery and make their membranes in fundamentally different ways. Central carbon metabolism is all but identical, though, as are a lot of other pathways and the core biochemistry. I’ve seen work proposing that the eubacteria and archaebacteria may have diverged before living things managed to synthesize their own membrane components, rather than scavenging them from the environment. I’ve also seen interesting work to the effect that certain clay minerals can assemble fatty acids and other such membrane-building substances from acetate under the proper energetic conditions.
There’s also a lot of diversity in DNA and RNA processing methods that isn’t found in any cellular life – there are truly bizarre ways of doing this that you only find in viruses. Viruses mutate incredibly rapidly, so you cannot root them anywhere; they change too fast. That being said, there are proposals that they may be primordial: elements of the very wide range of possible nucleic acid processing mechanisms that existed before the current forms of cellular life were established and took off. The eubacterial and archaebacterial models may have taken off, with remnants of the rest winding up parasitizing them.
Rampant horizontal transfer of genes, especially early when cell identity might not have been so strong, makes all this very complicated.
There’s a school of thought in origin-of-life research that autocatalytic metabolism was important, and another that replicating polymers were important. The former posits that metal-ion-driven cyclical reactions like the citric acid cycle can take off and take over, and wind up producing lots of interesting chemical byproducts that can then capture the metabolism and become discrete self-replicating units. The latter points out that elongating polymers in membrane bubbles speed the growth and splitting of those bubbles. They’re both probably important. It should be noted too that these ideas intersect – one of the popular metabolic ideas, polyphosphate, is actually represented in our nucleic acids. Polyphosphate is an interesting substance that can be built up by the right chemical reactions, and can drive other ones when it breaks down. Every ATP, GTP, etc. is a nice chemical handle on the end of a chain of three phosphates – a short polyphosphate. By breaking down those polyphosphates you build polymers.
Proteins obviously came very early and gave a huge advantage, and the genetic code is damn near universal, with all deviations from the standard one obviously coming in after the fact. Whatever could make proteins probably took over quickly. The initial frenzy, whatever it was, probably eventually led to a diverse population of compartments processing their nucleic acids in diverse ways and sending pieces of their codes back and forth, which eventually gained advantages by building their own membranes, and eventually cell walls, in different ways. Some of these populations probably took off like mad, making the eubacteria and archaebacteria, and others remained only as horizontally transferred elements like viruses or transposons or the like.
Written in a hurry, may be edited or clarified/extended later.
Of interest!
Even more recent evidence for the eocyte hypothesis!
http://www.the-scientist.com/?articles.view/articleNo/42902/title/Prokaryotic-Microbes-with-Eukaryote-like-Genes-Found/
A clade of archaebacteria found via metagenomics at an undersea vent (uncultured). It contains huge numbers of characteristically eukaryotic genes that are important for formerly eukaryote-specific functions. The eukaryotes cluster within this clade rather than as a sister clade.
That math is rather obviously wrong. You are so close here—just use Bayes.
We have 2 mutually exclusive models: life is common, and life is rare. To be more specific, let’s say that the “life is common” model posits that life is a 1-in-10 event, and the “life is rare” model posits that life is a 1-in-a-billion event.
Let’s say that our priors are P(life is common) = 0.09, and P(life is rare) = 0.91
Now, our observation history over this solar system tells us that life evolved on earth—and probably only complex life on earth, although there may be simple life on mars or some of the watery moons.
As we are just comparing two models, we can compare unnormalized posteriors (likelihood times prior):
P(life is common | life on earth) ∝ P(life on earth | life is common) P(life is common) = 0.1 * 0.09 = 0.009 ~ 10^-2
P(life is rare | life on earth) ∝ P(life on earth | life is rare) P(life is rare) = 10^-9 * 0.91 ~ 10^-9
To convert to actual probabilities we would need to divide by P(life on earth), but that doesn’t really matter because it is a constant normalizing factor.
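The comparison above can be sketched in a few lines of code, using the same toy numbers (these priors and likelihoods are illustrative, not measured):

```python
# Toy numbers from the comment above: priors and per-planet likelihoods.
p_common, p_rare = 0.09, 0.91       # priors on the two models
lik_common, lik_rare = 0.1, 1e-9    # P(life on earth | model)

# Unnormalized posteriors: likelihood * prior (dropping the constant
# normalizing factor P(life on earth), which is the same for both).
post_common = lik_common * p_common   # 0.009
post_rare = lik_rare * p_rare         # ~9.1e-10

print(post_common / post_rare)  # ~1e7: "common" wins by about 7 orders of magnitude
```

Note that the observation "life on earth" moves the posterior odds seven orders of magnitude even though the prior favored "rare" ten to one.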
I agree with your general analysis here, although it is important to remember that the full hypothesis space is always infinite. For tractable inference, we focus on a small subset of the most promising theories/models.
When considering the wide space of potential simulators, we must focus on key abstractions. For example, we can focus on models in which advanced civs have convergent instrumental reasons for creating large numbers of simulations. I am currently aware of a couple of wide classes of models that predict lots of sims. Besides aliens simulating other aliens, our descendants could have strong motivations to simulate us—as a form of resurrection for example, in addition to the common motivator for improving world models. There is also the possibility of creating new artificial universes, in which case there may be interesting strong motivators to create lots of universes and lots of simulations as a precursor step.
No—that paper is not even really mainstream. I mentioned it as an example of the panspermia model and the resulting potentially expanded timeframe for the history of life. If life is really that old, then it becomes less likely that a single early elder civ colonized the galaxy early and dominated.
Perhaps I should have used an approximately equal to symbol instead of an equals sign, to avoid confusion. And thanks for the detailed writeup. I would agree 100% if you substituted “planet X” for “earth”. Basically, I’m arguing that using ourselves as a data point is a form of the observational selection effect, just like survivorship bias.
As for the math, I’ll pull an example from An Intuitive Explanation of Bayes’ Theorem:
In that example, the reason the posterior probability equals the prior probability is that the “test” isn’t causally linked with the cancer. You have to assume the same sort of thing for cases in which you are personally entangled. For example, if I watched my friend survive 100 rounds of solo Russian Roulette, then Bayes’ theorem would lead me to believe that there was a high probability that the gun was empty or only had 1 bullet. However, if I myself survived 100 rounds, I couldn’t afterward conclude a low probability, because there would be no conceivable way for me to observe anything but 100 wins. I can’t observe anything if I’m dead.
Does what I’m saying make sense? I’m not sure how else to put it. Are you arguing that Bayes’ theorem can still output good data even if you feed it skewed evidence? Or are you arguing that the evidence isn’t actually the result of survivorship bias/observation selection effect?
Obviously you can’t observe anything if you are dead, but that isn’t interesting. What matters is comparing the various hypotheses that could explain the events.
The case where you yourself survive 100 rounds is somewhat special only in that you presumably remember whether you put bullets in or not and thus already know the answer.
Pretend, however, that you suddenly wake up with total amnesia. There is a gun next to you, and a TV then shows a video of you playing 100 rounds of roulette and surviving—but doesn’t show anything before that (where the gun was either loaded or not).
What is the most likely explanation?
the gun was empty in the beginning
the gun had 1 bullet in the beginning
With high odds, option 1 is more likely. This survivorship bias/observation selection effect issue you keep bringing up is completely irrelevant when comparing two rival hypotheses that both explain the data!
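The amnesia scenario can be run through the same likelihood comparison. Assuming a 6-chamber revolver re-spun every round and equal priors on the two setups (both assumptions mine, for illustration):

```python
# Probability of surviving 100 rounds under each hypothesis
# (6-chamber revolver, cylinder re-spun every round — assumed).
p_survive_empty = 1.0 ** 100     # empty gun: survival is certain
p_survive_one = (5 / 6) ** 100   # one bullet in six chambers each round

odds = p_survive_empty / p_survive_one
print(odds)  # ~8e7: "empty" is tens of millions of times more likely
```

With equal priors, the posterior odds are just this likelihood ratio, which is why the survivorship worry drops out of the comparison.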
Here is another, cleaner and simpler example:
Omega rolls a fair die which has N sides. Omega informs you the roll comes up as a ‘2’. Assume Omega is honest. Assume that dice can be either 10-sided or 100-sided, in about the same ratio.
What is the more likely value of N?
100
10
Here is my solution:
priors (equal, unnormalized): P(N=100) = P(N=10) = 1
P(N=100 | roll(N) = 2) ∝ P(roll(N)=2 | N=100) P(N=100) = 0.01
P(N=10 | roll(N) = 2) ∝ P(roll(N)=2 | N=10) P(N=10) = 0.1
So N=10 is 10 times more likely than N=100.
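The same computation in code, with equal unnormalized priors as above:

```python
prior = 1.0                    # same unnormalized prior for both hypotheses
post_100 = (1 / 100) * prior   # P(roll = 2 | N = 100) * P(N = 100)
post_10 = (1 / 10) * prior     # P(roll = 2 | N = 10) * P(N = 10)
print(post_10 / post_100)      # ≈ 10: N = 10 is ten times more likely
```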