I will again try for a poll.
Where do you think the great filter most likely lies:
[pollid:766]
Can somebody explain to me why people generally assume that the great filter has a single cause? My gut says it’s most likely a dozen one-in-a-million chances that all have to turn out just right for intelligent life to colonize the universe. So the total chance would be (1/10^6)^12 = 10^-72. Yet everyone talks of a single ‘great filter’ and I don’t get why.
I don’t think anyone really assumes that.
Maybe not explicitly, but I keep seeing people refer to “the great filter” as if it was a single thing. But maybe you’re right and I’m reading too much into this.
Normal filters have multiple layers too—for example, first you can have the screen that keeps the large particles out. Then you have the activated charcoal, and then the paper to keep the charcoal out, and for high-end filters you finish off with a porous membrane.
And yet, it’s all one filter.
So… we’re speculating here on where the bulk of the work is being done. The strength could be evenly distributed from beginning to end, but it could also be lumpy.
From the OP:
The real filter could be a combination of an early one and a late one, of course. But, unless the factors are exquisitely well-balanced, it’s likely that there is one location in civilizational development where most of the filter lies (i.e. where the probability of getting to the next stage is the lowest).
That’s very non-obvious to me; I can’t see why there couldn’t be (say) a step with probability around 1e-12, one with probability around 1e-7, one with probability around 1e-5, one with probability around 1e-2, and a dozen ones with joint probability around 1e-1, so that no one step comprises the majority (logarithmically) of the filter.
Good point, but note that in your example, the top filter does a lot more than the runner-up. There could be a lot of value in knowing what that top one is, even if the others combined are more important. (But that wouldn’t answer the question, how much danger do we still face? Which may be the biggest question. So again—good point.)
It is. But even there, I’m under the impression that many people are focussing on the answers “most of it” and “hardly any”, neglecting everything in between.
A few possible hypotheses (where by “supercomputers” I mean ‘computation capabilities comparable to those available to people on Earth in 2014’):
1. Nearly all of the filter ahead. A sizeable fraction of all star systems have civilizations with supercomputers, and about one in 1e24 of them will take over their future light cone.
2. Most of the filter ahead. There are about a million civilizations with supercomputers per galaxy on average, and about one in 1e18 of them will take over their future light cone.
3. Halfway through the filter. There is about one civilization with supercomputers per galaxy on average, and about one in 1e12 of them will take over their future light cone.
4. Most of the filter behind. There is about one civilization with supercomputers per million galaxies on average, and about one in a million of them will take over their future light cone.
5. Nearly all of the filter behind. There have been few or no civilizations with supercomputers other than us in our past light cone, and we have a sizeable chance of taking over our future light cone.
ISTM people push too much of their probability mass near the ends and leave too little in the middle for some reason. (In particular, I’m under the impression that certain singularitarians unduly privilege hypothesis 5 just because they can’t imagine a reason why we will fail to take over the future light cone—which is way too Inside View for me—and they think the only thing left to know is whether the takeover will be Good or Bad.)
I think there’s pretty little practical difference between 3 and 4 (unless you value us taking over the future light cone at somewhere between 1e6 and 1e12, on some appropriate scale), and not terribly much between 1 and 4 either, so maybe people are conflating anything below 5 together and that’s what they actually mean when they sound like they say 1?
(Of course there’s 6, the planetarium hypothesis—someone in our past light cone has already taken over their future light cone, but they don’t want us to know for some reason or another—but I don’t think that more than 99.99% of possible superintelligences would give a crap about us inferior beings, so this can explain at most about one sixth of the filter. Just add “visibly” before “take over” everywhere above.)
BTW, I think that 1 is all but ruled out (and 2 is slightly disfavoured) by the failure of SETI (at least if you interpret the present tense to mean ‘as of the time their worldline intersects the surface of our past light cone’), and 5 is unlikely because of Moloch (if he’s scary enough to stop cancer from killing whales he’s totally likely to be scary enough to stop > 99.9% of civilizations with supercomputers from taking over the light cone).
My own probability distribution has a broad peak somewhere around 4 with a long-ish tail to the left.
From the article:
The real filter could be a combination of an early one and a late one, of course. But, unless the factors are exquisitely well-balanced, it’s likely that there is one location in civilizational development where most of the filter lies (i.e. where the probability of getting to the next stage is the lowest).
That doesn’t sound like it admits the possibility of twelve, independent, roughly equally balanced filters.
You’re being uncharitable. “[It’s] likely [that X]” doesn’t exclude the possibility of non-X.
If you know nothing about a probability distribution, it is more likely that it has one absolute maximum than more than one.
Maybe I am being uncharitable, but when Sophronius asks “[c]an somebody explain to me why people generally assume that the great filter has a single cause?” and you reply “I don’t think anyone really assumes that”, I have to admit that I’ve always seen people think of the Great Filter in terms of one main cause (e.g., look to the poll in this thread where people choose one particular cause), and not in terms of multiple causes.
Though, you’re right that no one has said that multiple causes is outright impossible. And you may be right that one main cause makes a lot more sense. But I do think Sophronius raises a question worth considering, at least a bit.
At least in other Less Wrong posts and comments on the topic, the question is usually presented probabilistically, as in “Does the bulk of the great filter lie ahead of us or behind us?”
Though, it is usually not specified whether the writer is asking where the greatest number of candidates gets ruled out, or the greatest fraction. Maybe there are ten factors that each eliminate 90% of candidates prior to the technological civilization level (leaving maybe a hundred near-type-I civilizations per galaxy), but one factor (AI? self-destruction by war or ecological collapse? Failing to colonize other worlds before a stray cosmic whatever destroys your homeworld?) takes out 99% of what is left. In that case nearly all the great filter would be behind us, and our odds still wouldn’t be good.
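A quick way to see the number-versus-fraction distinction is to compare each step’s contribution in linear terms and in decades (log10). Here is a minimal Python sketch using the hypothetical survival fractions from the paragraph above (ten steps passing 10% each, one late step passing 1%):

```python
import math

# Hypothetical numbers from the example above: ten early steps each let
# 10% of candidates through; one late step lets 1% through.
early_steps = [0.1] * 10
late_step = 0.01

total_early = math.prod(early_steps)   # 1e-10 of candidates survive the early steps
total = total_early * late_step        # 1e-12 survive overall

share_early = math.log10(total_early) / math.log10(total)
print(f"early steps: {share_early:.0%} of the filter, measured in decades")  # ~83%
print(f"late step:   {1 - share_early:.0%} of the filter, in decades")       # ~17%
# So nearly all of the filter is 'behind us' by the log measure, yet the
# late step still removes 99% of everyone who reaches it.
```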
I don’t have an answer, but here’s a guess: for any given pre-civilizational state, I imagine there are many filters. If we model these filters as having a kill rate, then my (unreliable stats) intuition tells me that a prior on the kill-rate distribution should be log-normal. I think this suggests that most of the killing happens on the left-most outlier, but someone better at stats should check my assumptions.
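That intuition is cheap to spot-check by simulation: draw each step’s filtering strength from an assumed log-normal prior and see how often the single harshest step accounts for most of the total (log) filtering. A rough sketch; the parameters (twelve steps, sigma = 1.5) are arbitrary choices of mine, not from the comment:

```python
import random

random.seed(0)
N_WORLDS, N_STEPS = 10_000, 12
dominant = 0
for _ in range(N_WORLDS):
    # Assumed prior: each step's strength, in decades of filtering
    # (-log10 of its survival rate), is drawn from a log-normal.
    decades = [random.lognormvariate(0.0, 1.5) for _ in range(N_STEPS)]
    if max(decades) > 0.5 * sum(decades):
        dominant += 1  # one step does most of the filtering
print(f"a single step dominates in {dominant / N_WORLDS:.0%} of draws")
```

Whether one step usually dominates turns out to depend heavily on the assumed sigma, which is exactly the kind of assumption worth checking.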
I’d liken it to a chemical reaction. Many reactions are multistep, and chemical processes in general take place over an extremely wide range of rates (from less than a billionth of a second to years). So, in an overall reaction, there are usually several steps, and the slowest one is usually orders of magnitude slower than any of the others; that one’s called the rate-determining step, for obvious reasons: it’s so much slower than the others that speeding up or slowing down the others even by a couple of orders of magnitude has a negligible effect on the overall rate of reaction. It’s pretty rare that more than one step happens to run at nearly the same rate, since the range of orders of magnitude is so large.
I think that the evolution of intelligence is a stochastic process that’s pretty similar to molecular kinetics in a lot of ways, particularly in that all of the above applies to it as well; thus, it’s more likely that there’s one rate-determining step, one Great Filter, for the same reasons.
However (and I made another post about this here too), I do think that the filters are interdependent (there are multiple pathways and it’s not a linear process, but progress along a multidimensional surface). That’s not really all that different from molecular kinetics either, though.
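The rate-determining-step claim is straightforward to sanity-check numerically: if step rates are spread over many orders of magnitude, the slowest step is usually far slower than the runner-up. A rough sketch; the uniform-in-log assumption and the specific ranges are mine, not the commenter’s:

```python
import random

random.seed(0)
TRIALS, N_STEPS, DECADES = 10_000, 6, 12  # assume 6 steps, rates spread over 12 decades
well_separated = 0
for _ in range(TRIALS):
    log_rates = sorted(random.uniform(0.0, DECADES) for _ in range(N_STEPS))
    # The slowest step 'determines the rate' if the runner-up is >= 10x faster.
    if log_rates[1] - log_rates[0] >= 1.0:
        well_separated += 1
print(f"slowest step is >=10x slower than the next in {well_separated / TRIALS:.0%} of trials")
```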
Interesting. However, I still don’t see why the filter would work similarly to a chemical reaction. Unless it’s a general law of statistics that any event is always far more likely to have a single primary cause, it seems like a strange assumption since they are such dissimilar things.
Sorry for the delayed response; I don’t come on here particularly often.
The assumptions I’m making are that evolution is a stochastic process in which elements are in fluxional states and there is some measure of ‘difficulty’ in transitioning from one state to another, an energetic or entropic barrier of sorts: to go from A to B (for example, from an organism with asexual reproduction to one with sexual reproduction), some confluence of factors must occur, and that occurrence has a certain likelihood that depends on the conditions of the whole system (the ecosystem). I think that this, combined with the large number of physical elements interacting (organisms), is enough to say that evolution is governed by something pretty similar to statistical thermodynamics.
So, from the Arrhenius equation,
k = A e^{-E_a/RT}
where k is the rate constant, A is the pre-exponential factor (roughly, how often the reacting components come together), E_a is the activation energy, or energy barrier, and RT is the gas constant multiplied by temperature.
The equation is mostly applied to chemistry, but it has also found uses in other fields, like predicting the geographic progression of the blooming of Sakura trees (http://en.wikipedia.org/wiki/Cherry_blossom_front). It really applies to any system that has certain kinetic properties.
So, ignoring all the chemistry-specific factors (like temperature), the relation in its most general form becomes
k = A e^{-BE}
This says essentially that the rate is proportional to a negative exponential of the barrier to the transformation, and small changes in the value of the barrier correspond to large changes in the value of the rate. Thus, it’s unlikely that two rates are similar. I don’t see why two unrelated things would be likely to have a similar barrier, and given this, they’re even less likely to have a similar rate.
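To put numbers on that sensitivity, here is a tiny sketch (A, B, and the barrier values are arbitrary illustrative units, not fitted to anything): modest changes in the barrier E swing the rate k = A·e^(-BE) by orders of magnitude.

```python
import math

A, B = 1.0, 1.0                      # arbitrary units
for E in (10.0, 12.0, 15.0, 20.0):   # barriers differing by modest amounts
    k = A * math.exp(-B * E)
    print(f"E = {E:>4}: k = {k:.3e}")
# E=10 gives k ~ 4.5e-05 while E=20 gives k ~ 2.1e-09: doubling the barrier
# cuts the rate by more than four orders of magnitude, so near-ties in
# rate between unrelated steps should indeed be rare.
```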
Pareto principle.
There also the Fermi formula.
I think you need to explain your meaning more
The Pareto principle is the 80-20 rule. Meaning: if there are 4 great filters with effectivenesses of 0.9, 0.9, 0.99, and 0.9, it will be the 0.99 filter that requires ten times the attention / luck and will get the name ‘Great’, even if there are many other hurdles adding to the end result.
This is a troubleshooting problem where we are still on the first stage of the diagnosis: identifying the largest contributor to the problem, the low-hanging fruit. The whole filter that prevents life on every planet is going to be a combination of many stacked smaller filters, but we want to find the most relevant filter, either the 0.999 filter or the locally relevant filter. This discussion is all part of that diagnostic process to find the most relevant single cause.
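To make the 0.9/0.99 example concrete, here is a minimal sketch comparing each filter’s pass rate in linear and log terms (the four effectiveness numbers are taken from the comment above):

```python
import math

# effectiveness = fraction of candidates stopped; pass rate = 1 - effectiveness
filters = [0.9, 0.9, 0.99, 0.9]
pass_rates = [1 - f for f in filters]   # [0.1, 0.1, 0.01, 0.1]
total_pass = math.prod(pass_rates)      # 1e-5 pass all four
for f, p in zip(filters, pass_rates):
    share = math.log10(p) / math.log10(total_pass)
    print(f"effectiveness {f:.2f}: pass rate {p:.2f}, {share:.0%} of the total (log) filter")
# The 0.99 filter is 10x harder to pass than any single 0.9 filter, and by
# the log measure it contributes 40% of the combined five-decade filter,
# the largest single share, which is why it would get the name 'Great'.
```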
To whoever voted for “Multi-cell life unlikely”: Multicellularity has evolved independently at least 46 times.
I voted for that, but my view is a bit more fine-grained. I think complex multicellular life with efficient energy is rare. My view is based on the fact that there was life about as soon as possible on Earth, but it took billions of years to get mitochondria and large, complex multicellular life. Once you got that, you had a very rapid explosion and tons of highly complex species within a few hundred million years, and multiple fairly intelligent species within a few hundred million after that. My assumption is that the step with the longest gap is the most unlikely piece.
Wait… don’t all eukaryotes have mitochondria, including unicellular ones? I think “Complex single cells unlikely” on the poll is a better fit to your position.
It is worth noting that there are numerous examples of endosymbiosis all over the tree of life and the mitochondrion and plastids aren’t the only ones, just the most successful and most ancient.
There are bacteria that have bacterial endosymbionts, and I’ve seen electron micrographs of large bacteria with strange uncharacterized archaea hanging off them like tassels. Some animals, mostly insects that drink plant sap, have endosymbiotic bacteria that have had their genomes stripped down to only 150 genes, cannot make their own cellular energy, and exist only to make essential amino acids so that the animal does not have to eat them. Large numbers of photosynthetic microbes have taken up eukaryotic green algae as second-order endosymbionts, or have even taken up such organisms as tertiary endosymbionts.
EDIT: here’s a chart of the bizarre known history of clades acquiring photosynthesis via known primary, secondary and tertiary endosymbiosis. By the time you are at tertiary endosymbiosis, the plastid has 6 nested membranes and may have some vestigial nuclei between some of them.
Yeah, somewhat. I think what I was getting at by complex multicellular life is that the Cambrian explosion http://en.m.wikipedia.org/wiki/Cambrian_explosion is rare.
It is worth noting that the Cambrian also coincided with the Earth coming out of an ice age that makes everything since look mild, as well as with the increase of atmospheric oxygen to levels within an order of magnitude or two of today’s.
It is also worth noting that there is equivocal evidence of multicellular life before the Cambrian that nobody quite agrees on the meaning of—things that look like worm-trails in seafloor sediment a billion years old (but that some think could be trails from the motion of giant protists), flat sheets with a distinct center and edge 2 gigayears old, and macroscopic curly fibers two or more gigayears old...
Why did it take 0.6 billion years to get from eukaryotic cells to simple multi-celled organisms?
Define “independently.”
I’m wary of calling cyanobacteria with their specialized nitrogen-fixer cells or the various colonial sporangia-forming bacteria ‘multicellular’. They do some hellacious cell specializing within colonies, and there are other bacteria that form chains between reducing and oxidizing environments with bacteria in the middle passing electrons back and forth so as to allow the whole colony to metabolize things one bacterium couldn’t, but it’s arguably a little different than what we think of when we think of a mushroom or an animal.
Eukaryotes have, however, evolved unambiguous multicellularity on many occasions.
In lineages not directly descended from one another (though they do have a common (unicellular) ancestor). According to the article I linked to, it even happened in prokaryotes, though “complex” multicellular organisms (whatever that means—I guess cells not only bound together but also specialized?) ‘only’ evolved six times among eukaryotes.
It is possible that eukaryotes are particularly prone to becoming multicellular the way the OP claims bilaterians are prone to becoming intelligent, but I’d interpret each item of the poll to be conditional on all of the above, and I’d take “complex single cell life” to mean eukaryotes on Earth or similar.
[x] other reasons
What if we’re the first in a winner-takes-all scenario? If the first-mover prevents (or vastly reduces the likelihood of) the evolution of later intelligent life, intelligent life should not be surprised to find itself the first intelligent species to evolve.
That we’re not that special is an assumption of the Great Filter argument. We could be the first intelligence to arrive on the scene, but it will probably be more harmful to underestimate upcoming pitfalls than to overestimate them.
I answered “Complex single cell life unlikely” simply because it took the longest time (1.6 billion years) to develop.
A very valid point.
The logic is actually from Hanson’s “The Great Filter” essay.
For example, say you have one hour to pick five locks by trial and error, locks with 1, 2, 3, 4, and 5 dials of ten numbers, so that the expected time to pick each lock is 0.01, 0.1, 1, 10, and 100 hours respectively. Then, just looking at those rare cases when you do pick all five locks in the hour, the average time to pick the first two locks would be 0.0096 and 0.075 hours respectively, close to the usual expected times of 0.01 and 0.1 hours. The average time to pick the third lock, however, would be 0.20 hours, and the average time for the other two locks, and the average time left over at the end, would be 0.24 hours. That is, conditional on success, all the hard steps, no matter how hard, take about the same time, while easy steps take about their usual time (see Technical Appendix). And all these step durations (and the time left over) are roughly exponentially distributed (with standard deviation at least 76% of the mean). (Models where the window closing is also random give similar results.)
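Hanson’s lock example is easy to reproduce with a Monte Carlo simulation. A rough sketch of my own (exponential waiting time per lock, success = all five picked within the hour; the modeling choices are my reading of the quote, not Hanson’s code), which should roughly reproduce the conditional averages he reports:

```python
import random

random.seed(0)
MEANS = [0.01, 0.1, 1.0, 10.0, 100.0]  # expected hours to pick each lock
WINDOW = 1.0                            # one hour to pick all five

successes = []
for _ in range(5_000_000):              # rare event: expect a few hundred hits
    times = [random.expovariate(1.0 / m) for m in MEANS]
    if sum(times) <= WINDOW:
        successes.append(times)

print(f"{len(successes)} successes")
for i, m in enumerate(MEANS):
    avg = sum(t[i] for t in successes) / len(successes)
    print(f"lock with mean {m:>6.2f} h: avg {avg:.3f} h given success")
# Conditional on success, the three hard locks (means 1, 10, 100 hours) all
# take roughly similar times (~0.2 h), while the two easy locks take about
# their usual 0.01 and 0.1 hours, the pattern described in the quote.
```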
I voted “see results only”. Based on the wide, unconcentrated distribution of evidence and beliefs about the so-called Great Filter, I think an option has been excluded: there is no Great Filter, and we are only one among many intelligent species in the universe (or even in this one galaxy) on one among many planets to develop such. Other intelligent life may well be totally unlike us, or, very rarely, parallel evolutionary pressures might have operated and produced something whose mentality we could almost understand. We don’t know.
We do know that our world hasn’t been strip-mined by a long-since-transhuman intergalactic supercivilization… but that’s not really good evidence that no other intelligence has evolved in our light-cone at all.
This whole thing has too much of an air of mystery for me to believe we really understand it.
That would be a late Filter (between intelligence and intergalactic supercivilization).
I think calling it a “filter” implicitly primes people for the impression that the options are supergalactic civilization or death, and to exclude possibilities where things keep going but don’t become astronomically visible.
What about central nervous systems?
I took the list from Wikipedia (except for added differentiation after “now”).