The Great Filter provides no explanation here. There are four options: an early filter, a late filter, multiple (early and late) filters, and no filter. In the first three cases the existence of a filter is an explanation for an empty sky. (We are getting very close to the capability to build von Neumann probes though, so I'm not sure an empty sky is evidence for a late filter.) In the case of no filter, however, we can expect that any intelligence would start expanding into the universe at near the speed of light, which means we would get little or no advance warning before an expansion front reached us. So the fact that our light cone doesn't appear to contain other intelligences is also consistent with no filter. An observation that is equally consistent with a proposition being true or false provides no useful evidence either way.
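To make that last point concrete, here is a toy Bayesian sketch (the likelihood numbers are purely illustrative assumptions, not estimates of anything): if an empty-looking sky is about as likely with a filter as without one, the observation barely moves the posterior at all.

    # Toy Bayes update: does an empty-looking sky shift belief in a filter?
    # The likelihoods below are illustrative placeholders, not real estimates.
    def posterior(prior, p_obs_given_h, p_obs_given_not_h):
        """P(hypothesis | observation) via Bayes' rule."""
        num = p_obs_given_h * prior
        return num / (num + p_obs_given_not_h * (1 - prior))

    prior_filter = 0.5
    # Empty sky (nearly) equally likely with or without a filter:
    print(posterior(prior_filter, 0.9, 0.9))  # 0.5 -- no update
    # Contrast with an observation that actually discriminates:
    print(posterior(prior_filter, 0.9, 0.1))  # 0.9 -- a real update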
In the ‘there’s no filter, and colonization bubbles just expand too rapidly for other organisms to get advance warning’ scenario, there’s a fairly small window of time between ‘the first organisms evolve’ and ‘no more organisms evolve ever again’. But in the absence of an early filter, that small window should occur early in the universe’s lifespan, not late. The fact that we live in an old universe suggests that there must be an early filter of some sort (particularly if colonization is easy).
The universe is a big, big place. It also becomes causally fragmented relatively fast, as the accelerating expansion carries distant regions permanently out of reach. There will probably be many intelligences out there that even our most distant descendants will never meet.
Ultimately, though, you're making assumptions about the prior distribution of intelligent life that aren't warranted with a sample size of 1.
An extremely low prior distribution of life is an early Great Filter.
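To put a rough number on the "becomes isolated relatively fast" point above, here is a back-of-the-envelope sketch using only Hubble's law v = H0 * d (it glosses over the distinction between the Hubble radius and the true cosmological event horizon, which is somewhat larger, and the H0 value is approximate):

    # Distance at which recession velocity reaches c under Hubble's law v = H0 * d.
    H0_KM_S_PER_MPC = 70.0            # approximate Hubble constant
    C_KM_S = 299_792.458              # speed of light
    LY_PER_MPC = 3.262e6              # light-years per megaparsec (approx.)

    hubble_radius_mpc = C_KM_S / H0_KM_S_PER_MPC
    print(hubble_radius_mpc)                      # ~4300 Mpc
    print(hubble_radius_mpc * LY_PER_MPC / 1e9)   # ~14 billion light-years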
We are getting very close to the capability to build von Neumann probes though, so I'm not sure an empty sky is evidence for a late filter.
I am highly skeptical of this statement.
We haven’t built a machine that can get out of our solar system and land on a planet in another.
We haven’t made machines that can copy themselves terrestrially.
Making something that can get out of the solar system, land on another planet, and then make (multiple) copies of itself seems like a huge leap beyond either of the other two issues.
Even an AGI that could self-replicate might have enormous difficulty getting to another planet and turning its raw resources into copies of itself.
But an AGI wouldn't be an AGI if it wasn't able to figure out how to solve the problem of getting from here to there and using in-situ resources to replicate itself. I hate to make arguments from definitions, but that's kinda the case here. If an intelligence can't solve that solvable problem, it really isn't a general intelligence, now is it?
So how far are we from making an (UF)AGI? 15 years? 50 years? 100 years? That's still the blink of an eye on cosmic timescales.
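For scale (trivial arithmetic, with the age of the universe taken as roughly 13.8 billion years): even the longest of those estimates is a vanishingly small slice of cosmic history.

    # Development horizon as a fraction of the age of the universe.
    AGE_OF_UNIVERSE_YR = 13.8e9   # approximate
    for horizon_yr in (15, 50, 100):
        print(horizon_yr, horizon_yr / AGE_OF_UNIVERSE_YR)  # ~1e-9 to ~7e-9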
But an AGI wouldn’t be an AGI if it wasn’t able to figure out how to solve the problem of getting from here to there and using in-situ resources to replicate itself.
It remains to be seen whether we humans can do that. Does this mean we might not be general intelligences, either? That seems like a slightly silly and very nonstandard way to use the term.
Eh… read up on the literature. Pay special attention to the studies done by the British Interplanetary Society and to NASA's Advanced Automation for Space Missions study from the 1980s, not to mention all the work done by early space advocates, some published, some not.
We can say with some confidence that we know how to do something even if it hasn't yet been reduced to practice or practically demonstrated. 19th-century thinkers showed how rockets could in principle be built to enable human exploration of space. And they were right on pretty much every point; we still use and cite their work today.
We have done enough research on automated exploration and kinematic self-replicating machines* to say definitively that it is possible (life being an example), and within our reach if we had pockets and conviction deep enough to create it.
* http://www.molecularassembler.com/KSRM.htm
Right, I'm objecting to the claim that
pockets and conviction deep enough
are included by definition when we say "general intelligence". (That, or I'm totally misunderstanding you.)
You must be misunderstanding me, because what you just said seems like a total non-sequitur. What do pockets and deep conviction have to do with general intelligence, indeed?
Starting with a rhetorical question was probably a bad idea. Let me try again:
But an AGI wouldn’t be an AGI if it wasn’t able to figure out how to solve the problem of getting from here to there and using in-situ resources to replicate itself.
I don’t think this is true. An AGI which—due to practical limitations—cannot eat its future light cone can still be generally intelligent. Humans are potentially an example (minus the “artificial”, but that isn’t the relevant part).
Claiming that general intelligence can eat the universe by definition seems to suffer the same problem as the Socrates/hemlock question. It would mean we can’t call something generally intelligent until we see it able to eat the universe, which would require not just theoretical knowledge but also the resources to pull it off. And if that’s the requirement, then human general intelligence is an open question and we’d have zero known examples of general intelligence.
This does not seem to fit how we’d like to use the term to point to “the kind of problem-solving humans can do”, so I think it’s a bad definition.
(AGI can probably eat the universe, but that’s more like a theorem than a definition.)
OK, there was an unstated assumption, but that assumption was that the AGI has physical effectors. Those effectors could be nearly anything, since with enough planning and time nearly any physical effector could be used to bootstrap your way to any other capability.
So, many posts back I was asserting that even unaugmented human beings have the capability to eat our collective future light cone. It's a monumental project, yes, with probably hundreds of years before the first starships leave. But once they do, our future descendants would be expanding into the cosmos at a fairly large fraction of the speed of light. I'll point you to the various studies done by the British Interplanetary Society and others on interstellar colonization if you don't want to take my word for it.
So if regular old Homo sapiens can do it, but an AGI with physical effectors can't, then I seriously question how general that intelligence is.
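As a rough sanity check on the "fairly large fraction of the speed of light" claim above (round illustrative numbers, not figures from the cited studies): even at modest cruise speeds, crossing the galaxy takes only millions of years, which is short on cosmic timescales.

    # Time to cross the Milky Way at various cruise speeds. Ignores pauses for
    # colonization and replication, so actual expansion would be somewhat slower.
    GALAXY_DIAMETER_LY = 100_000      # rough diameter in light-years
    for fraction_of_c in (0.01, 0.1, 0.5):
        years = GALAXY_DIAMETER_LY / fraction_of_c
        print(f"{fraction_of_c} c -> {years:,.0f} years")
    # 0.01 c -> 10,000,000 years; 0.1 c -> 1,000,000 years; 0.5 c -> 200,000 years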
Yes; please provide those links.
And remember that getting to this level of industrial capacity on Earth followed from millions of years of biological evolution and thousands of years of cultural evolution in Earth's biosphere. Why would one ship full of humans be able to replicate that success somewhere else?
Similarly, an AGI that can replicate itself with an industrial base at its disposal might not be able to when isolated from those resources (it's still an AGI).
In the case of no filter, however, we can expect that any intelligence would start expanding into the universe at near the speed of light.
Questionable premise. It isn't at all clear how close one can get to the speed of light, and even if this were occurring at 10% of the speed of light rather than 99%, the situation would look drastically different.
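To make the 10% vs. 99% contrast concrete, here is a toy calculation (it assumes the front first becomes visible when its light reaches us from some distance d, and ignores cosmological expansion): the advance warning is d/v - d/c, which is tiny when v is close to c but substantial at 0.1c.

    # Advance warning between first seeing an expansion front at distance d_ly
    # (in light-years) and its arrival, for a given expansion speed.
    def warning_years(d_ly, v_fraction_of_c):
        front_travel_time = d_ly / v_fraction_of_c  # years for the front to cover d
        light_travel_time = d_ly                    # years for its light to cover d
        return front_travel_time - light_travel_time

    for v in (0.99, 0.5, 0.1):
        print(v, warning_years(1000, v))
    # at 0.99 c: ~10 years of warning; at 0.5 c: 1000 years; at 0.1 c: 9000 years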