UFAI cannot be the Great Filter
[Summary: The fact that we do not observe (and have not been wiped out by) a UFAI suggests the main component of the ‘great filter’ cannot be civilizations like ours being wiped out by UFAI. Gentle introduction (assuming no knowledge) and links to much better discussion below.]
Introduction
The Great Filter is the idea that although there is lots of matter, we observe no “expanding, lasting life”, like space-faring intelligences. So there is some filter through which almost all matter gets stuck before becoming expanding, lasting life. One question for those interested in the future of humankind is whether we have already ‘passed’ the bulk of the filter, or whether it still lies ahead. For example, is it very unlikely that matter will be able to form self-replicating units, but once it clears that hurdle, becoming intelligent and going across the stars is highly likely? Or is getting to a humankind level of development not that unlikely, but very few of those civilizations progress to expanding across the stars? If the latter, that motivates a concern for working out what the forthcoming filter(s) are, and trying to get past them.
One concern is that advancing technology gives civilizations the possibility of wiping themselves out, and that this is the main component of the Great Filter—one we are going to be approaching soon. There are several candidates for which technology will be an existential threat (nanotechnology/‘grey goo’, nuclear holocaust, runaway climate change), but one that looms large is artificial intelligence (AI). Trying to understand and mitigate the existential threat from AI is the main role of the Singularity Institute, and I guess Luke, Eliezer (and lots of folks on LW) consider AI the main existential threat.
The concern with AI is something like this:
AI will soon greatly surpass us in intelligence in all domains.
If this happens, AI will rapidly supplant humans as the dominant force on planet earth.
Almost all AIs, even ones we create with the intent to be benevolent, will probably be unfriendly to human flourishing.
Or, as summarized by Luke:
… AI leads to intelligence explosion, and, because we don’t know how to give an AI benevolent goals, by default an intelligence explosion will optimize the world for accidentally disastrous ends. A controlled intelligence explosion, on the other hand, could optimize the world for good. (More on this option in the next post.)
So, the aim of the game needs to be trying to work out how to control the future intelligence explosion so the vastly smarter-than-human AIs are ‘friendly’ (FAI) and make the world better for us, rather than unfriendly AIs (UFAI) which end up optimizing the world for something that sucks.
‘Where is everybody?’
So, topic. I read this post by Robin Hanson which had a really good parenthetical remark (emphasis mine):
Yes, it is possible that the extremely difficult step was life’s origin, or some early step, so that, other than here on Earth, all life in the universe is stuck before this early extremely hard step. But even if you find this the most likely outcome, surely given our ignorance you must also place a non-trivial probability on other possibilities. You must see a great filter as lying between initial planets and expanding civilizations, and wonder how far along that filter we are. In particular, you must estimate a substantial chance of “disaster”, i.e., something destroying our ability or inclination to make a visible use of the vast resources we see. (And this disaster can’t be an unfriendly super-AI, because that should be visible.)
This made me realize that a UFAI should also be counted as ‘expanding, lasting life’, and so should be deemed unlikely by the Great Filter.
Another way of looking at it: if the Great Filter still lies ahead of us, and a major component of this forthcoming filter is the threat from UFAI, we should expect to see the UFAIs of other civilizations spreading across the universe (or not see anything at all, because they would wipe us out to optimize for their unfriendly ends). That we do not observe it disconfirms this conjunction.
[Edit/Elaboration: It also gives a stronger argument—as the UFAI is the ‘expanding life’ we do not see, the beliefs, ‘the Great Filter lies ahead’ and ‘UFAI is a major existential risk’ lie opposed to one another: the higher your credence in the filter being ahead, the lower your credence should be in UFAI being a major existential risk (as the many civilizations like ours that go on to get caught in the filter do not produce expanding UFAIs, so expanding UFAI cannot be the main x-risk); conversely, if you are confident that UFAI is the main existential risk, then you should think the bulk of the filter is behind us (as we don’t see any UFAIs, there cannot be many civilizations like ours in the first place, as we are quite likely to realize an expanding UFAI).]
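The opposition in this bracket can be put as a toy Bayes update. The snippet below is only a sketch: the priors and likelihoods are invented for illustration, and only the direction of the update matters, namely that observing an empty sky penalizes the conjunction of ‘filter ahead’ and ‘UFAI is the main risk’ relative to its rivals.

```python
# Toy Bayesian sketch of the argument above (all numbers are made up;
# only the qualitative direction of the update is the point).
hypotheses = {
    # name: (prior, P(empty sky | hypothesis))
    "filter ahead, UFAI is the main risk": (1/3, 0.01),  # dead civs leave expanding, visible UFAIs
    "filter ahead, some quieter risk":     (1/3, 0.90),  # e.g. bio/nuclear leaves no expanding remnant
    "filter behind us (life is rare)":     (1/3, 0.95),  # few civs ever arise, so sky expected empty

}

evidence = sum(p * lik for p, lik in hypotheses.values())
posterior = {h: p * lik / evidence for h, (p, lik) in hypotheses.items()}
for h, post in posterior.items():
    print(f"{h}: {post:.3f}")
```

With these (arbitrary) numbers, the conjunction the post argues against ends up with well under 1% of the posterior; the weight shifts onto “quiet filter ahead” and “filter behind us”.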
A much more in-depth article and comments (both highly recommended) were posted by Katja Grace a couple of years ago. I can’t seem to find a similar discussion on here (feel free to downvote and link in the comments if I missed it), which surprises me: I’m not bright enough to figure out the anthropics, and obviously one may hold AI to be a big deal for other-than-Great-Filter reasons (maybe a given planet has a 1 in a googol chance of getting to intelligent life, but intelligent life ‘merely’ has a 1 in 10 chance of successfully navigating an intelligence explosion), but this would seem to be substantial evidence driving down the proportion of x-risk we should attribute to AI.
What do you guys think?
What if post-singularity civilizations expand at the speed of light? Then we should not expect to see anything:
It looks like we are going to have less than 200 years between interstellar detectability and singularity. So the chance of us being around at the same time (adjusted for distance) as another civilization to a resolution of a few hundred years seems quite low.
Life will only get to the point of asking questions like these on worlds that haven’t been ground up for resources, so we can only be outside the “expansion cone” of any post-singularity civilization. If the expansion cone and the light cone are close (within a few hundred years), then, given that we are outside of the expansion cone, we are probably outside the light cone as well. So the AI-as-filter hypothesis doesn’t get falsified by observing no AIs.
It doesn’t even have to be a filter, though it probably is; 100% of civilizations could successfully navigate the intelligence explosion and we would see nothing, because we can only exist in the last corner of the universe that hasn’t been ground up by them.
This is all assuming lightspeed expansion. Here are a few ideas: a single nanoseed catapulted at 99.9% of the speed of light, followed by a laser encoding the instructions. Cross-galaxy light-lag would be only 100 years. How to slow it down on arrival is unknown… Another possibility is actually creating spaceships out of light; some kind of super laser that would excite whatever it hit in just the right way to create a nanoseed. This one seems much less plausible, but I wouldn’t bet against the engineering skill of a superintelligence.
Then it spreads throughout the galaxy (100,000 light years across). The Fermi paradox is not really affected—there are still no alien superintelligent machines visible here.
Why is it not affected? If we assume they expand at a negligible fraction of the speed of light, we expect them to be visible from the outside for their entire lifetime (which may be very long). On the other hand if we expect them to expand at nearly the speed of light, we expect them to be detectable from outside for only a few hundred years.
The other side of the galaxy could very well be already consumed by an alien civilization.
E.g. check with http://en.wikipedia.org/wiki/Fermi_paradox
The rate of expansion makes very little difference, and a high rate of expansion is not listed as a possible resolution.
That article has little about the effect of expansion. Why does it not affect it? What is wrong with my argument that it should matter?
A near-c rate of expansion drastically reduces the volume of space that a given civilization is observable from. What specifically is wrong with this?
If intelligent life was common and underwent such expansion, then there would be very few new-arising lonely civilizations later in the history of the universe (the real estate for their evolution already occupied or filled with signs of intelligence). The overwhelming majority of civilizations evolving with empty skies would be much younger.
So, whether you attend to the number of observers with our observations, or the proportion of all observers with such observations, near-c expansion doesn’t help resolve the Fermi paradox.
Another way of thinking about it is: we are a civilization that has developed without seeing any sign of aliens, developed on a planet that had not been colonized by aliens. Colonization would have prevented our observations just as surely as alien transmissions or a visibly approaching wave of colonization.
I still don’t get it.
Assume life is rare/filtered: we straightforwardly expect to see what we see (empty sky).
Assume life is common and the singularity comes quickly and reliably, and colonization proceeds at the speed of light, then condition on the fact that we are pre-singularity. As far as I can tell, a random young civilization still expects empty skies, possibly slightly less because of the relatively small volume of spacetime where we would observe an approaching colonization wave.
So the observation of empty skies is only very weak evidence against life being common, given that this singularity stuff is sound.
The latter hypothesis is more specific, but I already believe all those assumptions (quick, reliable, and near-c).
Given that I take those singularity assumptions seriously (not just hypothetically), and given that we are where we are in the history of the universe, the Fermi paradox seems resolved for me; I find it unlikely that a given young civilization would observe any other civilization, no matter the actual rate of life. If we did observe another isolated civilization, it would pretty much falsify my “quick, reliable, and lightspeed” singularity belief.
However, as you say, that “given that we are where we are in the history of the universe” is worrying. I predict most young civilizations to be early (because the universe gets burned up quickly), and I predict most civilizations to not be young, given that life is common. When we observe ourselves to be young and late (are we actually late?), Fermi’s paradox results. I guess in this case Fermi’s paradox is that we observed something that is a priori unlikely, and we wonder what unlikely alternate hypotheses this digs up (the above, for one). However, anthropics is very confusing...
Fermi’s paradox also makes mention of the fact that there are billions of stars in the galaxy that are billions of years older than ours, many of them having habitable planets. Some reasons have prevented any of these from spawning a galactic colonization wave—and those reasons are of interest to us.
Yes and yes.
As envisioned in Olaf Stapledon’s classic Last and First Men, [free here]:
(Read, then guess the year of publication!)
Year of publication? Rot13: avargrra guvegl!
Here are a couple of scattered short LW comments where I discussed this possibility and considered counterarguments and implementations.
Interesting. You seem to have exactly the same thoughts as me.
How do you think one might slow down a .999c von Neumann probe at the destination?
I am not a physicist, so I didn’t and couldn’t do the calculations, but I don’t really believe that classic probes can reach .999c. They would be pulverised by intergalactic material. Even worse, literal .999c would not be fast enough for this fancy “hits us before we know it” filter idea to work. As I explained in some of the above-quoted threads, my bet would definitely be on the things you called “spaceships out of light”. A sufficiently advanced civilisation might switch from atoms to photons as their substrate. The only resource they would extract from the volume of space they consume would be negentropy, so they wouldn’t need any slowing down or seeds. Again, I am not a physicist. I discussed this with some physicists, and they were sceptical, but their objections seemed to be of the engineering kind, not the theoretical kind, and I’m not sure they sufficiently internalized “don’t bet against the engineering skill of a superintelligence”.
For me, one source of inspiration for this light-speed expansion idea was Stanisław Lem’s “His Master’s Voice”, where precisely tuned radio waves are used to catalyse the formation of DNA-based life on distant planets. (Obviously that’s way too slow for the purposes we discuss here.)
Photons can’t interact with each other (by the linearity of Maxwell’s equations) and so can’t form a computational substrate on their own. This doesn’t rule out “no atoms” computing in general though.
EDIT: I’m wrong. When you do the calculations in full quantum field theory there is an (extremely) slight interaction (due to creations and destructions of electron-positron pairs, which in some sense destroy the linearity). I don’t know if this is enough to support computers.
That’s actually concerning. Maybe it isn’t possible to shoot matter intact across the galaxy… Would have to do the calculations with interstellar particle density.
Also, surely you mean “interstellar”? I was only thinking of interstellar travel for now; assuming intergalactic is impossible or whatever.
Not for intergalactic, but the galaxy is 100k lightyears across. 0.999c would get you a lag behind the light of 100 years, which is on the same order of magnitude as the time between detectability and singularity (looks like < 200 years for us).
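The lag arithmetic here is simple enough to check. A quick sketch, using the 100k light year galactic diameter quoted above:

```python
# How far behind its own light does a 0.999c probe arrive after
# crossing the galaxy? (Figures from the comment above.)
D = 100_000          # galactic diameter, in light years
v = 0.999            # probe speed as a fraction of c
lag_years = D / v - D  # probe travel time minus light travel time
print(lag_years)     # ~100 years
```

So the probe trails its own detectability signature by roughly a century, the same order of magnitude as the claimed window between interstellar detectability and singularity.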
How would one eat a star without slowing down, even in principle?
This is closer to what I was thinking, but of course if you can catalyze DNA, you can catalyze arbitrary nanomachines. Exactly how this would work is a mystery to me… (also, doing it with radio waves is needlessly difficult; surely you’d use something precise and ionizing like UV, X-rays, or gamma)
When you look at it from a Fermi paradox perspective, you have to be able to account for many hundred million years of expansion, because there can be many civilizations that are that much older than us. We are talking about some crazy thing that is supposed to be able to consume a galaxy with almost-optimal speed. I don’t expect galaxy boundaries to stop it completely, neither by intention nor by necessity. I am not even sure that it has to treat intergalactic space as the long boring travel between the rare interesting parts. Maybe all it really needs is empty space.
Interesting point.
Note that I speculated about photons as a substrate. Maybe major reorganization of atoms is unnecessary, and it can just fill the space around the star and utilize the star as a photon source.
Use a particle accelerator that can fire a smaller von Neumann probe at −.999c. The accelerator could be built and assembled during the trip if it’s too unwieldy to fire directly.
An implicit assumption here is that alien civilizations have an observation weight of zero.
If complex space-faring civilizations have spread across the galaxy to produce lots of observers capable of anthropic reasoning, why aren’t we in one?
If they don’t, doesn’t that just reframe the Filter? Technological evolution into Blindsight-style Scramblers sure sounds like extinction to me.
This is sort of valid, but it is extremely unlikely. Even if expansion occurs at, say, 99% of the speed of light, the problem will still exist. One needs to be expanding extremely close (implausibly close?) to the speed of light for this explanation to work.
We have particle accelerators that achieve Lorentz factors of 7,500. I proposed a Lorentz factor of 22. Never mind a superintelligence; we are on the brink of being able to accelerate nanomachines to that speed (assuming we had nanomachines).
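For what it’s worth, the Lorentz-factor arithmetic checks out. A quick sketch (the 7,500 figure is just the accelerator number quoted above):

```python
from math import sqrt

def gamma(beta):
    """Lorentz factor for speed beta = v/c."""
    return 1 / sqrt(1 - beta**2)

print(gamma(0.999))           # ~22.4, the factor proposed for the probe

# Speed implied by an accelerator Lorentz factor of 7,500:
beta_7500 = sqrt(1 - 1 / 7500**2)
print(1 - beta_7500)          # shortfall from c, ~9e-9 of lightspeed
```

So a Lorentz factor of 22 is modest by accelerator standards; the hard part is accelerating a machine rather than a bare particle, and stopping it.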
The only implausible thing is being able to decelerate non-destructively at the target, and none of us have given that even 5 whole minutes of serious thought, never mind a couple trillion superintelligent FLOPS.
A nanobot is hard to decelerate, but a robust femtobot might do better.
Hmm, using the femtobot: would being charged and entering a conductive material slow it down due to induction, like a magnet dropped down a copper tube? Or maybe have a conductive, right-shaped bot, and launch it into a ludicrously strong magnetic field of a neutron star or something?
Another option is to launch a black hole in front of it, and give both the probe and the black hole an extremely strong negative charge; the black hole will absorb impacting matter (also solving the problem of interstellar dust), slowing it down by averaging, simultaneously clearing a safe path for the probe and gently pushing it back as it gets closer and the charges repel.
Femto? Explain.
The black hole idea is interesting. Does it even have to be a black hole? Any big non-functional absorbent mass at the front would do, right? Maybe only a black hole would be reliable...
Maybe not even a mass. If the probe had a magnetic field, you might be able to do things with the bussard ramjet idea to slow you down and control (charged) collisions.
not very good but good enough: http://en.wikipedia.org/wiki/Femtotech
And I was just brainstorming; your guess is as good as mine. But yeah, a tiny neutron star might work.
Here are my five minutes: nanomachines need to carry a charge to be accelerable, right? Well, it works the other way too—they will decelerate on their own in the destination’s Van Allen belts.
They don’t actually decelerate in the Van Allen belts, though. Magnetic fields apply a force to a charged particle perpendicular to its direction of motion, so F·v = power = 0: no deceleration. Also worth noting that a charged nanomachine has a much higher mass/charge ratio than the usual charged particles (He2+, H+, and e−), so it would be much less affected.
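A quick numeric check of that claim, with arbitrary field and velocity values: the magnetic force q(v × B) is always perpendicular to v, so the dot product F·v vanishes and no work is done on the probe.

```python
# Verify that a magnetic force q(v x B) delivers zero power to the probe.
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

q = 1.0                     # charge (arbitrary units)
v = (0.999, 0.0, 0.0)       # probe velocity (arbitrary direction)
B = (0.3, -0.2, 0.7)        # arbitrary magnetic field
F = tuple(q*c for c in cross(v, B))  # Lorentz magnetic force
print(dot(F, v))            # 0.0: no work done, so speed is unchanged
```

The field only bends the trajectory; it cannot shed the probe’s kinetic energy.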
I was actually thinking of neutralizing the seed at the muzzle to avoid troublesome charge effects.
Hmm.
Okay, filters that would produce results consistent with observation.
1: Politics, a.k.a. “The Berzerker’s Garden”: The first enduring civilization in our galaxy rose many millions of years before the second, and happened to be both highly preservationist and highly expansionist. They have outposts buried in the deep crust of every planet in the galaxy, including Earth. Whenever a civilization arises that is both inclined and able to turn the entire galaxy into fast food joints/smiley faces/etc., said civilization very suddenly disappears. The berzerkers cannot be fought, and cannot be fooled, because they have been watching the entirety of history, and their contingency plans for us predate the discovery of fire. If we are really lucky, they will issue a warning before annihilating us.
2: Physics is booby trapped: One of the experiments every technological civilization inevitably conducts while exploring the laws of the universe has an unforeseeable, and planet-wrecking result. We are screwed.
3: Economics: The minimal mass of a technological “ecology” capable of sustaining itself outside of a compatible biosphere is just too large to fit into a star ship. The interlocking chains of expertise, material extraction and recycling, energy production and so on, and so forth, flat out cannot be compacted down enough to be moved. No such thing as a von Neumann probe or a colony ship can be built. Civilizations expand to the limit of how far spare parts and help can be sent, and then halt.
4: Diversion: Advancing tech opens “frontiers” much, much more attractive than star flight before starflight becomes possible. Alternate-timeline gates, uploads into the underlying computational substrate of the universe, etc., etc.
5: Anthropic engineering: Advanced civilizations have proof of the many-worlds and anthropic principles—and use them. Which, from outside any given circle of coordination, looks like collective suicide. So the universe is full of empty planets, and every civilization has all of it to themselves.
I’m from the future & I just want to thank you for these unusual solutions.
AFAIK the idea that “UFAI exacerbates, and certainly does not explain, the question of the Great Filter” is standard belief among SIAI rationalists (and always has been, i.e., none of us ever thought otherwise that I can recall).
I was just going to quote your comment on Overcoming Bias to emphasise this.
I think some people may be misinterpreting you as believing this because many people understand your advocacy as implying “UFAI is the biggest baddest existential risk we need to deal with”. Assuming a late filter not explained by UFAI suggests there is an unidentified risk in our future that is much likelier than an uncontrolled intelligence explosion.
That’s a big assumption, both uncertain and decisive if made.
It is; I don’t particularly think the answer to the Great Filter is a Bigger Threat that comes after this. There’s a possibility that most species like ours happen to be inside the volume of some earlier species’s “F”AI’s enforced Prime Directive with a restriction threshold (species are allowed to get as far as ours, but are not allowed to colonize galaxies) but if so I’m not sure what our own civilization ought to do about that. I suspect, and certainly hope, that there’s actually a hidden rarity factor.
But I do think some fallacy of the form, “This argument would make UFAI more threatening—therefore UFAI-fearers must endorse it—but the argument is wrong, ha ha!” might have occurred.
I think this is it. However, there are at least a few enthusiasts, even if they are relatively peripheral, who do tend to engage in such indiscriminate argument. Sort of like internet skeptics who confabulate wrong arguments for true skeptical conclusions in the course of comment thread combat that the scientists they are citing would not endorse.
What has prevented local living systems from colonising the universe so far has been delays—not risks.
This is not necessarily true. If the goals of the AI do not involve a rapid acquisition of resources even outside its solar system, then we would not see evidence for it (e.g., wireheading that does not involve creating as many sentient organisms as possible).
However, because there would be many instances of this, AI being the filter is probably still not likely. If it’s very likely for UFAI to be screwed up in a self-contained way, we would not expect to see evidence of life. If UFAI has a non-negligible chance to gobble up everything it sees for energy, then we would expect to see it.
Sure. For instance, consider the directive “Make everyone on Earth live as happily as possible for as long as the sun lasts.” Solution: Wirehead everyone, then figure out how to blow up the sun, then shut down — mission accomplished.
Not if the system is optimizing for the probability of success and can cheaply send out probes to eat the universe and use it to make sure the job is finished lest something go wrong (e.g. the sun-destroyer [???] failed, or aliens resuscitate the Sun under whatever criterion of stellar life is used).
“Our analysis of the alien probe shows that its intended function is to … um … go back to a star that blew up ten thousand years ago and make damn sure that it’s blown up and not just fooling.”
A UFAI that doesn’t go around eating stars to make paper-clips is probably already someone’s attempted FAI. Bringing arbitrarily large sums of mass-energy and negentropy under one’s control is a Basic AI Drive, so you have to program the utility function to actually penalize it.
Only if the AI has goals that both require additional energy, and don’t have a small, bounded success condition.
For example, if a UFAI for humans has a goal that requires humans to be there, but is not allowed to create/lead to the creation of more, then if all humans are already dead it won’t do anything.
For galactic civilisations I’d guess that there would be a strong first-mover advantage. If one civilisation (perhaps controlled by an AI) started expanding 1000 years before another, then any conflict between them would likely be won by the civilisation that started capturing resources first.
But what if none of them know which of them expanded first? There might be several forces colonising the galaxy, all keeping extremely quiet so that they don’t get noticed and destroyed by an older civilisation. Thus no need for a great filter, and even if UFAI were common we wouldn’t observe it colonising the galaxy.
The same way different species hid from each other to avoid being wiped out? You can’t expand and hide. And you must expand—so you can’t realistically hide.
The fairly obvious distinction is that species do not have central planning, or even sophisticated communication between individual members. Civilisations can and do have such things, and AIs also seem likely to.
Civilisations could hide—if they were stupid—or if they didn’t care about the future. I wasn’t suggesting hiding was a strategy that was not available at all—just that it would not be an effective survival strategy.
That may be the case, but the fact that species did not hide is no evidence for it, as species could not realistically have hidden even if it would have been optimal.
Also, I would like to point out that where the hiding strategy is feasible, on the individual level, it is very common among animals.
In fact, arguably we do have a few cases of species ‘hiding’, such as the coelacanth, which has gone over a hundred million years without much significant evolution, vastly longer than most species survive.
Basically, I do not see any reason to believe either of the following assertions.
Perhaps I overstated my case. Hiding and expanding are different optimisation targets that pull in pretty different directions. It is challenging to expand while hiding effectively, since farming suns tends to leave a thermodynamic signature which is visible from far away and is expensive to eliminate. I expect civilizations will typically strongly prioritize expanding over hiding.
Okay, in that case I now agree with the first part of your claim, I will accept that there is certainly a trade-off, perfect expanding does not involve much hiding and perfect hiding does not involve much expanding.
So, to move on to the other side: why do you expect expanding to be more prevalent than hiding? It seems to me that Oscar Cunningham’s argument for why hiding might be preferable is quite convincing, or at any rate reduces it to a non-trivial problem of game theory and risk aversion.
Camouflage is pretty unlikely to be an effective defense against an oncoming colonisation wave. I figure the defense budget will be spent more on growth and weapons than camouflage.
For reasons of technological progress, I suspect a hiding civilisation could destroy a younger expanding civilisation before it was hit by the colonisation wave. If this is the case, it becomes a matter of how likely you are to be the oldest civilisation, how likely the oldest civilisation is to expand or hide, and how much you value survival relative to growth. If the first is low and the last is high, then hiding seems like quite a good strategy.
That sounds like the wave’s leading edge to me.
The issues as I see them are different. Much depends on whether progress “maxes out”. If it doesn’t, the most mature civilization probably just wins—in which case, hiding is irrelevant. If the adversaries are well matched, they may attempt to find some other resolution besides a big fight which could weaken both of them. Again, hiding won’t help.
IMO, assuming the oldest civilization is in hiding is not a good way to start analysing this issue.
I am confused by this sentence, and cannot parse what it means.
If it knows that it’s the oldest, then yes, it wins. The whole point of Oscar Cunningham’s comment is that it might not know this.
To model it as a simple game, 100 people are all put in separate rooms. One of them is designated as the ‘big player’; nobody knows who they are, including them. Each has two choices: expand or hide. If the big player expands then they receive a large pay-off and everyone else gets nothing. If the big player hides, then everyone who hides gets a small pay-off, and everyone who expands gets nothing.
Obviously much depends of the relative size of the large and small pay-offs, but it is not trivially obvious to me that expanding is the optimal strategy here.
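A minimal expected-value sketch of this game, assuming everyone plays the same pure strategy. The payoff numbers below are invented; the point is just that hiding can beat expanding whenever the small pay-off exceeds the large one divided by the number of players.

```python
# Expected payoffs in the 100-player expand/hide game described above,
# under symmetric pure strategies (payoff magnitudes are invented).
N = 100       # players, one of whom is secretly the 'big player'
L = 50.0      # large payoff to the big player if they expand
s = 1.0       # small payoff to each hider if the big player hides

# All expand: only the (unknown) big player scores, so your expected
# payoff is L/N -- you are the big player with probability 1/N.
ev_all_expand = L / N

# All hide: the big player certainly hides, so every hider collects s.
ev_all_hide = s

print(ev_all_expand, ev_all_hide)  # with these numbers, hiding wins
```

So under these assumptions "all hide" beats "all expand" exactly when s > L/N, which is the sense in which the relative sizes of the pay-offs decide the game.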
Against an equally matched foe, attempts to negotiate are inherently highly risky. If negotiations break down, then one of them may well destroy the other; given the possibility of a first-mover advantage, one civilisation may decide to attack if negotiations merely look likely to break down. Applying the game theory backwards, we get an extremely volatile situation where both sides attack as soon as anything ceases to go absolutely perfectly. Hiding from an equally matched civilisation may well be much safer than trying to talk to them.
Furthermore, if you become aware of an equally matched civilisation hiding from you, it may be better to continue to pretend you are not aware, rather than opening negotiations straight away. This may go to rather high levels of “I know you know I know”, and as long as mutual knowledge isn’t attained, both can survive.
The more realistic version of that game always favours expanding, since we know the total payout from expanding is greater than the total from hiding, and the big player is allowed to share the resources equally if she wants to.
I fail to see that this carries.
We modify the game to make sure that the pay-off for successful expanding exceeds all the hiding pay-offs put together, and we also allow the big player, after the fact, to share their expanding pay-out if they want to.
Clearly, not sharing produces a higher pay-off than sharing, so the big player will not do this. Negotiating in advance doesn’t work, as it requires revealing yourself; once you’ve done that, you’ve made your move and it’s no longer “in advance”.
If a civilisation’s utility is linear in its size, then it is always wise for it to expand. If it is risk-averse, which seems plausible (most of us would not accept a plan which had a 50% chance of colonising Mars and a 50% chance of wiping out humanity), then it may still be wise to hide. If all civilisations are risk-averse, hiding is a Nash equilibrium.
The total payout must be higher because in the hiding scenario a lot of negentropy is wasted into nowhere, in the natural lifecycle of stars and the like. The universe is a pie of a fixed size, but one that gradually rots away if you take too long deciding who gets to eat it.
And it might be the case that nobody in fact chose to share, but due to game theory it still matters that they had the option.
Also, things like TDT allow for coordination even while hiding, and in fact seems to be one of the assumptions behind this thing in the first place.
I wasn’t disputing this.
Game theory is not magic. If there is an option that nobody intends to take, and everyone knows that no-one intends to take it, and everyone knows that, etc, then this option has no effect on the game.
This is more promising, but I would be a lot more convinced if the logic were actually worked through, rather than just using “TDT, therefore everyone is nice” as a magic wand.
Obligatory “extraterrestrial superintelligence has put a planetarium-like illusion around the Earth that appears and behaves exactly like real spacetime would in the absence of extraterrestrials.”
That’s quite a different situation from the one in the context, which was:
Yes, I did not link to that to either refute or support your point, it was merely mentioning an interesting article on the “civilizations in hiding” tangent. For the public good, you know.
This requires one of the following:
Interstellar travel being much slower than seems possible (a non-trivial fraction of the speed of light).
“Colonizing”, or rather fully exploiting, the resources of a star system or other object taking a long time, and also remaining for a long time more economical than just expanding again to grab the low-hanging fruit a few light years away.
No civilization in our galaxy having a head start long enough to win. My best estimate is that a few hundred thousand years before any other is more than enough.
It seems much likelier that we are alone in the galaxy. Either civilizations are pretty rare or we are the oldest one. If the latter is true, this seems like anthropic evidence in favour of the simulation hypothesis.
Your argument works much better on a much larger scale; for example, it takes millions of years for light to travel between galaxies.
A ~110 or ~200 million year head start for intelligent, civilization-building life on an Earth-like planet still doesn’t seem obviously unlikely.
This is almost certainly true, but at these scales the speed limit of the universe is a potent ally. By the time anyone notices you are doing anything many hundreds of millions of years of you already doing whatever you wanted to do with the local matter have passed.
Also see metric expansion of space. The farther away an object is, the faster it recedes from us.
Or it could be anthropic evidence that the first mover advantage is so large that the first civilization to expand prevents all others from even developing.
The relevant notion of intelligence for a singularity is optimization power, and it’s not obvious that we aren’t already witnessing the expansion of such an intelligence. You may have already had these thoughts, but you didn’t mention them, and I think they’re important to evaluating the strength of evidence we have against UFAI explosions:
What do agents with extreme optimization power look like? One way for them to look is a rapidly-expanding-space-and-resource-consuming process which at some point during our existence engulfs our region of space and destroys us, which we haven’t seen happen yet. Another way for them to look is such an optimization process that has already engulfed our region of space. And if that were the case, we would necessarily be a byproduct of or noise term in that process, and the laws of physics actually meet that description.
(I had this thought when I first asked myself to look for naturally-occurring examples of extremely powerful optimization processes, and the laws of physics were the best examples I could think of. E.g., an agent wishing to minimize the L^2 norm of the derivative of energy with respect to time would produce an environment that conserves energy.)
It’s difficult and conjunction-fallacious to come up with an agent whose utility function would be exactly to have an environment that follows our particular laws of physics with our particular initial conditions, so one should not update heavily in favor of this idea based on observing the laws of physics to be true. What I’m saying is that we have only seen evidence against UFAIs that could have expanded from regions of nearby space and interrupted human progress, not UFAIs that could have expanded before and perhaps caused the existence of the observable universe.
This is similar to the Burning the Cosmic Commons paper by Robin Hanson, which considers whether the astronomical environment we observe might be the leftovers of a migrating extraterrestrial ecosystem that left a long time ago.
I take issue with the assumption that the only two options are perpetual expansion of systems derived from an origin of life out into the universe, and destruction via some filter. The universe just might not be clement to expansion of life or life’s products beyond small islands of habitability, like our world.
You cannot assume that the last few hundred years of our history is typical, or that you can expect similar exponentiation into the future. I would argue that it is a fantastic anomaly and regression to the mean is far more likely.
it doesn’t need to be for us, only for things we build.
Right—but we can see how far apart the stars in our galaxy are, and roughly what it would take to travel between them. Intragalactic travel looks as though it will be relatively trivial—for advanced living systems.
I fail to see how your second sentence follows from your first. What do you mean by relatively trivial?
Human beings in the 1970s were able to throw a hefty chunk of metal (eventually) out of the Solar System, so interstellar travel doesn’t require any future breakthroughs (though such breakthroughs would make it easier). See Project Orion for an early look at the feasibility of interstellar travel without space elevators, uploading, nanotechnology, or AI.
Good post, good explanation. I agree. I saw the recent comment on OB that probably sparked you making this topic, I was thinking of posting it fleetingly before akrasia kicked in. So, thanks.
A throwaway parenthesized remark from RH that nevertheless should be of major importance, because it lowers the credence we should assign to the argument that “UFAI is a good great filter candidate, and a great filter is a good explanation for the Fermi paradox, ergo we should raise our belief in the verisimilitude of UFAI occurring.”
“because it lowers the credence we should assign to the argument that ‘UFAI is a good great filter candidate, and a great filter is a good explanation for the Fermi paradox, ergo we should raise our belief in the verisimilitude of UFAI occurring.’”
Can you identify some people who ever held or promoted this view? I don’t know of any writers who have actually made this argument. It’s pretty absurd on its face, basically saying that instead of there being super-convergence among biological civilizations not to colonize the galaxy, there is super-convergence among autonomous robotic civilizations not to colonize.
You are correct; I cannot.
I did, however, find plenty of refutations of precisely that argument, from the SL4 mailing list to various blogs. Relatedly, Robin Hanson wrote this 2 years ago:
I suppose that having seen some of those refutations, I falsely overestimated the importance of the argument that was being refuted:
I thought that to merit public refutations, there must be a certain number of people who believe it; if there are, I couldn’t identify any.
Maybe the association occurs from “uFAI” being so closely related to “x-risk”, and “x-risk” being so closely related to “the Great Filter”. No transitivity this time.
I think this may cause confusion for some casual observers, so it’s worth reiterating the refutation, but it’s also worth noting that no one has seriously pressed the refuted argument.
There are certainly some who think machine intelligence may account for the Fermi paradox. For instance, here’s George Dvorsky on the topic. Also, the Wikipedia article on the Fermi paradox lists “a badly programmed super-intelligence” as a possible cause.
Thanks for the links Tim. Yes, it certainly gets included in exhaustive laundry lists of Fermi Paradox explanations (Dvorsky has covered many proposed Fermi Paradox solutions, including very dubious ones). The Fermi Paradox wiki page also includes the following weird explanation:
Hang on, we’ve known this for years, right? This is not new information.
Early or late great filter?
I’m currently leaning strongly towards a late filter, because many of the proposed early filters seem not to be such big barriers. For example, we’ve found a bunch of exoplanets in the last decade or so, and several of those seem plausibly in the habitable zone. Life on Earth arose very early in its history, so if life arising is the hard and rare step, I would expect there to be many more hundreds of millions or even billions of years of conditions on Earth being seemingly ripe for it arising and it not doing so.
Obviously maybe the conditions ripe for life being there at all is the tricky part. Another possible hard step is multicellular life.
Correction: multicellular life seems easy; eukaryotic life, not so much.
The Cambrian explosion seems to have happened like a billion or more years after it “could” have happened.
Not necessarily: there are those who argue that the Cambrian explosion might have had more to do with the increase in atmospheric oxygen over geological time than with evolution, and we find vague evidence of multicellular creatures (worm-track type impressions in the seafloor, some strange radially-symmetrical things buried in the sediment) up to a billion and a half years ago. Oxygen makes it easier to have big, energy-gobbling multicellular creatures, and when it turns to ozone it blocks the destructive ultraviolet radiation that would have previously sterilized the above-water land. We also might just see an explosion because that is when hard body parts, which fossilize more easily, appeared.
If the explosion was evolution-driven it could have been due to some kind of runaway arms race between predators and prey, or due to the final establishment of the developmental plans of the various animal phyla that could then be modularly tweaked to enable diversification and rapid evolution.
It’s the move to eukaryotic life, or “complex cells” that’s unique. Multicellularity given eukaryotic status seems easy, but eukaryote status happened only once, and about halfway through the habitable lifetime of the Earth.
Thank you for correcting me on this, it has been some time since I thought of this.
This argument is faulty if there is more than one hard step (and the update is not that strong, although significant, even with one step). See Robin’s paper for details.
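The multiple-hard-steps point can be illustrated numerically: when there are several hard steps, conditioning on all of them finishing inside the habitable window forces the surviving histories to complete each step quickly, so an early origin of life is roughly what we'd expect even if life arising is genuinely hard. A Monte Carlo sketch under toy assumptions (the window length, mean step time, and step count below are invented, not taken from Hanson's paper):

```python
# Each of k hard steps takes an exponentially distributed time whose mean
# is far longer than the habitable window T. Conditional on all k steps
# finishing within T, the completion times behave like uniform order
# statistics on [0, T], so the FIRST step tends to happen early
# (around T/(k+1)) even though every step is equally hard.
import random

random.seed(0)
T = 1.0          # habitable window (arbitrary units)
mean_step = 10.0 # assumed mean time per hard step, much larger than T
k = 2            # assumed number of hard steps

first_step_times = []
for _ in range(200_000):
    steps = [random.expovariate(1.0 / mean_step) for _ in range(k)]
    if sum(steps) < T:                  # condition on success within T
        first_step_times.append(steps[0])

avg_first = sum(first_step_times) / len(first_step_times)
print(round(avg_first, 2))  # close to T/(k+1) = 1/3: the first step is early
```

So with k > 1 hard steps, observing life arise early on Earth doesn't by itself tell us that life arising is the easy part; it's also what the successful histories look like when several steps are hard.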
The jump to multicellular life seems to be pretty easy, actually. To quote Wikipedia:
It seems to be remarkably easy for eukaryotes, with their excessive number of genes (probably accumulated via drift and non-adaptive processes) which can be co-opted for cell-to-cell communication. There are those who argue that prokaryotes are too heavily optimized for efficient, fast reproduction to make huge multicellular complexes, though it turns out that they actually do specialize themselves a bit when growing in colonies or biofilms to provide for the colony as a whole.
Really, it seems like any kind of superintelligent AI, friendly or unfriendly, would result in expanding intelligence throughout the universe. So perhaps a good statement would be: “If you believe the Great Filter is ahead of us, that implies that most civilizations get wiped out before achieving any kind of superintelligent AI, meaning that either superintelligent AI is very hard, or wiping out generally comes relatively early.” (It seems possible that we already got lucky with the Cold War… http://www.guardian.co.uk/commentisfree/2012/oct/27/vasili-arkhipov-stopped-nuclear-war)
Unless intelligent life is already almost-extremely rare, that’s not nearly enough ‘luck’ to explain why everyone else is dead, including aliens who happen to be better at solving coordination problems (imagine SF insectoid aliens).
Yeah, of course.
Katja says:
That’s true—but anthropic evidence seems kind-of trumped by the direct observational evidence that we have already invented advanced technology and space travel, which took many billions of years. From here, expansion shouldn’t be too difficult—unless, of course, we meet more-advanced aliens.
Other civilizations may possibly be expanding too by now—SETI is still too small and young to say much about that directly. Probably not within our galaxy, but only because us and them becoming civilised at the same time would be quite a coincidence.
Robin’s use of the Great Filter argument relies on the SIA, which (if one buys it) allows one to rule out a priori the possibility that the development of beings like us is very rare. Absent that, if one’s prior for the development of life is flatter than for things like nuclear war (it would be much less surprising for less than one in 10^100 planets to evolve intelligent life than for less than 1 in 10^100 civilizations like ours to avoid self-destruction with advanced technology) then you get much less update in favor of future filters.
OTOH, the SIA also strongly supports the possibility that we’re a simulation (if we assign a 1 in 1 million probability to sims being billions of times more numerous, then we should assign more credence to that than to being in the basement), which warps the Great Filter argument into something almost unrecognizable. See this paper for a discussion of the interactions with SIA.
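The parenthetical arithmetic can be made explicit. SIA weights each hypothesis by the number of observers it contains; a sketch using the comment's assumed numbers (a 1-in-a-million prior on simulation, and a billion-fold observer ratio, both assumptions for illustration):

```python
# SIA-style update, toy numbers from the comment above (assumed, not derived):
# prior 1e-6 that simulated observers exist and are N = 1e9 times more
# numerous than "basement" observers. SIA weights hypotheses by observer count.

prior_sim = 1e-6
N = 1e9  # assumed ratio of simulated to non-simulated observers

# Unnormalised SIA weights: prior probability * observer count
w_sim = prior_sim * N          # = 1000
w_base = (1 - prior_sim) * 1   # ~= 1

posterior_sim = w_sim / (w_sim + w_base)
print(posterior_sim)  # ~0.999: the tiny prior is swamped by the observer count
```

This is just the mechanics of the comment's claim: the billion-fold observer count multiplies away the one-in-a-million prior, leaving roughly 1000:1 odds in favour of simulation under SIA.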
On a log plot, the difference between the number of stars in the Solar System and the number of stars in the Milky Way is smaller than the difference between the number of stars in the Milky Way and the number of stars that we can reach before the expansion of the universe removes them from our grasp. How much probability mass would you place on alien civilizations within reachable space? Within our galaxy?
Interesting. However, I’d like to propose an alternative: The real probability of another alien civilisation being inside our universe shard, that is the area of the universe that us humans can possibly explore below the speed of light, is very low. So there might be predatory super intelligence that has wiped out the civilisation that made it, but we’re just not in its universe shard.
When you (and Robin) say “because [UFAI] should be visible,” that seems to imply that there are a significant number of potential observer moments that occur where we can see evidence for a UFAI but the UFAI is not yet able to break us down into spare parts. I’ve always assumed that if a UFAI was created in our lightcone, we would be extinct in very short amount of time. Thus, the assertion “UFAI is not the great filter because we don’t see any” is similar to saying “giant asteroids aren’t the great filter because we don’t see any smashing into earth and extinctifying us.” That is, of course we don’t observe these things because if they occurred, we wouldn’t be around to observe them. Is the assumption that once a UFAI has advanced enough to be observable by us, it would be traveling to devour the rest of its observable universe at near-light-speed obviously silly for some reason I’m missing?
An alien UFAI could be dangerous for us if we find its radio signal as a result of a SETI search. Its messages could contain a bait which could lure us into building a copy of the alien AI, based on schemas it sends us in those messages.
D. Carrigan wrote about it: http://home.fnal.gov/~carrigan/SETI/SETI_Hacker.htm
Some simple natural-selection reasoning implies that UFAI radio signals should dominate all SETI signals, if any exist. And the goal of such a UFAI would be to convert the Earth into another radio beacon which sends its own code further.
My article on the topic: Is SETI dangerous? http://ru.scribd.com/doc/7428586/Risks-of-SETI-Is-SETI-Dangerous
I have a hard time imagining a filter that could’ve wiped out all of a large number of civilizations that got to our current point or further. That’s not to say that future x-risks aren’t an issue—it just feels implausible that no civilization would’ve been able to successfully coordinate with regard to them or avoid developing them. (E.g. bonobos seem substantially more altruistic than humans and are one of the most intelligent non-human species.)
Also, I thought of an interesting response to the Great Filter, assuming that we’re pretty sure it actually is ahead of us: halt all technological development ASAP and stay on this planet chilling out. It’s possible that other civilizations have already done this (having realized that the Great Filter was an issue and there was likely deadly tech in their future); if they had, we wouldn’t know about it.
Of course, that might just mean that 99.9% of all civilizations destroy themselves in the roughly 100 years between the invention of the nuclear bomb and the invention of AGI.
This should be “UFAI can’t be the only great filter.” Nothing says that once you get past a great filter, you are home free. Maybe we already passed a filter on life originating in the first place, or a technology-using species evolving, but UFAI is another filter that still has an overwhelming probability of killing us if nothing else does first.
The fact that UFAI can’t be the only great filter certainly screens off the presence of a great filter as evidence of UFAI being a great filter, but there are good arguments directly from how UFAI would work indicating that it is a pretty big danger.
You’re talking about something being a disaster by our lights, but not a filter that prevents something from Earth colonizing the galaxy. It’s confusing to keep using the term ‘filter’ for concerns about the composition of a future colonizing civilization/the fate of humanity rather than discussion of the Fermi paradox.