It’s too much justification. Don’t assume that this immense savage universe is just a growth medium for whatever microbe wins the game on Earth.
Personally, I treat this as a two-place function: I assume that, by my values, “basically a growth medium for humanity” is a good and useful way to think about the universe. Someone with a different value system, e.g. one placing greater value than I do on non-human life, might prefer that we not think of it that way. Oh well.
This is not about values, it is about realism. I am protesting this presumption that the cosmos is just a dumb desert waiting for transhumanity to come and make it bloom in our image. If a line of argument tells you that you are a 1-in-10^80 special snowflake from the dawn of time, you should conclude that there is something wrong with the argument, not wallow in the ecstatic dread of your implied cosmic responsibility. It would be far more reasonable to conclude that there is some presently unknown property of the universe which either renders such expansion physically impossible, or which actively suppresses it when it begins to occur.
Would you agree that you are carrying out a Pascal’s Muggle line of reasoning using a leverage prior?
http://lesswrong.com/lw/h8k/pascals_muggle_infinitesimal_priors_and_strong/
If so, you’re using it very controversially, compared to disbelieving in a googolplex or an Ackermann number of leverage. A 10^-80 prior is easy for sensory evidence to overcome if your model implies that fewer than a 10^-80 fraction of sentients hallucinate your sensory evidence; this happens every time you flip 266 coins. Conversely, to state that the 10^-80 prior is invincible just restates that you think more than a 10^-80 fraction of sentients are having your experiences, due to Simulation Arguments or some explanation of the Fermi Paradox which involves lots of civilizations like ours within any given Hubble volume. In other words, to say that the 10^-80 prior is not beaten by our sensory experience merely restates that you believe in an alternate explanation for the Fermi Paradox in which our sensory experiences are not rare.
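A quick sanity check of the 266-coin arithmetic (the 266-flip and 10^-80 figures are the ones from the comment above; the few lines of Python are only an illustration):

```python
from math import log2

# Any *specific* sequence of 266 fair coin flips has probability 2^-266.
p_specific_sequence = 2.0 ** -266

# The leverage prior under discussion: roughly one part in 10^80.
leverage_prior = 10.0 ** -80

print(p_specific_sequence)                     # ≈ 8.4e-81
print(p_specific_sequence < leverage_prior)    # True: the observed outcome was "less likely" than 10^-80
print(log2(10) * 80)                           # ≈ 265.75, so 266 flips are enough to cross that threshold
```

Yet after flipping the coins you believe your eyes about which sequence came up, which is the sense in which a 10^-80 prior is routinely overcome by ordinary sensory evidence.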
From the “Desk” of: Snooldorp Gastool V
Attention:
Eliezer Yudkowsky
Machine Intelligence Research Institute
Sir, you will doubtlessly be astonished to be receiving a letter from a species unknown to you, who is about to ask a favor from you.
As fifth rectified knigget of my underclan’s overhive, I have recently come into possession of an ancient Andromedan passkey, guaranteeing the owner access to no less than 2^419 intergalactic credits. My own species is a trans-cladistic harmonic agglomerate and therefore does not satisfy the anghyfieithadwy of Andromedan culture-law, which stipulates that the titular beneficiary of the passkey (who has first claim on half the credits) must be a natural sophont species. However, we have inherited a trust relationship with a Voolhari Legacy adjudication system, in the vicinity of what you know as the Orion OB1 association, and we have verified that your species is the nearest natural sophont with the technical capacity and cognitive inclinations needed to be our partners in this venture. In order to earn your share of this account, your species should beam by radio telescope its genome, cultural history, and at least two hundred (200) characteristic high-resolution brain maps, to:
Right Ascension 05h 55m 10.3053s, Declination +07° 24′ 25.426″
The Voolhari adjudicator will then process and ratify your source code, facilitating the clemnestration of the passkey’s paramancy. The adjudicator has already been notified to expect your transmission.
Please note that, due to the nearby presence of several aging supergiant stars, the adjudicator will most likely be destroyed via supernova within one galactic day (equalling approximately 610,000 Earth years), so this must be done urgently. Please maintain the transmission until we notify you that the clemnestration is complete. Also, again according to Andromedan anghyfieithadwy, the passkey will be invalidated if the beneficiary species becomes postbiological. We therefore request that you halt all technological progress for the duration of the transmission, unless it directly aids the maintenance of the radio signal.
Certain of your epistemologists may be skeptical of our veracity. If the passkey claimed access to 2^(2^419) credits, we would share this skepticism and suspect a Circinian scam. However, a single round of Arcturan Jeopardy easily produces events with a probability of less than 1 in 2^419; therefore, we consider it irrational to doubt our good luck in this case.
We look forward to concluding this venture with an outcome of mutual enrichment and satisfaction!

Feelers touched to yours,
Snooldorp Gastool V, Ensorcelment Overlord, Deneb Octant
I directly state that, for reasons unrelated to the a priori pre-sensory exclusion of any act which can yield 2^419 credits, it seems to me likely that most of the sentients receiving such a message will not be dealing with a genuine offer.
Best comment I read all week. Thanks!
OK, that is excellent.
I want to respond directly now…
It seems to me that winning the leverage lottery (by being at the dawn of an intergalactic civilization) is not like flipping a few hundred coins and getting a random bitstring that happens not to have been generated that way anywhere else in our Hubble volume. It is like flipping a few hundred coins and getting nothing but heads. Any individual random bitstring is improbable, but it is not special, and getting some not-special bitstring out of the coin-flipping process is the expected outcome.
Therefore I think the analogy fails, and the proper conclusion is that models implying a “cosmic manifest destiny” for present-day Earthlings are wrong. How this relates to the whole Mugging/Muggle dialectic I do not know; I haven’t had time to see what’s really going on there. I am presently more interested in the practical consequences of this conclusion for our model of the universe than I am in the epistemology.
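To spell out why all-heads is special while a random-looking string is not, here is a minimal Bayesian sketch. The “trick coin” hypothesis and its 2^-20 prior are made-up illustration values, not anything from the thread; the point is only that both strings are equally improbable under the fair-coin model, but only all-heads is also predicted by a simple rival model.

```python
from math import inf

n = 300                       # number of flips, "a few hundred" as in the comment above
log2_prior_odds_trick = -20   # assumed prior odds of 2^-20 for a "trick coin" that always shows heads

def log2_posterior_odds_trick(all_heads: bool) -> float:
    """Log-base-2 posterior odds of the trick-coin hypothesis after seeing the outcome."""
    log2_lik_fair = -n                           # any specific string has probability 2^-300 under a fair coin
    log2_lik_trick = 0 if all_heads else -inf    # the trick coin only ever produces all heads
    return log2_prior_odds_trick + (log2_lik_trick - log2_lik_fair)

print(log2_posterior_odds_trick(True))    # -20 + 300 = 280: overwhelming evidence for the trick coin
print(log2_posterior_odds_trick(False))   # -inf: a random-looking string never favors the trick coin
```

On this reading, the comment’s claim is that “we are at the dawn of an intergalactic civilization” is the all-heads observation: its improbability only matters because rival models of the Fermi Paradox would also account for what we see.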
Yeah, exactly. The issue is not so much the 10^-80 prior as the comparison between the 10^-80 chance of obtaining that outcome randomly and the much, much larger prior probability of obtaining it because, say, you can’t visually discriminate between the two sides of the coin.
My own position regarding this is that we haven’t yet really even started properly thinking about how to use anthropic evidence. E.g. you’re seemingly treating every single individual consciousness in the history of the universe as equally probable to have been ‘you’, but that by itself assumes that there exists a well-defined thing called ‘individual consciousness’ rather than a confusing combination of different processes in your brain… That each must be given equal weight is an additional step that I don’t think can be properly supported (e.g. if MWI is correct and my consciousness splits into a trillion different people every second, some of which merge back together, what anthropic weight is assigned to my past self vs. my future self?)
Another possibility would be that, for some reason, anthropic evidence is heavily tilted to favour the early universe—that it’s more likely to ‘be’ someone in the early universe, the earlier the better (e.g. it’s easier to simulate the early universe than the late universe, hence more Universe-simulators do the former than the latter).
Or anthropic evidence could be tilted to favour simple intelligences. (e.g. easier to simulate simple intelligences than complex ones)
(The above is not meant to imply that I support the simulation hypothesis. I’m just using it as a way of demonstrating how some anthropic calculations may be off.)
You could think of the “utilities” in your utilitarianism the same way. Why would one unit of global utility that you can sacrifice be able to produce a 10^80-ish gain in utility? It’s unlikely that you would come across a unit of utility you can sacrifice so profitably (if utility is bounded and doesn’t just stack up exponentially in its influence ad infinitum). This removes the anthropic considerations from the leverage problem.
Since utility isn’t an inherent concept in the physical laws of the universe but just a calculation inside our minds, I don’t see your meaning here: you don’t “come across” a unit of utility to sacrifice, you seek it out. An architect who sets out to design a skyscraper is more likely to succeed in designing a skyscraper than a random monkey doodling.
To estimate the architect’s chances of success I see no point in starting out by thinking “how likely is a monkey to be able to randomly design a skyscraper?”.
It seems to me that there’s considerably less search in “not buy a Porsche” than in “build a skyscraper”.
Let’s suppose you value paperclips. Someone takes 10 paperclips from you and unbends them, but later makes 10^90 paperclips thanks to their use of those 10. In this hypothetical universe, those 10 paperclips are very special, and if someone gives you the coordinates of a paperclip and claims it’s one of those legendary 10 (the ones that are going to be turned into 10^90 paperclips), you’d be wise to be quite skeptical—you need evidence that the paperclips you’re looking at occupy such an unusual position within the totality of paperclips. edit: or if someone gives you papers with paperclip marks left on them and says these are the papers that were held together by said legendary paperclips.
edit2: although I do agree—if we actually seek something out, we may be able to overcome very large priors against it. In this case, though, the issue is the claim that our existing intrinsic values necessarily stand in a very unusual relation to the vast majority of what’s intrinsically valuable.
What sensory experience are you talking about?
When it comes to the Fermi Paradox, we have an easy third option with a high prior for which a small amount of evidence is starting to accumulate: we are simply not that special in the universe. Life, perhaps even sapient life, has happened elsewhere before, and will happen elsewhere after we have either died off or become a permanent fixture of the universe. There may already be other species who are permanent fixtures of the universe, and have chosen for one reason or another not to interfere with our development.
In fact, I would figure that “don’t touch life-infested planets” might be a very common moral notion among trans-$SPECIES_NAME races, a kind of intergalactic social contract: any race could have been the one whose existence would have been prevented by strip-mining their planet or paving it over in living quarters or whatever they do with planets, so everyone refrains from messing with worlds that have evolution going on.
As to the evidence, well, as time goes on we’re finding out that Earth is less and less of an astronomical (ahaha) rarity compared to what we thought it was. Turns out liquid-water planets aren’t very common, but they’re common enough for there to be large numbers of them in our galaxy.
Given two billion planets and billions upon billions of years for evolution to work, I think we should give some weight to the thought that someone else is out there, even though they may be nowhere near us and not be communicating with us at all.
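A minimal back-of-the-envelope sketch of that weight (the two-billion figure is the one from the comment above; the per-planet probability is a pure placeholder, not an estimate):

```python
# Chance that at least one other candidate planet hosts sapient life,
# under an assumed, purely illustrative per-planet probability.
N = 2_000_000_000     # candidate planets in the galaxy (figure from the comment above)
p = 1e-8              # hypothetical per-planet probability of sapient life arising

p_nobody_else = (1 - p) ** N
print(1 - p_nobody_else)   # ≈ 1 - e^-20 ≈ 0.999999998: even a tiny p makes "someone else" very likely
```

The point is only that for any per-planet probability not vastly smaller than 1/N, someone else is almost certainly out there; it says nothing about whether they are near us or communicating.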
the cosmos is just a dumb desert waiting for transhumanity to come and make it bloom in our image.

What, no aliens? But I was really looking forward to meeting them!
It would be far more reasonable to conclude that there is some presently unknown property of the universe which either renders such expansion physically impossible, or which actively suppresses it when it begins to occur.

Or one that just makes it very quiet, or causes it to happen at the same time to multiple species. There could already be someone out there who “went trans-flooberghian” and has begun expanding, but does so slowly and quietly, responsibly conserving resources, on the opposite side of the galaxy from us. How would we know?
Oh, okay, I understand what you mean now. Sorry for the misplaced “rebuttal”. I don’t understand this topic well enough to have a real opinion about the Great Filter, so I think I’ll butt out.
It would be far more reasonable to conclude that there is some presently unknown property of the universe which either renders such expansion physically impossible, or which actively suppresses it when it begins to occur.

I would contend that it’s the simple, KNOWN attributes of the universe that render expansion past islands of habitability implausible.
Don’t assume that this immense savage universe is just a growth medium for whatever microbe wins the game on Earth.

Maybe not assume. But I’ll most likely conclude that that is what it is, after analysis of my preferences, my philosophy, and the multiverse as best I can understand it.
THIS so many times over. I can never understand why the idea that replicating systems might just never expand past small islands of clement circumstances (like, say, the surface of the Earth) gets so readily dismissed in these parts.
People in these parts don’t necessarily have in mind the spread of biological replicators. Spreading almost any kind of computing machinery would be good enough to count, because it could host simulations of humans or other worthwhile intelligent life.
(Note that the question of whether simulated people are actually conscious is not that relevant to the question of whether this kind of expansion will happen. What’s relevant is whether the relevant decision makers would come to think they are conscious. For example, even if simulated people aren’t actually conscious, after interacting all their lives with simulated people integrated into society, most non-simulated people would probably think they are conscious, and thus worth sending out to colonize space. And the simulated people themselves will definitely think they are conscious.)
I wasn’t limiting myself to biology, hence talking about ‘replicating systems’. I was more going for the possibility that the sorts of places where non-biologically-descended replicators can replicate are also very limited, possibly not terribly much wider-ranging than those in which biological replicators can work.
We can send one-off things that work for a long time all over the place, but all you need for them not to establish themselves somewhere is for the successful replacement rate to be less than one.
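The replacement-rate point is the standard branching-process result: if the expected number of successful successor colonies per colony is below one, the lineage almost surely dies out, however long each individual probe keeps working. A toy simulation, with the offspring distribution and all numbers purely illustrative:

```python
import random

def lineage_survives(p_two_offspring: float, generations: int = 500, cap: int = 10_000) -> bool:
    """One run of a toy branching process starting from a single colony.
    Each colony founds two successor colonies with probability p_two_offspring,
    otherwise none, so the mean replacement rate is 2 * p_two_offspring."""
    population = 1
    for _ in range(generations):
        if population == 0:
            return False      # lineage went extinct
        if population > cap:
            return True       # effectively established; stop simulating
        population = sum(2 for _ in range(population) if random.random() < p_two_offspring)
    return population > 0

# Assumed, purely illustrative mean replacement rates per colony:
for rate in (0.8, 1.2):
    survived = sum(lineage_survives(rate / 2) for _ in range(200))
    print(rate, survived / 200)   # ~0.0 when the rate is below one; roughly a third of runs when it is 1.2
```

So “we can launch long-lived probes everywhere” and “self-sustaining expansion actually takes hold” come apart exactly when that replacement rate dips below one.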