For one—it hasn’t already happened. And there is no public research suggesting that it is much closer to happening now than it has ever been. The first claims of impending human-level AGI were made ~50 years ago. Much money and research effort have been expended since then, but it hasn’t happened yet. AGI researchers have lost a lot of credibility because of this. Basically, extraordinary claims have been made many times, and none have panned out to the generality with which they were made.
You yourself just made an extraordinary claim! Do you have a 5-year-old at hand? Because there are some pretty “clever” conversation bots out there nowadays...
With regards to:
the most important such inherent power is the one that makes Folding@home work so well—the ability to simply copy the algorithm into more hardware, if all else fails, and have the copies cooperate on a problem.
Games abound on LessWrong involving AIs which can simulate entire people—and even AIs which can simulate a billion billion billion … billion billion people simultaneously! Folding@home is the most powerful computing cluster on this planet at the moment, and it can simulate protein folding over an interval of about 1.5 milliseconds (according to Wikipedia). So, as I said, very big claims are casually made by AGI folk, even in passing and in the face of all reason, with little appreciation for the short-term ETAs attached to these claims (~20-70 years… and note that the ETA was ~20-70 years about 50 years ago as well).
I believe AGI is probably possible to construct, but not that it will be as easy and FOOMy as enthusiasts have always been wont to suggest.
For one—it hasn’t already happened. And there is no public research suggesting that it is much closer to happening now than it has ever been. The first claims of impending human-level AGI were made ~50 years ago. Much money and research effort have been expended since then, but it hasn’t happened yet.
The fact that it hasn’t happened yet is not evidence against its happening if you cannot survive its happening. If you cannot survive its happening, then the fact that it has not happened in the last 50 years is not just weaker evidence than it would otherwise be—it is not evidence at all, and your probability that it will happen now, after 50 years, should be the same as your probability would have been at 0 years.
In other words, if the past behavior of a black box is subject to strong-enough observational selection effects, you cannot use its past behavior to predict its future behavior: you have no choice but to open the black box and look inside (less metaphorically, to construct a causal model of the behavior of the box), which you have not done in the comment I am replying to. (Drawing an analogy with protein folding does not count as “looking inside”.)
Of course, if your probability that the creation of a self-improving AGI will kill all the humans is low enough, then what I just said does not apply. But that is a big if.
The fact that it hasn’t happened yet is not evidence against its happening if you cannot survive its happening. If you cannot survive its happening, then the fact that it has not happened in the last 50 years is not just weaker evidence than it would otherwise be—it is not evidence at all, and your probability that it will happen now, after 50 years, should be the same as your probability would have been at 0 years.
Do you take the Fermi paradox seriously, or is the probability of your being destroyed by a galactic civilization, assuming that one exists, low enough? The evidential gap w.r.t. ET civilization spans billions of years—but this is not evidence at all according to the above.
Neither do I believe in the coming of an imminent nuclear winter, even though (a) it would leave me dead, and (b) I nevertheless take the absence of such a disaster over the preceding decades to be nontrivial evidence that it’s not on its way.
Say you’re playing Russian Roulette with a 6-round revolver which either has 1 or 0 live rounds in it. Pull the trigger 4 times—every time you end up still alive. According to what you have said, your probability estimates for either
there being a single round in the revolver or
the revolver being unloaded
should be the same as before you had played any rounds at all. Imagine pulling the trigger 5 times and still being alive—is there a 50⁄50 chance that the gun is loaded?
I find the technique you’re suggesting interesting, but I don’t employ it.
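For concreteness, here is a minimal sketch of the ordinary (non-anthropic) Bayesian update that the question above appeals to. It assumes the prior of 0.5 from the scenario and a re-spin of the cylinder before every pull, so each pull survives with probability 5/6 if the gun is loaded:

```python
# Ordinary (non-anthropic) Bayesian update for the Russian-roulette question above.
# Assumptions from the scenario: prior 0.5 that the revolver holds one round, and the
# cylinder is re-spun before every pull, so P(click | loaded) = 5/6 on each pull.

def p_loaded_given_survival(k, prior=0.5, p_click_if_loaded=5/6):
    """Posterior probability that the gun is loaded after surviving k pulls."""
    like_loaded = p_click_if_loaded ** k   # P(k clicks | loaded)
    like_empty = 1.0                       # P(k clicks | unloaded)
    return prior * like_loaded / (prior * like_loaded + (1 - prior) * like_empty)

for k in (0, 4, 5):
    print(k, round(p_loaded_given_survival(k), 3))
# 0 -> 0.5, 4 -> 0.325, 5 -> 0.287: on this account, surviving is evidence of an empty gun.
```

Whether a player who could not have survived the alternative outcome is entitled to this update is exactly the point in dispute below.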
(Drawing an analogy with protein folding does not count as “looking inside”.)
Tiiba suggested that distributive capability is the most important of the “powers inherent to all computers”. Protein folding simulation was an illustrative example of a cutting edge distributed computing endeavor, which is still greatly underpowered in terms of what AGI needs to milk out of it to live up to FOOMy claims. He wants to catch all the fish in the sea with a large net, and I am telling him that we only have a net big enough for a few hundred fish.
edit: It occurred to me that I have written with a somewhat interrogative tone and many examples. My apologies.
edit: It occurred to me that I have written with a somewhat interrogative tone and many examples. My apologies.
Examples are great. The examples a person supplies are often more valuable than their general statements. In philosophy, one of the most valuable questions one can ask is ‘can you give an example of what you mean by that?’
Say you’re playing Russian Roulette with a 6-round revolver which either has 1 or 0 live rounds in it. Pull the trigger 4 times—every time you end up still alive. According to what you have said, your probability estimates for either
there being a single round in the revolver or
the revolver being unloaded
should be the same as before you had played any rounds at all.
If before every time I pull the trigger, I spin the revolver in such a way that it comes to a stop in a position that is completely uncorrelated with its pre-spin position, then yes, IMO the probability is the same as before I had played any rounds at all (namely .5).
If an evil demon were to adjust the revolver after I spin it and before I pull the trigger, that is a selection effect. If the demon’s adjustments are skillful enough and made for the purpose of deceiving me, my trigger pulls are no longer a random sample from the space of possible outcomes.
Probability is not a property of reality but rather a property of an observer. If a particular observer is not robust enough to survive a particular experiment, the observer will not be able to learn from the experiment the same way a more robust observer can. As I play Russian roulette, the P(gun has bullet) assigned by someone watching me at a safe distance can change, but my P(gun has bullet) cannot change because of the law of conservation of expected evidence.
In particular, a trigger pull that does not result in a bang does not decrease my probability that the gun contains a bullet because a trigger pull that results in a bang does not increase it (because I do not survive a trigger pull that results in a bang).
In particular, a trigger pull that does not result in a bang does not decrease my probability that the gun contains a bullet because a trigger pull that results in a bang does not increase it (because I do not survive a trigger pull that results in a bang).
I’m not sure this would work in practice. Let’s say you’re betting on this particular game, with the winnings/losses being useful in some way even if you don’t survive the game. Then, after spinning and pulling the trigger a million times, would you still bet as though the odds were 1:1? I’m pretty sure that’s not a winning strategy, when viewed from the outside (therefore, still not winning when viewed from the inside).
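A quick outside-view check of this betting argument, as a sketch: simulate many players, half of whom get a revolver holding one round and half an empty one, and look only at those who survive k pulls (re-spinning before every pull, as in the scenario; the player count and k are made-up numbers):

```python
# Simulate the outside view of the betting argument above: among players who survive
# k pulls, how many were holding a loaded gun? (Toy numbers; re-spin before each pull.)
import random

def loaded_fraction_among_survivors(n_players=200_000, k=20, seed=0):
    rng = random.Random(seed)
    survivors_loaded = survivors_empty = 0
    for i in range(n_players):
        loaded = (i % 2 == 0)  # half the revolvers hold one round
        alive = all(rng.random() < 5/6 for _ in range(k)) if loaded else True
        if alive:
            if loaded:
                survivors_loaded += 1
            else:
                survivors_empty += 1
    return survivors_loaded / (survivors_loaded + survivors_empty)

print(loaded_fraction_among_survivors())      # ~0.025
print((5/6)**20 / ((5/6)**20 + 1))            # analytic posterior, also ~0.025
```

After a million pulls, as in the comment above, survivors holding a loaded gun are for all practical purposes nonexistent, which is why betting at 1:1 odds loses.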
You have persuaded me that my analysis in grandparent of the Russian-roulette scenario is probably incorrect.
The scenario of the black box that responds with either “heads” or “tails” is different because in the Russian-roulette scenario, we have a partial causal model of the “bang”/“no bang” event. (In particular, we know that the revolver contains either one bullet or zero bullets.) Apparently, causal knowledge can interact with knowledge of past behavior to produce knowledge of future behavior even if the knowledge of past behavior is subject to the strongest kind of observational selection effects.
Your last point was persuasive… though I still have some uneasiness about accepting that k pulls of the trigger, for arbitrary k, still gives the player nothing.
Would it be within the first AGI’s capabilities to immediately effect my destruction before I am able to update on its existence—provided that (a) it is developed by the private sector and not e.g. some special access DoD program, and (b) ETAs up to “sometime this century” are accurate? I think not, though I admit to being fairly uncertain.
I acknowledge that the line of reasoning presented in my original comment was not of high caliber—though I still dispute Tiiba’s claim regarding an AI advanced enough to scrape by in conversation with a 5-year-old, as well as the claim that distributive capabilities are the greatest power at play here.
Would it be within the first AGI’s capabilities to immediately effect my destruction before I am able to update on its existence . . .?
I humbly suggest that the answer to your question would not shed any particular light on what we have been talking about because even if we would certainly have noticed the birth of the AGI, there’s a selection effect if it would have killed us before we got around to having this conversation (i.e. if it would have killed us by now).
The AGI’s causing our deaths is not the only thing that would cause a selection effect: the AGI’s deleting our memories of the existence of the AGI would also do it. But the AGI’s causing our deaths is the most likely selection-effecting mechanism.
A nice summary of my position is that when we try to estimate the safety of AGI research done in the past, the fact that P(we would have noticed our doom by now|the research killed us or will kill us) is high does not support the safety of the research as much as one might naively think. For us to use that fact the way we use most facts, it would have to be the case that, had the research doomed us, we would not only have noticed our doom but also survived long enough to have this conversation.
Actually, we can generalize that last sentence: for a group of people correctly to use the outcome of past AGI research to help assess the safety of AGI, awareness of both possible outcomes (the good outcome and the bad outcome) of the past research must be able to reach the group and in particular must be able to reach the assessment process. More precisely, if there is a mechanism that is more likely to prevent awareness of one outcome from reaching the assessment process than the other outcome, the process has to adjust for that, and if the very existence of the assessment process completely depends on one outcome, the adjustment completely wipes out the “evidentiary value” of awareness of the outcome. The likelihood ratio gets adjusted to 1. The posterior probability (i.e., the probability after updating on the outcome of the research) that AGI is safe is the same as the prior probability.
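A minimal formalization of the claim in the paragraph above, under its central assumption that the assessment process cannot exist at all given the bad outcome (H = “the past research was safe”, D = “no doom has been observed”, A = “this assessment process exists”, with A possible only when D holds):

```latex
% H = "the past AGI research was safe", D = "no doom has been observed",
% A = "this assessment process exists"; assume A is possible only when D holds.
\[
\underbrace{\frac{P(D \mid H)}{P(D \mid \lnot H)} \gg 1}_{\text{naive likelihood ratio}}
\qquad\text{but}\qquad
\frac{P(D \mid H, A)}{P(D \mid \lnot H, A)} = \frac{1}{1} = 1
\quad\Longrightarrow\quad
P(H \mid D, A) = P(H \mid A).
\]
```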
Your last point was persuasive… though I still have some uneasiness about accepting that k pulls of the trigger, for arbitrary k, still gives the player nothing.
Like I said yesterday I retract my position on the Russian roulette. (Selection effects operate, I still believe, but not to the extent of making past behavior completely useless for predicting future behavior.)
I intentionally delayed this reply (by > 5 days) to test the hypothesis that slowing down the pace of a conversation on LW will improve it.
Do you take the Fermi paradox seriously, or is the probability of your being destroyed by a galactic civilization, assuming that one exists, low enough?
When we try to estimate the number of technological civilizations that evolved on main-sequence stars in our past light cone, we must not use the presence of at least one tech civ (namely, us) as evidence of the presence of another one (namely, ET) because if that first tech civ had not evolved, we would have no way to observe that outcome (because we would not exist). In other words, we should pretend we know nothing of our own existence or the existence of clades in our ancestral line, in particular, the existence of the eukaryotes and the metazoa, when trying to estimate the number of tech civs in our past light cone.
I am not an expert on ETIs, but the following seems (barely) worth mentioning: the fact that prokaryotic life arose so quickly after the formation of the Earth’s crust is IMHO significant evidence that there is simple (unicellular or similar) life in other star systems.
The evidential gap w.r.t. ET civilization spans billions of years—but this is not evidence at all according to the above.
It is evidence, but less strong than it would be if we fail to account for observational selection effects. Details follow.
The fact that there are no obvious signs of an ET tech civ, e.g., alien space ships in the solar system, is commonly believed to be the strongest sign that there were no ET tech civs in our past light cone with the means and desire (specifically, a desire held by at least part of the civ and not thwarted by the rest of the civ) to expand outwards into space. Well, it seems to me that there is a good chance that we would not have survived an encounter with the leading wave of such an expansion, and therefore the lack of evidence of such an expansion should not cause us to update our probability of its existence as much as it would if we certainly could have survived the encounter. Still, the absence of obvious signs (such as alien space ships in the solar system) of ET is the strongest piece of evidence against the hypothesis of ET tech civs in our past light cone: radio waves, for example, can be detected by us over a distance of only thousands of light years, whereas we should be able to detect colonization waves that originated billions of light years away, because once a civilization acquires the means and desire to expand, what would stop it?
In summary, observational selection effects blunt the force of the Fermi paradox in two ways:
Selection effects drastically reduce the (likelihood) ratio by which the fact of the existence of our civilization increases our probability of the existence of another civilization.
The lack of obvious signs (such as alien space ships) of ET in our immediate vicinity is commonly taken as evidence that drastically lowers the probability of ET. Observational selection effects mean that P(ET) is not lowered as much as we would otherwise think.
(end of list)
So, yeah, to me, there is no Fermi paradox requiring explanation, nor do I expect any observations made during my lifetime to create a Fermi paradox.
When we try to estimate the number of technological civilizations that evolved on main-sequence stars in our past light cone, we must not use the presence of at least one tech civ (namely, us) as evidence of the presence of another one (namely, ET) because if that first tech civ had not evolved, we would have no way to observe that outcome (because we would not exist).
If there were two universes, one very likely to evolve life and one very unlikely, and all we knew was that we existed in one, then we are much more likely to exist in the first universe. Hence our own existence is evidence about the likelihood of life evolving, and there still is a Fermi paradox.
If there were two universes, one very likely to evolve life and one very unlikely, and all we knew was that we existed in one, then we are much more likely to exist in the first universe.
Agree.
Hence our own existence is evidence about the likelihood of life evolving [in the situation in which we find ourselves].
Disagree because your hypothetical situation requires a different analysis than the situation we find ourselves in.
In your hypothetical, we have somehow managed to acquire evidence for the existence of a second universe and to acquire evidence that life is much more likely in one than in the other.
Well, let us get specific about how that might come about.
Our universe contains gamma-ray bursters that probably kill any pre-intelligence-explosion civilization within ten light-years or so of them, and our astronomers have observed the rate * density at which these bursters occur.
Consequently, we might discover that one of the two universes has a much higher rate * density of bursters than the other universe. For that discovery to be consistent with the hypothetical posed in parent, we must have discovered that fact while somehow becoming or remaining completely ignorant as to which universe we are in.
We might discover further that although we have managed to determine the rate * density of the bursters in the other universe, we cannot travel between the universes. We must suppose something like that because the hypothetical in parent requires that no civilization in one universe can spread to the other one. (We can infer that requirement from the analysis and the conclusion in parent.)
I hope that, now that I have gotten specific and fleshed out your hypothetical a little, you have become open to the possibility that your hypothetical situation is different enough from the situation in which we find ourselves for us to reach a different conclusion.
In the situation in which we find ourselves, one salient piece of evidence we have for or against ET in our past light cone is the fact that there is no obvious evidence of ET in our vicinity, e.g., here on Earth or on the Moon or something.
And again, this piece of evidence is really only evidence against ETs that would let us continue to exist if their expansion reached us, but there’s a non-negligible probability that an ET would in fact let us continue to exist, because there is no strong reason for us to be confident that the ET would not.
In contrast to the situation in which we find ourselves, the hypothetical posed in parent contains an important piece of evidence in addition to the piece I just described. It plays the same role as the evidence in the Russian-roulette case: whatever evidence we used to conclude that the revolver contains either zero or one bullet is an additional important piece of evidence, which, when combined with the results of 1,000,000 iterations of Russian roulette, would cause a perfect Bayesian reasoner to reach a different conclusion than it would reach if it knew nothing of the causal mechanism between {a spin of the revolver followed by a pull of the trigger} and {death or not-death}.
In your hypothetical, we have somehow managed to acquire evidence for the existence of a second universe and to acquire evidence that life is much more likely in one than in the other.
These need not be actual universes, just hypothetical universes that we have assigned a probability to.
Given most priors over possible universes, the fact we exist will bump up the probability of there being lots of life. The fact we observe no life will bump down the probability, but the first effect can’t be ignored.
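A toy version of the two-universes update being described here (this is the very point in dispute), with made-up numbers for how likely observers like us are to arise under each hypothesis:

```python
# Toy two-universes update (SIA/FNC-flavoured, i.e. the reasoning under dispute here).
# The conditional probabilities are made up purely for illustration.
p_friendly = 0.5                # prior that we are in the life-friendly universe
p_exist_given_friendly = 0.1    # hypothetical chance that observers like us arise there
p_exist_given_hostile = 0.001   # hypothetical chance in the life-hostile universe

posterior = (p_friendly * p_exist_given_friendly) / (
    p_friendly * p_exist_given_friendly + (1 - p_friendly) * p_exist_given_hostile
)
print(round(posterior, 3))      # ~0.99: on this reasoning, our existence strongly
                                # favours the hypothesis that life is likely to evolve
```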
Hence our own existence is evidence about the likelihood of life evolving [you write in great grandparent]
So in your view there is zero selection effect in this probability calculation?
In other words, our own existence increases your probability of there being lots of life just as much as the existence of an extraterrestrial civilization would?
In the previous sentence, please interpret “increase your probability just as much as” as “is represented by the same likelihood ratio as”.
And the existence of human civilization increases your P(lots of life) just as much as it would if you were an immortal invulnerable observer who has always existed and who would have survived any calamity that would have killed the humans or prevented the evolution of humans?
Finally, is there any probability calculation in which you would adjust the results of the calculation to account for an observational selection effect?
Would you for example take observational selection effects into account in calculating the probability that you are a Boltzmann brain?
I can get more specific with that last question if you like.
So in your view there is zero selection effect in this probability calculation?
In other words, our own existence increases your probability of there being lots of life just as much as the existence of an extraterrestrial civilization would?
Depends how independent the two are. Also, myself existing increases the probability of human-like life existing, while the alien civilization increases the probability of life similar to themselves existing. If we’re similar, the combined effects will be particularly strong for theories of convergent evolution.
The line of reasoning for immortal observers is similar.
Finally, is there any probability calculation in which you would adjust the results of the calculation to account for an observational selection effect?
I thought that was exactly what I was doing? To be technical, I was using a variant of full non-indexical conditioning (FNC), which is an unloved bastard son of the SIA (self-indication assumption).
Can I get a yes or no on my question of whether you take the existence of human civilization to be just as strong evidence for the probabilities we have been discussing as you would have taken it to be if you were a non-human observing human civilization from a position of invulnerability?
Actually, “invulnerability” is not the right word: what I mean is, “if you were a non-human whose coming into existence was never in doubt and whose ability to observe the non-appearance of human civilization was never in doubt.”
Can I get a yes or no on my question of whether you take the existence of human civilization to be just as strong evidence for the probabilities we have been discussing as you would have taken it to be if you were a non-human observing human civilization from a position of invulnerability?
If the existence of the “invulnerable non-human” (INH) is completely independent from the existence of human-like civilizations, then:
If the INH gets the information “there are human-like civilizations in your universe” then this changes his prior for “lots of human-like civilizations” much less than the update we get from noticing that we exist.
If the INH gets the information “there are human-like civilizations in your immediate neighbourhood”, then his prior is updated pretty similarly to ours.
Thanks for answering my question. I repeat that you and I are in disagreement about this particular application of observational selection effects, a.k.a., the anthropic principle and would probably also disagree about their application to an existential risk.
I notice that last month saw the publication of a new paper, “Anthropic Shadow: Observation Selection Effects and Human Extinction Risk” by Bostrom, Sandberg and my favorite astronomy professor, Milan M. Ćirković.
As an aid to navigation, let me link to the ancestor to this comment at which the conversation turned to observation selection effects.
I have been meaning to write a post summarizing “Anthropic Shadow”; would anyone besides you and me be interested in it?
I think you should write that post because thoughtful respected participants on LW use the anthropic principle incorrectly, IMHO. The gentleman who wrote great grandparent for example is respected enough to have been invited to attend SIAI’s workshop on decision theory earlier this year. And thoughtful respected participant Cousin It probably misapplied the anthropic principle in the first paragraph of this comment. I say “probably” because the context has to do with “modal realism” and other wooly thinking that I cannot digest, but I have not been able to think of any context in which Cousin It’s “every passing day without incident should weaken your faith in the anthropic explanation” is a sound argument.
(Many less thoughtful or less respected participants here have misapplied or failed to take into account the anthropic principle, too.)
And thoughtful respected participant Cousin It probably misapplied the anthropic principle in the first paragraph of this comment.
It has been a while since I skimmed “Anthropic Shadow”, but IIRC a key point or assumption in their formula was that the more recently a risk would have occurred, the less likely ‘we’ are to have observed it occurring, because more recent = less time for observers to recover from the existential risk or for fresh observers to evolve. This suggests a weak version: the longer we exist, the fewer risks whose absence we need to explain by appeal to an observer-based principle.
(But thinking about it, maybe the right version is the exact opposite. It’s hard to think about this sort of thing.)
I’ve read “Anthropic Shadow” a few times now. I don’t think I will write a post on it. It does a pretty good job of explaining itself, and I couldn’t think of any uses for it.
The Shadow only biases estimates when some narrow conditions are met:
your estimate has to be based strictly on your past
of a random event
the events have to be very destructive to observers like yourself
and also rare to begin with
So it basically only applies to global existential risks, and there aren’t that many of them. Nor can we apply it to interesting examples like the Singularity, because that’s not a random event but dependent on our development.
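For anyone who wants to see the shadow in miniature, here is a toy simulation (drastically simpler than the paper’s model, with made-up numbers): observers are assumed to exist at the end of a history only if no catastrophe struck during the last few “recovery” periods, and among such histories the observed catastrophe frequency sits below the true rate:

```python
# Toy illustration of the anthropic-shadow bias (not the paper's actual model).
# Assumption: observers exist only if the last `recovery` centuries were catastrophe-free.
import random

def mean_observed_rate(p_cat=0.1, centuries=50, recovery=10, trials=100_000, seed=0):
    rng = random.Random(seed)
    rates = []
    for _ in range(trials):
        history = [rng.random() < p_cat for _ in range(centuries)]
        if not any(history[-recovery:]):          # observers survived to look back
            rates.append(sum(history) / centuries)
    return sum(rates) / len(rates)

print(mean_observed_rate())   # ~0.08, below the true per-century probability of 0.1
```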
Thanks for answering my question. I repeat that you and I are in disagreement about this particular application of observational selection effects, a.k.a., the anthropic principle and would probably also disagree about their application to an existential risk.
Indeed. I, for one, do not worry about the standard doomsday argument, and such. I would argue that SIA is the only consistent anthropic principle, but that’s a long argument, and a long post to write one day.
Fortunately, the Anthropic shadow argument can be accepted whatever type of anthropic reasoning you use.
The fact that it hasn’t happened yet is not evidence against its happening if you cannot survive its happening. If you cannot survive its happening, then the fact that it has not happened in the last 50 years is not just weaker evidence than it would otherwise be—it is not evidence at all.
I’m not convinced that works that way.
Suppose I have the following (unreasonable, but illustrative) prior: 0.5 for P=(AGI is possible), 1 for Q=(if AGI is possible, then it will occur in 2011), 0.1 for R=(if AGI occurs, then I will survive), and 1 for S=(I will survive if AGI is impossible or otherwise fails to occur in 2011). The events of interest are P and R.
P,R: 0.05. I survive.
P,~R: 0.45. I do not survive. (This outcome will not be observed.)
~P: 0.5. I survive. R is irrelevant.
After I observe myself to still be alive at the end of 2011 (which, due to anthropic bias, is guaranteed provided I’m there to make the observation), my posterior probability for P (AGI is possible) should be 0.05/(0.05+0.5) = 5⁄55 = 1⁄11 = 0.0909..., which is considerably less than the 0.5 I would have estimated beforehand.
By updating on my own existence, I infer a lower probability of the possibility of something that could kill me.
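For what it is worth, the arithmetic above checks out; a minimal re-computation using the stated priors:

```python
# Re-computing the example above: P(AGI possible) = 0.5, Q = 1, P(survive | AGI) = 0.1.
p_possible = 0.5
p_survive_given_agi = 0.1

joint = {
    "P, R  (AGI occurs, I survive)": p_possible * p_survive_given_agi,           # 0.05
    "P, ~R (AGI occurs, not observed)": p_possible * (1 - p_survive_given_agi),  # 0.45
    "~P    (no AGI, I survive)": 1 - p_possible,                                 # 0.50
}
p_observed_survival = (joint["P, R  (AGI occurs, I survive)"]
                       + joint["~P    (no AGI, I survive)"])
print(joint["P, R  (AGI occurs, I survive)"] / p_observed_survival)   # 0.0909... = 1/11
```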
Well, yeah, if we knew what you call Q (that AGI would occur in 2011 or would never occur), then our surviving 2011 would mean that AGI will never occur.
But your example fails to shed light on the argument in great grandparent.
If I may suggest a different example, one which I believe is analogous to the argument in great grandparent:
Suppose I give you a box that displays either “heads” or “tails” when you press a button on the box.
The reason I want you to consider a box rather than a coin is that one can make a pretty good estimate of the “fairness” of a coin just by looking at it and holding it in one’s hand.
Do not make any assumptions about the “fairness” of the box. Do not for example assume that if you push the button a million times, the box would display “heads” about 500,000 times.
What is your probability that the box will display “heads” when you push the button?
.5 obviously because even if the box is extremely “unfair” or biased, you have no way to know whether it is biased towards “heads” or biased towards “tails”.
Suppose further that you cannot survive the box coming up “tails”.
Now suppose you push the button ten times and of course it comes up “heads” all ten times.
Updating on the results of your first ten button-presses, what is your probability that it will come up “heads” if you push the button an eleventh time?
Do you for example say, “Well, clearly this box is very biased towards heads.”
Do you use Laplace’s law of succession to compute the probability?
This is more or less what I was trying to do, but I neglected to treat “AGI is impossible” as equivalent to “AGI will never happen”.
I need to have a prior in order to update, so sure, let’s use Laplace.
I’d have to be an idiot to ever press the button at all, but let’s say I’m in Harry’s situation with the time-turner and someone else pushed the button ten times before I could tell them not to.
I don’t feel like doing the calculus to actually apply Bayes myself here, so I’ll use my vague nonunderstanding of Wikipedia’s formula for the rule of succession and say p=11/12.
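No calculus is needed, for what it is worth: the rule of succession gives (s + 1)/(n + 2) after s successes in n trials, which is where the 11/12 comes from. This is the plain Laplace estimate, ignoring the selection effect being argued about above:

```python
# Laplace's rule of succession: estimated probability of success on the next trial.
def rule_of_succession(successes, trials):
    return (successes + 1) / (trials + 2)

print(rule_of_succession(10, 10))   # 0.9166... = 11/12, matching the comment above
```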
The difficulty of creating an AGI drops slightly every time computational power increases. We know that people greatly underestimated the difficulty of creating AGI in the past, but we don’t know how fast the difficulty is decreasing, how difficult it is now, whether it will ever stop decreasing, or where.
I agree that those rates are hard to determine. I am also wary of “AI FOOM is a certainty”-type statements, and appeals to the nebulous “powers that all computers inherently have”.