No, I don’t agree this is an implication. I would say that no one can reasonably believe all of the following at the same time with a high degree of confidence:
1) I am critical to this Friendly AI project that has a significant chance of success.
2) There is no significant chance of Friendly AI without this project.
3) Without Friendly AI, the world is doomed.
But then, as you know, I don’t consider it reasonable to put a high degree of confidence in number 3. Nor do many other intelligent people (such as Robin Hanson). So it isn’t surprising that I would consider it unreasonable to be sure of all three of them.
I would say that no one can reasonably believe all of the following at the same time with a high degree of confidence: 1) I am critical to this Friendly AI project that has a significant chance of success. 2) There is no significant chance of Friendly AI without this project. 3) Without Friendly AI, the world is doomed.
I see. So it’s not that any one of these statements is a forbidden premise, but that their combination leads to a forbidden conclusion. Would you agree with the previous sentence?
BTW, please don’t vote the parent down below −2; that will make it invisible. Also, it doesn’t particularly deserve downvoting IMO.
I would suggest that, in order for this set of beliefs to become (psychiatrically?) forbidden, we need to add a fourth item. 4) Dozens of other smart people agree with me on #3.
If someone believes that very, very few people yet recognize the importance of FAI, then the conjunction of beliefs #1 thru #3 might be reasonable. But after #4 becomes true (and known to our protagonist), then continuing to hold #1 and #2 may be indicative of a problem.
Dozens isn’t sufficient. I asked Marcello if he’d run into anyone who seemed to have more raw intellectual horsepower than me, and he said that John Conway gave him that impression. So there are smarter people than me upon the Earth, which doesn’t surprise me at all, but it might take a wider net than “dozens of other smart people” before someone comes in with more brilliance and a better starting math education and renders me obsolete.
Plenty of criticism (some of it reasonable) has been lobbed at IQ tests and at things like the SAT. Is there a method known to you (or anyone reading) that actually measures “raw intellectual horsepower” in a reliable and accurate way? Aside from asking Marcello.
Read the source code, and then visualize a few levels from Crysis or Metro 2033 in your head. While you render it, count the average frames per second. Alternatively, see how quickly you can find the prime factors of every integer from 1 to 1000.
Which is to say… humans in general have extremely limited intellectual power. Instead of calculating things efficiently, we work by using various tricks with caches and memory to find answers. Therefore, almost all tasks are more dependent on practice and interest than they are on intelligence. So, rather than testing the statement “Eliezer is smart”, it has more bearing on this debate to confirm “Eliezer has spent a large amount of time optimizing his cache for tasks relating to rationality, evolution, and artificial intelligence”. Intelligence is overrated.
Sheer curiosity, but have you or anyone ever contacted John Conway about the topic of u/FAI and asked him what he thinks about the topic, the risks associated with it, and maybe the SIAI itself?
“raw intellectual power” != “relevant knowledge”. Looks like he worked on some game theory, but otherwise not much relevance. Should we ask Stephen Hawking? Or take a poll of Nobel Laureates?
I am not saying that he couldn’t be brought up to speed on this kind of discussion, or that he wouldn’t have a lot to consider, but as things stand, not asking him indicates little.
With the hint from EY on another branch, I see a problem in my argument. Our protagonist might circumvent my straitjacket by also believing: 5) The key to FAI is TDT, but I have so far been unsuccessful in getting many of those dozens of smart people to listen to me on that subject.
I now withdraw from this conversation with my tail between my legs.
I wouldn’t put it in terms of forbidden premises or forbidden conclusions.
But if each of these statements has a 90% chance of being true, and if they are assumed to be independent (which admittedly won’t be exactly true), then the probability that all three are true would be only about 70%, which is not an extremely high degree of confidence; more like saying, “This is my opinion but I could easily be wrong.”
Personally I don’t think 1) or 3), taken in a strict way, could reasonably be said to have more than a 20% chance of being true. I do think a probability of 90% is a fairly reasonable assignment for 2), because most people are not going to bother about Friendliness. Accounting for the fact that these are not totally independent, I don’t consider a probability assignment of more than 5% for the conjunction to be reasonable. However, since there are other points of view, I could accept that someone might assign the conjunction a 70% chance in accordance with the previous paragraph, without being crazy. But if you assign a probability much more than that I would have to withdraw this.
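To make the arithmetic in the last two paragraphs explicit, here is a minimal sketch. The figures are just the illustrative assignments used above, not calibrated estimates:

```python
# Conjunction of the three statements at the 90% figure used above,
# treating them as independent (an approximation, as noted).
print(0.9 ** 3)          # 0.729, i.e. "only about 70%"

# With the assignments suggested above: ~20% for (1), ~90% for (2), ~20% for (3).
print(0.2 * 0.9 * 0.2)   # 0.036 under independence; allowing for positive
                         # correlation between the statements is what pushes the
                         # acceptable ceiling toward the ~5% mentioned above
```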
If the statements are weakened as Carl Shulman suggests, then even the conjunction could reasonably be given a much higher probability.
Also, as long as it is admitted that the probability is not high, you could still say that the possibility needs to be taken seriously because you are talking about the possible (if yet improbable) destruction of the world.
I certainly do not assign a probability as high as 70% to the conjunction of all three of those statements.
And in case it wasn’t clear, the problem I was trying to point out was simply with having forbidden conclusions—not forbidden by observation per se, but forbidden by forbidden psychology—and using that to make deductions about empirical premises that ought simply to be evaluated by themselves.
I s’pose I might be crazy, but you all are putting your craziness right up front. You can’t extract milk from a stone!
Just curious (and not being 100% serious here): Would you have any concerns about the following argument (and I am not saying I accept it)?
Assume that famous people will get recreated as AIs in simulations a lot in the future. School projects, entertainment, historical research, interactive museum exhibits, idols to be worshipped by cults built up around them, etc.
If you save the world, you will be about the most famous person ever in the future.
Therefore there will be a lot of Eliezer Yudkowsky AIs created in the future.
Therefore the chances of anyone who thinks he is Eliezer Yudkowsky actually being the original, 21st century one are very small.
Therefore you are almost certainly an AI, and none of the rest of us are here—except maybe as stage props with varying degrees of cognition (and you probably never even heard of me before, so someone like me would probably not get represented in any detail in an Eliezer Yudkowsky simulation). That would mean that I am not even conscious and am just some simple subroutine. Actually, now I have raised the issue to be scary, it looks a lot more alarming for me than it does for you as I may have just argued myself out of existence...
Actually, now I have raised the issue to be scary, it looks a lot more alarming for me than it does for you as I may have just argued myself out of existence...
That doesn’t seem scary to me at all. I still know that there is at least one of me that I can consider ‘real’. I will continue to act as if I am one of the instances that I consider me/important. I’ve lost no existence whatsoever.
That’s good to know. I hope multifoliaterose reads this comment, as he seemed to think that you would assign a very high probability to the conjunction (and it’s true that you’ve sometimes given that impression by your way of talking.)
Also, I didn’t think he was necessarily setting up forbidden conclusions, since he did add some qualifications allowing that in some circumstances it could be justified to hold such opinions.
To be quite clear about which of Unknowns’ points I object to, my main objection is to the point:
I am critical to this Friendly AI project that has a significant chance of success
where ‘I’ is replaced by “Eliezer.” I assign a probability of less than 10^(-9) to you succeeding in playing a critical role on the Friendly AI project that you’re working on. (Maybe even much less than that—I would have to spend some time calibrating my estimate to make a judgment on precisely how low a probability I assign to the proposition.)
My impression is that you’ve greatly underestimated the difficulty of building a Friendly AI.
I assign a probability of less than 10^(-9) to you succeeding in playing a critical role on the Friendly AI project that you’re working on.
I wish the laws of argument permitted me to declare that you had blown yourself up at this point, and that I could take my toys and go home. Alas, arguments are not won on a points system.
My impression is that you’ve greatly underestimated the difficulty of building a Friendly AI.
Out of weary curiosity, what is it that you think you know about Friendly AI that I don’t?
And has it occurred to you that if I have different non-crazy beliefs about Friendly AI then my final conclusions might not be so crazy either, no matter what patterns they match in your craziness recognition systems?
I agree it’s kind of ironic that multi has such an overconfident probability assignment right after criticizing you for being overconfident. I was quite disappointed with his response here.
One could offer many crude back-of-envelope probability calculations. Here’s one: let’s say there’s
a 10% chance AGI is easy enough for the world to do in the next few decades
a 1% chance that if the world can do it, a team of supergeniuses can do the Friendly kind first
an independent 10% chance Eliezer succeeds at putting together such a team of supergeniuses
That seems conservative to me and implies at least a 1 in 10^4 chance. Obviously there’s lots of room for quibbling here, but it’s hard for me to see how such quibbling could account for five orders of magnitude. And even if post-quibbling you think you have a better model that does imply 1 in 10^9, you only need to put little probability mass on my model or models like it for them to dominate the calculation. (E.g., a 9 in 10 chance of a 1 in 10^9 chance plus a 1 in 10 chance of a 1 in 10^4 chance is close to a 1 in 10^5 chance.)
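A minimal sketch of the arithmetic above, using exactly the crude figures named in this comment (they are illustrative, as stated, not calibrated estimates):

```python
# The crude back-of-envelope model described above.
p_agi_soon       = 0.10  # AGI easy enough for the world to do in the next few decades
p_friendly_first = 0.01  # if AGI is doable, a team of supergeniuses does the Friendly kind first
p_team           = 0.10  # Eliezer succeeds at putting together such a team (taken as independent)
p_model = p_agi_soon * p_friendly_first * p_team
print(p_model)           # 1e-4, the "at least 1 in 10^4" figure

# Mixing models: 90% weight on a 1-in-10^9 model, 10% weight on the model above.
print(0.9 * 1e-9 + 0.1 * p_model)   # ~1e-5, close to 1 in 10^5
```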
I don’t find these remarks compelling. I feel similar remarks could be used to justify nearly anything. Of course, I owe you an explanation. One will follow later on.
Unless you’ve actually calculated the probability mathematically, a probability of one in a billion for a natural language claim that a significant number of people accept as likely true is always overconfident. Even Eliezer said that he couldn’t assign a probability as low as one in a billion for the claim “God exists” (although Michael Vassar criticized him for this, showing himself to be even more overconfident than Eliezer.)
Unless you’ve actually calculated the probability mathematically, a probability of one in a billion for a natural language claim that a significant number of people accept as likely true is always overconfident.
You give the human species far too much credit if you think that our mere ability to dream up a hypothesis automatically raises its probability above some uniform lower bound.
I am aware of your disagreement, for example as expressed by the absurd claims here. Yes, my basic idea is, unlike you, to give some credit to the human species. I think there’s a limit on how much you can disagree with other human beings—unless you’re claiming to be something superhuman.
Did you see the link to this comment thread? I would like to see your response to the discussion there.
I think there’s a limit on how much you can disagree with other human beings—unless you’re claiming to be something superhuman.
At least for epistemic meanings of “superhuman”, that’s pretty much the whole purpose of LW, isn’t it?
Did you see the link to this comment thread? I would like to see your response to the discussion there.
My immediate response is as follows: yes, dependency relations might concentrate most of the improbability of a religion to a relatively small subset of its claims. But the point is that those claims themselves possess enormous complexity (which may not necessarily be apparent on the surface; cf. the simple-sounding “the woman across the street is a witch; she did it”).
Let’s pick an example. How probable do you think it is that Islam is a true religion? (There are several ways to take care of logical contradictions here, so saying 0% is not an option.)
Suppose there were a machine—for the sake of tradition, we can call it Omega—that prints out a series of zeros and ones according to the following rule. If Islam is true, it prints out a 1 on each round, with 100% probability. If Islam is false, it prints out a 0 or a 1, each with 50% probability.
Let’s run the machine… suppose on the first round, it prints out a 1. Then another. Then another. Then another… and so on… it’s printed out 10 1′s now. Of course, this isn’t so improbable. After all, there was a 1/1024 chance of it doing this anyway, even if Islam is false. And presumably we think Islam is more likely than this to be false, so there’s a good chance we’ll see a 0 in the next round or two...
But it prints out another 1. Then another. Then another… and so on… It’s printed out 20 of them. Incredible! But we’re still holding out. After all, million to one chances happen every day...
Then it prints out another, and another… it just keeps going… It’s printed out 30 1′s now. Of course, it did have a chance of one in a billion of doing this, if Islam were false...
But for me, this is my lower bound. At this point, if not before, I become a Muslim. What about you?
You’ve been rather vague about the probabilities involved, but you speak of “double digit negative exponents” and so on, even saying that this is “conservative,” which implies possibly three digit exponents. Let’s suppose you think that the probability that Islam is true is 10^-20; this would seem to be very conservative, by your standards. According to this, to get an equivalent chance, the machine would have to print out 66 1′s.
If the machine prints out 50 1′s, and then someone runs in and smashes it beyond repair, before it has a chance to continue, will you walk away, saying, “There is a chance at most of 1 in 60,000 that Islam is true?”
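For concreteness, here is the update being described, written out as a sketch; the 10^-20 prior is just the hypothetical figure under discussion, not anyone’s endorsed estimate:

```python
# Each printed 1 is twice as likely if Islam is true (probability 1)
# as if it is false (probability 0.5), so n ones give a 2**n likelihood ratio.
def posterior(n_ones, prior=1e-20):
    odds = prior / (1 - prior) * 2.0 ** n_ones
    return odds / (1 + odds)

for n in (10, 20, 30, 50, 66):
    print(n, posterior(n))
# 30 ones -> ~1e-11, one in a hundred billion
# 50 ones -> ~1.1e-5, the same order as the "1 in 60,000" above
#            (that figure treats the 10^-20 prior as roughly 2**-66)
# 66 ones -> ~0.42, roughly the even-odds conversion point described above
```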
Thank you a lot for posting this scenario. It’s instructive from the “heuristics and biases” point of view.
Imagine there are a trillion variants of Islam, differing by one paragraph in the holy book or something. At most one of them can be true. You pick one variant at random, test it with your machine and get 30 1′s in a row. Now you should be damn convinced that you picked the true one, right? Wrong. Getting this result by a fluke is 1000x more likely than having picked the true variant in the first place. Probability is unintuitive and our brains are mush, that’s all I’m sayin’.
I agree with this. But if the scenario happened in real life, you would not be picking a certain variant. You would be asking the vague question, “Is Islam true,” to which the answer would be yes if any one of those trillion variants, or many others, were true.
Yes, there are trillions of possible religions that differ from one another as much as Islam differs from Judaism, or whatever. But only a few of these are believed by human beings. So I still think I would convert after 30 1′s, and I think this would be reasonable.
If a religion’s popularity raises your prior for it so much, how do you avoid Pascal’s Mugging with respect to the major religions of today? Eternity in hell is more than 2^30 times worse than anything you could experience here; why aren’t you religious already?
It doesn’t matter whether it raises your prior or not; eternity in hell is also more than 2^3000 times worse etc… so the same problem will apply in any case.
Elsewhere I’ve defended Pascal’s Wager against the usual criticisms, and I still say it’s valid given the premises. But there are two problematic premises:
1) It assumes that utility functions are unbounded. This is certainly false for all human beings in terms of revealed preference; it is likely false even in principle (e.g. the Lifespan Dilemma).
2) It assumes that humans are utility maximizers. This is false in fact, and even in theory most of us would not want to self-modify to become utility maximizers; it would be a lot like self-modifying to become a Babyeater or a Super-Happy.
Do you have an answer for how to avoid giving in to the mugger in Eliezer’s original Pascal’s Mugging scenario? If not, I don’t think your question is a fair one (assuming it’s meant to be rhetorical).
I don’t have a conclusive answer, but many people say they have bounded utility functions (you see Unknowns pointed out that possibility too). The problem with assigning higher credence to popular religions is that it forces your utility bound to be lower if you want to reject the mugging. Imagining a billion lifetimes is way easier than imagining 3^^^^3 lifetimes. That was the reason for my question.
My answer (for why I don’t believe in a popular religion as a form of giving in to a Pascal’s Mugging) would be that I’m simultaneously faced with a number of different Pascal’s Muggings, some of which are mutually exclusive, so I can’t just choose to give in to all of them. And I’m also unsure of what decision theory/prior/utility function I should use to decide what to do in the face of such Muggings. Irreversibly accepting any particular Mugging in my current confused state is likely to be suboptimal, so the best way forward at this point seems to be to work on the relevant philosophical questions.
That’s what I think too! You’re only the second other person I have seen make this explicit, so I wonder how many people have even considered this. Do you think more people would benefit from hearing this argument?
Do you think more people would benefit from hearing this argument?
Sure, why do you ask? (If you’re asking because I’ve thought of this argument but haven’t already tried to share it with a wider audience, it probably has to do with reasons, e.g., laziness, that are unrelated to whether I think more people would benefit from hearing it.)
I was considering doing a post on it, but there are many posts that I want to write, many of which require research, so I avoided implying that it would be done soon/ever.
Yes, there are trillions of possible religions that differ from one another as much as Islam differs from Judaism, or whatever. But only a few of these are believed by human beings.
Privileging the hypothesis! That they are believed by human beings doesn’t lend them probability.
Well, it does to the extent that lack of believers would be evidence against them. I’d say that Allah is considerably more probable than a similarly complex and powerful god who also wants to be worshiped and is equally willing to interact with humans, but not believed in by anyone at all. Still considerably less probable than the prior of some god of that general sort existing, though.
Well, it does to the extent that lack of believers would be evidence against them.
Agreed, but then we have the original situation, if we only consider the set of possible gods that have the property of causing worshiping of themselves.
Yes, there are trillions of possible religions that differ from one another as much as Islam differs from Judaism, or whatever. But only a few of these are believed by human beings.
Privileging the hypothesis! That they are believed by human beings doesn’t lend them probability.
No. It doesn’t lend probability, but it seems like it ought to lend something. What is this mysterious something? Let’s call it respect.
Privileging the hypothesis is a fallacy.
Respecting the hypothesis is a (relatively minor) method of rationality.
We respect the hypotheses that we find in a math text by investing the necessary mental resources toward the task of finding an analytic proof. We don’t just accept the truth of the hypothesis on authority. But on the other hand, we don’t try to prove (or disprove) just any old hypothesis. It has to be one that we respect.
We respect scientific hypotheses enough to invest physical resources toward performing experiments that might refute or confirm them. We don’t expend those resources on just any scientific hypothesis. Only the ones we respect.
Does a religion deserve respect because it has believers? More respect if it has lots of believers? I think it does. Not privilege. Definitely not. But respect? Why not?
You can dispense with this particular concept of respect since in both your examples you are actually supplied with sufficient Bayesian evidence to justify evaluating the hypothesis, so it isn’t privileged. Whether this is also the case for believed in religions is the very point contested.
A priori, with no other evidence one way or another, a belief held by human beings is more likely to be true than not. If Ann says she had a sandwich for lunch, then her words are evidence that she actually had a sandwich for lunch.
Of course, we have external reason to doubt lots of things that human beings claim and believe, including religions. And a religion does not become twice as credible if it has twice as many adherents. Right now I believe we have good reason to reject (at least some of) the tenets of all religious traditions.
But it does make some sense to give some marginal privilege or respect to an idea based on the fact that somebody believes it, and to give the idea more credit if it’s very durable over time, or if particularly clever people believe it. If it were any subject but religion—if it were science, for instance—this would be an obvious point. Scientific beliefs have often been wrong, but you’ll be best off giving higher priors to hypotheses believed by scientists than to other conceivable hypotheses.
Also… if you haven’t been to Australia, is it privileging the hypothesis to accept the word of those who say that it exists? There are trillions of possible countries that could exist that people don’t believe exist...
And don’t tell me they say they’ve been there… religious people say they’ve experienced angels etc. too.
And so on. People’s beliefs in religion may be weaker than their belief in Australia, but it certainly is not privileging a random hypothesis.
Your observations (of people claiming to have seen an angel, or a kangaroo) are distinct from hypotheses formed to explain those observations. If in a given case, you don’t have reason to expect statements people make to be related to facts, then the statements people make, taken verbatim, have no special place as hypotheses.
“You don’t have reason to expect statements people make to be related to facts” doesn’t mean that you have 100% certainty that they are not, which you would need in order to invoke privileging the hypothesis.
Now you are appealing to impossibility of absolute certainty, refuting my argument as not being that particular kind of proof. If hypothesis X is a little bit more probable than many others, you still don’t have any reason to focus on it (and correlation could be negative!).
In principle the correlation could be negative but this is extremely unlikely and requires some very strange conditions (for example if the person is more likely to say that Islam is true if he knows it is false than if he knows it is true).
I disagree; given that most of the religions in question center on human worship of the divine, I have to think that Pr(religion X becomes known among humans | religion X is true) > Pr(religion X does not become known among humans | religion X is true). But I hate to spend time arguing about whether a likelihood ratio should be considered strictly equal to 1 or equal to 1 + epsilon when the prior probabilities of the hypotheses in question are themselves ridiculously small.
If the machine prints out 50 1′s, and then someone runs in and smashes it beyond repair, before it has a chance to continue, will you walk away, saying, “There is a chance at most of 1 in 60,000 that Islam is true?”
If so, are you serious?
Of course I’m serious (and I hardly need to point out the inadequacy of the argument from the incredulous stare). If I’m not going to take my model of the world seriously, then it wasn’t actually my model to begin with.
Sewing-Machine’s comment below basically reflects my view, except for the doubts about numbers as a representation of beliefs. What this ultimately comes down to is that you are using a model of the universe according to which the beliefs of Muslims are entangled with reality to a vastly greater degree than on my model. Modulo the obvious issues about setting up an experiment like the one you describe in a universe that works the way I think it does, I really don’t have a problem waiting for 66 or more 1′s before converting to Islam. Honest. If I did, it would mean I had a different understanding of the causal structure of the universe than I do.
Further below you say this, which I find revealing:
If this actually happened to you, and you walked away and did not convert, would you have some fear of being condemned to hell for seeing this and not converting? Even a little bit of fear? If you would, then your probability that Islam is true must be much higher than 10^-20, since we’re not afraid of things that have a one in a hundred billion chance of happening.
As it happens, given my own particular personality, I’d probably be terrified. The voice in my head would be screaming. In fact, at that point I might even be tempted to conclude that expected utilities favor conversion, given the particular nature of Islam.
But from an epistemic point of view, this doesn’t actually change anything. As I argued in Advancing Certainty, there is such a thing as epistemically shutting up and multiplying. Bayes’ Theorem says the updated probability is one in a hundred billion, my emotions notwithstanding. This is precisely the kind of thing we have to learn to do in order to escape the low-Earth orbit of our primitive evolved epistemology—our entire project here, mind you—which, unlike you (it appears), I actually believe is possible.
Has anyone done a “shut up and multiply” for Islam (or Christianity)? I would be interested in seeing such a calculation. (I did a Google search and couldn’t find anything directly relevant.) Here’s my own attempt, which doesn’t get very far.
Let H = “Islam is true” and E = everything we’ve observed about the universe so far. According to Bayes:
P(H | E) = P(E | H) P(H) / P(E)
Unfortunately I have no idea how to compute the terms above. Nor do I know how to argue that P(H|E) is as small as 10^-20 without explicitly calculating the terms. One argument might be that P(H) is very small because of the high complexity of Islam, but since E includes “23% of humanity believe in some form of Islam”, the term for the complexity of Islam seems to be present in both the numerator and the denominator and therefore to cancel out.
If someone has done such a calculation/argument before, please post a link?
P(E) includes the convincingness of Islam to people on average, not the complexity of Islam. These things are very different because of the conjunction fallacy. So P(H) can be a lot smaller than P(E).
I don’t understand how P(E) does not include a term for the complexity of Islam, given that E contains Islam, and E is not so large that it takes a huge number of bits to locate Islam inside E.
It doesn’t take a lot of bits to locate “Islam is false” based on “Islam is true”. Does it mean that all complex statements have about 50% probability?
I don’t think that’s true; cousin_it had it right the first time. The complexity of Islam is the complexity of a reality that contains an omnipotent creator, his angels, Paradise, Hell, and so forth. Everything we’ve observed about the universe includes people believing in Islam, but not the beings and places that Islam says exist.
In other words, E contains Islam the religion, not Islam the reality.
The really big problem with such a reality is that it contains a fundamental, non-contingent mind (God’s/Allah’s, etc) - and we all know how much describing one of those takes - and the requirement that God is non-contingent means we can’t use any simpler, underlying ideas like Darwinian evolution. Non-contingency, in theory selection terms, is a god killer: It forces God to incur a huge information penalty—unless the theist refuses even to play by these rules and thinks God is above all that—in which case they aren’t even playing the theory selection game.
I don’t see this. Why assume that the non-contingent, pre-existing God is particularly complex? Why not assume that the current complexity of God (if He actually is complex) developed over time as the universe evolved since the big bang? Or, just as good, assume that God became complex before He created this universe.
It is not as if we know enough about God to actually start writing down that presumptive long bit string. And, after all, we don’t ask the big bang to explain the coastline of Great Britain.
Non-contingency, in theory selection terms, is a god killer
Agreed. It’s why I’m so annoyed when even smart atheists say that God was an ok hypothesis before evolution was discovered. God was always one of the worst possible hypotheses!
unless the theist refuses even to play by these rules and thinks God is above all that—in which case they aren’t even playing the theory selection game.
Or, put more directly: Unless the theist is deluding himself. :)
I’m confused. In the comments to my post you draw a distinction between an “event” and a “huge set of events”, saying that complexity only applies to the former but not the latter. But Islam is also a “huge set of events”—it doesn’t predict just one possible future, but a wide class of them (possibly even including our actual world, ask any Muslim!), so you can’t make an argument against it based on complexity of description alone. Does this mean you tripped on the exact same mine I was trying to defuse with my post?
I’d be very interested in hearing a valid argument about the “right” prior we should assign to Islam being true—how “wide” the set of world-programs corresponding to it actually is—because I tried to solve this problem and failed.
Sorry, I was confused. Just ignore that comment of mine in your thread.
I’m not sure how to answer your question because as far as I can tell you’ve already done so. The complexity of a world-program gives its a priori probability. The a priori probability of a hypothesis is the sum of the probabilities of all the world-programs it contains. What’s the problem?
By reasonable, I mean the hypothesis is worth considering, if there were reasons to entertain it. That is, if someone suspected there was a mind behind reality, I don’t think they should dismiss it out of hand as unreasonable because this mind must be non-contingent.
In fact, we should expect any explanation of our creation to be non-contingent, since physical reality appears to be so.
For example, if it’s reasonable to consider the probability that we’re in a simulation, then we’re considering a non-contingent mind creating the simulation we’re in.
Whoops, you’re right. Sorry. I didn’t quite realize you were talking about the universal prior again :-)
But I think the argument can still be made to work. P(H) doesn’t depend only on the complexity of Islam—we must also take into account the internal structure of Islam. For example, the hypothesis “A and B and C and … and Z” has the same complexity as “A or B or C or … or Z”, but obviously the former is way less probable. So P(H) and P(E) have the same term for complexity, but P(H) also gets a heavy “conjunction penalty” which P(E) doesn’t get because people are susceptible to the conjunction fallacy.
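To illustrate just the conjunction/disjunction contrast (not the broader argument), here is a minimal sketch with 26 hypothetical independent claims A through Z, each given probability 1/2:

```python
# Same description length, wildly different probabilities.
p, n = 0.5, 26
print(p ** n)            # "A and B and ... and Z": ~1.5e-8
print(1 - (1 - p) ** n)  # "A or B or ... or Z":    ~0.999999985
```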
It’s slightly distressing that my wrong comment was upvoted.
Whoops, you’re right. Now I’m ashamed that my comment got upvoted.
I think the argument may still be made to work by fleshing out the nonstandard notion of “complexity” that I had in my head when writing it :-) Your prior for a given text being true shouldn’t depend only on the text’s K-complexity. For example, the text “A and B and C and D” has the same complexity as “A or B or C or D”, but the former is way less probable. So P(E) and P(H) may have the same term for complexity, but P(H) also gets a “conjunction penalty” that P(E) doesn’t get because people are prey to the conjunction fallacy.
EDIT: this was yet another mistake. Such an argument cannot work because P(E) is obviously much smaller than P(H), because E is a huge mountain of evidence and H is just a little text. When trying to reach the correct answer, we cannot afford to ignore P(E|H).
For simplicity we may assume P(E|H) to be near certainty: if there is an attention-seeking god, we’d know about it. This leaves P(E) and P(H), and P(H|E) is tiny exactly for the reason you named: P(H) is much smaller than P(E), because H is optimized for meme-spreading to a great extent, which makes the probability of gaining popularity, P(E), comparatively much higher for a given complexity (which translates into P(H)).
Thus, just arguing from complexity indeed misses the point, and the real reason for improbability of cultish claims is that they are highly optimized to be cultish claims.
For example, compare with tossing a coin 50 times: the actual observation, whatever it is, will be a highly improbable event, and the theoretical prediction from the model of a fair coin will be too. But if the observation is highly optimized to attract attention, for example it’s all 50 tails, then the theoretical model crumbles, and not because the event you’ve observed is too improbable according to it, but because other hypotheses win out.
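A toy numeric version of this decomposition; every number below is an illustrative placeholder, not an estimate of the real quantities:

```python
p_H          = 1e-20  # prior for the specific claim, set by its complexity (placeholder)
p_E_given_H  = 1.0    # "if there is an attention-seeking god, we'd know about it"
p_E_given_nH = 1e-3   # chance a claim this optimized for meme-spreading gains
                      # mass belief despite being false (placeholder)
p_E = p_E_given_H * p_H + p_E_given_nH * (1 - p_H)
print(p_E_given_H * p_H / p_E)   # ~1e-17: popularity raises the posterior well above
                                 # the prior, but it stays tiny because P(E|~H) is not small
```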
the term for the complexity of Islam seems to be present in both the numerator and the denominator and therefore to cancel out.
Actually it doesn’t: human-generated complexity is different from naturally generated complexity (for instance, it fits into narratives, apparent holes are filled with the sort of justifications a human is likely to think of, etc.). That’s one of the ways you can tell stories from real events. Religious accounts contain much of what looks like human-generated complexity.
Here’s a somewhat rough way of estimating probabilities of unlikely events. Let’s say that an event X with P(X) = about 1-in-10 is a “lucky break.” Suppose that there are L(1) ways that Y could occur on account of a single lucky break, L(2) ways that Y could occur on account of a pair of independent lucky breaks, L(3) ways that Y could occur on account of 3 independent lucky breaks, and so on. Then P(Y) is approximately the sum over all n of L(n)/10^n. I have the feeling that arguments about whether P(Y) is small versus extremely small are arguments about the growth rate of L(n).
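A minimal sketch of this estimate; the growth rates for L(n) below are made up purely to show how much the answer depends on them:

```python
# P(Y) ~= sum over n of L(n) / 10**n, with L(n) the number of ways Y could
# happen via n independent 1-in-10 lucky breaks.
def p_estimate(L, n_max=100):
    return sum(L(n) / 10.0 ** n for n in range(1, n_max + 1))

print(p_estimate(lambda n: 1))                  # constant L(n): ~0.11, merely "small"
print(p_estimate(lambda n: 2 ** n))             # L(n) doubling: 0.25
print(p_estimate(lambda n: 0 if n < 20 else 1)) # no route shorter than 20 breaks: ~1.1e-20
```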
I discussed the problem of estimating P(“23% of humanity believes...”) here. I’d be grateful for thoughts or criticisms.
This is a small point but “E includes complex claim C” does not imply that the (for instance, Kolmogorov) complexity of E is as large as the Kolmogorov complexity of C. The complexity of the digits of square root of 2 is pretty small, but they contain strings of arbitrarily high complexity.
E includes C implies that K(C) ≤ K(E) + K(information needed to locate C within E). In this case K(information needed to locate C within E) seems small enough not to matter to the overall argument, which is why I left it out. (Since you said “this is a small point” I guess you probably understand and agree with this.)
Actually no I hadn’t thought of that. But I wonder if the amount of information it takes to locate “lots of people are muslims” within E is as small as you say. My particular E does not even contain that much information about Islam, and how people came to believe it, but it does contain a model of how people come to believe weird things in general. Is that a misleading way of putting things? I can’t tell.
There are some very crude sketches of shutting-up-and-multiplying, from one Christian and a couple of atheists, here (read the comments as well as the post itself), and I think there may be more with a similar flavour in other blog posts there (and their comments) from around the same time.
(The author of the blog has posted a little on LW. The two skeptics responsible for most of the comments on that post have both been quite active here. One of them still is, and is in fact posting this comment right now :-).)
Wei Dai, exactly. The point that the complexity of the thing is included in the fact that people believe it was the point I have been making all along. Regardless of what you think the resulting probability is, most of the “evidence” for Islam consists in the very fact that some people think it is true—and as you show in your calculation, this is very strong evidence.
It seems to me that komponisto and others are taking it to be known with 100% certainty that Islam and the like were generated by some random process, and then trying to determine what the probability would be.
Now I know that most likely Mohammed was insane and in effect the Koran was in fact generated by a random process. But I certainly don’t know how you can say that the probability that it wasn’t generated randomly is 1 in 10^20 or lower. And in fact if you’re going to assign a probability like this you should have an actual calculation.
I agree that your position is analogous to “shutting up and multiplying.” But in fact, Eliezer may have been wrong about that in general—see the Lifespan Dilemma—because people’s utility functions are likely not unbounded.
In your case, I agree with shutting up and multiplying when we have a way to calculate the probabilities. In this case, we don’t, so we can’t do it. If you had a known probability (see cousin_it’s comment on the possible trillions of variants of Islam) of one in a trillion, then I would agree with walking away after seeing 30 1′s, regardless of the emotional effect of this.
But in reality, we have no such known probability. The result is that you are going to have to use some base rate: “things that people believe” or more accurately, “strange things that people believe” or whatever. In any case, whatever base rate you use, it will not have a probability anywhere near 10^-20 (i.e. more than 1 in 10^20 strange beliefs is true etc.)
My real point about the fear is that your brain doesn’t work the way your probabilities do—even if you say you are that certain, your brain isn’t. And if we had calculated the probabilities, you would be justified in ignoring your brain. But in fact, since we haven’t, your brain is more right than you are in this case. It is less certain precisely because you are simply not justified in being that certain.
It is a traditional feature of Omega that you have confidence 1 in its reliability and trustworthiness.
Traditions do not always make sense, neither are they necessarily passed down accurately. The original Omega, the one that appears in Newcomb’s problem, does not have to be reliable with probability 1 for that problem to be a problem.
Of course, to the purist who says that 0 and 1 are not probabilities, you’ve just sinned by talking about confidence 1, but the problem can be restated to avoid that by asking for one’s conditional probability P(Islam | Omega is and behaves as described).
In the present case, the supposition that one is faced with an overwhelming likelihood ratio raising the probability that Islam is true by an unlimited amount is just a blue tentacle scenario. Any number that anyone who agrees with the general anti-religious view common on LessWrong comes up with is going to be nonsense. Professing, say, 1 in a million for Islam on the grounds that 1 in a billion or 1 in a trillion is too small a probability for the human brain to cope with is the real cop-out, a piece of reversed stupidity with no justification of its own.
The scenario isn’t going to happen. Forcing your brain to produce an answer to the question “but what if it did?” is not necessarily going to produce a meaningful answer.
Traditions do not always make sense, neither are they necessarily passed down accurately. The original Omega, the one that appears in Newcomb’s problem, does not have to be reliable with probability 1 for that problem to be a problem.
Quite true. But if you want to dispute the usefulness of this tradition, you should address the broader and older tradition of which it is an instance: that thought experiments should abstract away real-world details irrelevant to the main point.
Of course, to the purist who says that 0 and 1 are not probabilities, you’ve just sinned by talking about confidence 1
This is a pet peeve of mine, and I’ve wanted an excuse to post this rant for a while. Don’t take it personally.
That “purist” is as completely wrong as the person who insists that there is no such thing as centrifugal force. They are ignoring the math in favor of a meme that enables them to feel smugly superior.
0 and 1 are valid probabilities in every mathematical sense: the equations of probability don’t break down when passed p=0 or p=1 the way they do with genuine nonprobabilities like −1 or 2. A probability of 0 or 1 is like a perfect vacuum: it happens not to occur in the world that we happen to inhabit, but it is perfectly well-defined, we can do math with it without any difficulty, and it is extraordinarily useful in thought experiments.
When asked to consider a spherical black body of radius one meter resting on a frictionless plane, you don’t respond “blue tentacles”, you do the math.
I agree with the rant. 0 and 1 are indeed probabilities, and saying that they are not is a misleading way of enjoining people to never rule out anything. Mathematically, P(~A|A) is zero, not epsilon, and P(A|A) is 1, not 1-epsilon. Practically, 0 and 1 in subjective judgements mean as near to 0 and 1 as makes no practical difference. When I agree a rendezvous with someone, I don’t say “there’s a 99% chance I’ll be there”, I say “I’ll be there”.
Where we part ways is in our assessment of the value of this thought-experiment. To me it abstracts and assumes away so much that what is left does not illuminate anything. I can calculate 2^{-N}, but asked how large N would have to be to persuade me of some fantastic claim backed by this fantastic machine I simply cannot name any value. I have no confidence that whatever value I named would be the value I would actually use were this impossible scenario to come to pass.
Fair enough. But if we’re doing that, I think the original question with the Omega machine abstracts too much away. Let’s consider the kind of evidence that we would actually expect to see if Islam were true.
Let us stipulate that, on the 1st of Muḥarram, a prominent ayatollah claims to have suddenly become a prophet. They go on television and answer questions on all topics. All verifiable answers they give, including those to NP-complete questions submitted for experimental purposes, turn out to be true. The new prophet asserts the validity of the Qur’an as holy scripture and of Allah as the one God.
There is a website where you can suggest questions to put to the new prophet. Not all submitted questions get answered, due to time constraints, but interesting ones do get in reasonably often. Are there any questions you’d like to ask?
I’ll give a reworded version of this, to take it out of the context of a belief system with which we are familiar. I’m not intending any mockery by this: It is to make a point about the claims and the evidence:
“Let us stipulate that, on Paris Hilton’s birthday, a prominent Paris Hilton admirer claims to have suddenly become a prophet. They go on television and answer questions on all topics. All verifiable answers they give, including those to NP-complete questions submitted for experimental purposes, turn out to be true. The new prophet asserts that Paris Hilton is a super-powerful being sent here from another world, co-existing in space with ours but at a different vibrational something or whatever. Paris Hilton has come to show us that celebrity can be fun. The entire universe is built on celebrity power. Madonna tried to teach us this when she showed us how to Vogue but we did not listen and the burden of non-celebrity energy threatens to weigh us down into the valley of mediocrity when we die instead of ascending to a higher plane where each of us gets his/her own talkshow with an army of smurfs to do our bidding. Oh, and Sesame Street is being used by the dark energy force to send evil messages into children’s feet. (The brain only appears to be the source of consciousness: Really it is the feet. Except for people with no feet. (Ah! I bet you thought I didn’t think of that.) Today’s lucky food: custard.”
There is a website where you can suggest questions to put to the new prophet. Not all submitted questions get answered, due to time constraints, but interesting ones do get in reasonably often. Are there any questions you’d like to ask?”
The point I am making here is that the above narrative is absurd, and even if he can demonstrate some unusual ability with predictions or NP problems (and I admit the NP problems would really impress me), there is nothing that makes that explanation more sensible than any number of other stupid explanations. Nor does he have an automatic right to be believed: His explanation is just too stupid.
“Mr Prophet, are you claiming that there is no other theory to account for all this that has less intrinsic information content than a theory which assumes the existence of a fundamental, non-contingent mind—a mind which apparently cannot be accounted for by some theory containing less information, given that the mind is supposed to be non-contingent?”
He had better have a good answer to that: Otherwise I don’t care how many true predictions he has made or NP problems he has solved. None of that comes close to fixing the ultra-high information loading in his theory.
“The reason you feel confused is because you assume the universe must have a simple explanation.
The minimum message length necessary to describe the universe is long—long enough to contain a mind, which in fact it does. There is no fundamental reason why the Occamian prior must be appropriate. It so happens that Allah has chosen to create a world that, to a certain depth, initially appears to follow that law, but Occam will not take you all the way to the most fundamental description of reality.
I could write out the actual message description, but to demonstrate that the message contains a mind requires volumes of cognitive science that have not been developed yet. Since both the message and the proof of mind will be discovered by science within the next hundred years, I choose to spend my limited time on earth in other areas.”
A Moslem would say to him, “Mohammed (pbuh) is the Seal of the Prophets: there can be none after Him. The Tempter whispers your clever answers in your ear, and any truth in them is only a ruse and a snare!” A Christian faced with an analogous Christian prophet would denounce him as the Antichrist. I ask—not him, but you—why I should believe he is as trustworthy on religion as he is on subjects where I can test him?
I might incidentally ask him to pronounce on the validity of the hadith. I have read the Qur’an and there is remarkably little in it but exhortations to serve God.
“Also, could you settle all the schisms among those who already believe in the validity of the Qur’an as holy scripture and of Allah as the one God, and still want to bomb each other over their interpretations?”
Mohammed (pbuh) is the Seal of the Prophets: there can be none after Him.
I wasn’t aware of that particular tenet. I suppose the Very Special Person would have to identify as some other role than prophet.
I ask—not him, but you—why I should believe he is as trustworthy on religion as he is on subjects where I can test him?
If your prior includes the serious possibility of a Tempter that seems reliable until you have to trust it on something important, why couldn’t the Tempter also falsify scientific data you gather?
I might incidentally ask him to pronounce on the validity of the hadith. I have read the Qur’an and there is remarkably little in it but exhortations to serve God.
“Indeed, the service of God is the best of paths to walk in life.”
“Also, could you settle all the schisms among those who already believe in the validity of the Qur’an as holy scripture and of Allah as the one God, and still want to bomb each other over their interpretations?”
“Sure, that’s why I’m here. Which point of doctrine do you want to know about?”
If your prior includes the serious possibility of a Tempter that seems reliable until you have to trust it on something important, why couldn’t the Tempter also falsify scientific data you gather?
When I condition on the existence of this impossible prophet, many improbable ideas are raised to attention, not merely the one that he asserts.
To bring the thought-experiment slightly closer to reality, aliens arrive, bringing advanced technology and religion. Do we accept the religion along with the technology? I’m sure science fiction has covered that one umpteen times, but the scenario has already been played out in history, with European civilisation as the aliens. They might have some things worth taking regarding how people should deal with each other, but strange people from far away with magic toys are no basis for taking spooks any more seriously.
Suppose a server appeared on the internet relaying messages from someone claiming to be the sysadmin of the simulation we’re living in, and asking that we refrain from certain types of behavior because it’s making his job difficult. Is there any set of evidence that would persuade you to go along with the requests, and how would the necessary degree of evidence scale with the inconvenience of the requests?
That should be a very easy claim to prove, actually. If someone really were the sysadmin of the universe, they could easily do a wide variety of impossible things that anyone could verify. For example, they could write their message in the sky with a special kind of photon that magically violates the laws of physics in an obvious way (say, for example, it interacts with all elements normally except one which it inexplicably doesn’t interact with at all). Or find/replace their message into the genome of a designated species. Or graffiti it onto every large surface in the world simultaneously.
Of course, there would be no way to distinguish a proper sysadmin of the universe from someone who had gotten root access improperly, either from the simulated universe, the parent universe, or some other universe. And this does raise a problem for any direct evidence in support of a religion—no matter how strong the evidence gets, the possibility that someone has gained the ability to generate arbitrarily much fake evidence, or reliably deceive you somehow, will always remain indistinguishable; so anything with a significantly lower prior probability than that, is fundamentally impossible to prove. Most or all religions have a smaller prior probability than the “someone has gained magical evidence-forging powers and is using them” hypothesis, and as a result, even if strong evidence for them were to suddenly start appearing (which it hasn’t), that still wouldn’t be enough to prove them correct.
I still have a basic problem with the method of posing questions about possibilities I currently consider fantastically improbable. My uncertainty about how I would deal with the situation goes up with its improbability, and what I would actually do will be determined largely by details absent from the description of the improbable scenario.
It is as if my current view of the world—that is, my assignments of probabilities to everything—is a digital photograph of a certain resolution. When I focus on vastly improbable possibilities, it is as if I inspect a tiny area of the photograph, only a few pixels wide, and try to say what is depicted there. I can put that handful of pixels through my best image-processing algorithms, but all I’m going to get back is noise.
Can you consider hypothetical worlds with entirely different histories from ours? Rather than trying to update based on your current state of knowledge, with mountains of cumulative experience pointing a certain way, imagine what that mountainous evidence could have been in a deeply different world than this one.
For example, suppose the simulation sysadmin had been in active communication with us since before recorded history, and was commonplace knowledge casually accepted as mere fact, and the rest of the world looked different in the ways we would expect such a world to.
Unwinding the thread backwards, I see that my comment strayed into irrelevance from the original point, so never mind.
I would like to ask you this, though: of all the people on Earth who feel as sure as you do about the truth or falsehood of various religions, what proportion do you think are actually right? If your confidence in your beliefs regarding religion is a larger number than this, then what additional evidence do you have that makes you think you’re special?
You’ve asked us to take our very small number, and imagine it doubling 66 times. I agree that there is a punch to what you say—no number, no matter how small, could remain small after being doubled 66 times! But in fact long ago Archimedes made a compelling case that there are such numbers.
Now, it’s possible that Archimedes was wrong and something like ultrafinitism is true. I take ultrafinitist ideas quite seriously, and if they are correct then there are a lot of things that we will have to rethink. But Islam is not close to the top of the list of things we should rethink first.
Maybe there’s a kind of meta claim here: conditional on probability theory being a coherent way to discuss claims like “Islam is true,” the probability that Islam is true really is that small.
I just want to know what you would actually do, in that situation, if it happened to you tomorrow. How many 1′s would you wait for, before you became a Muslim?
Also, “there are such numbers” is very far from “we should use such numbers as probabilities when talking about claims that many people think are true.” The latter is an extremely strong claim and would therefore need extremely strong evidence before being acceptable.
I think after somewhere between 30 and 300 coin flips, I would convert. With more thought and more details about what package of claims is meant by “Islam,” I could give you a better estimate. Escape routes that I’m not taking: I would start to suspect Omega was pulling my leg, I would start to suspect that I was insane, I would start to suspect that everything I knew was wrong, including the tenets of Islam. If answers like these are copouts—if Omega is so reliable, and I am so sane, and so on—then it doesn’t seem like much of a bullet to bite to say “yes, 2^-30 is very small but it is still larger than 2^-66; yes something very unlikely has happened but not as unlikely as Islam”
Also, “there are such numbers” is very far from “we should use such numbers as probabilities when talking about claims that many people think are true.” The latter is an extremely strong claim and would therefore need extremely strong evidence before being acceptable.
If you’re expressing doubts about numbers being a good measure of beliefs, I’m totally with you! But we only need strong evidence for something to be acceptable if there are some alternatives—sometimes you’re stuck with a bad option. Somebody’s handed us a mathematical formalism for talking about probabilities, and it works pretty well. But it has a funny aspect: we can take a handful of medium-sized probabilities, multiply them together, and the result is a tiny tiny probability. Can anything be as unlikely as the formalism says 66 heads in a row is? I’m not saying you should say “yes,” but if your response is “well, whenever something that small comes up in practice, I’ll just round up,” that’s a patch that is going to spring leaks.
yes, 2^-30 is very small but it is still larger than 2^-66; yes something very unlikely has happened but not as unlikely as Islam.
Originally I didn’t intend to bring up Pascal’s Wager type considerations here because I thought it would just confuse the issue of the probability. But I’ve rethought this—actually this issue could help to show just how strong your beliefs are in reality.
Suppose you had said in advance that the probability of Islam was 10^-20. Then you had this experience, but the machine was shut off after 30 1′s (a chance of one in a billion). The chance that Islam is true is now one in a hundred billion, updated from your prior.
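For concreteness, here is a minimal sketch of that update in odds form, under the assumption (mine, for illustration) that the machine always outputs 1′s if Islam is true and fair-coin bits otherwise:

```python
# Odds-form Bayesian update for the scenario above.
prior = 1e-20                  # stated prior probability that Islam is true
likelihood_true = 1.0          # P(30 ones | true), assuming the machine never errs
likelihood_false = 2.0 ** -30  # P(30 ones | false): about one in a billion

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * (likelihood_true / likelihood_false)
posterior = posterior_odds / (1 + posterior_odds)
print(posterior)               # ~1e-11, i.e. about one in a hundred billion
```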
If this actually happened to you, and you walked away and did not convert, would you have some fear of being condemned to hell for seeing this and not converting? Even a little bit of fear? If you would, then your probability that Islam is true must be much higher than 10^-20, since we’re not afraid of things that have a one in a hundred billion chance of happening.
If this actually happened to you, and you walked away and did not convert, would you have some fear of being condemned to hell for seeing this and not converting? Even a little bit of fear? If you would, then your probability that Islam is true must be much higher than 10^-20, since we’re not afraid of things that have a one in a hundred billion chance of happening.
This is false.
I must confess that I am sometimes afraid that ghosts will jump out of the shadows and attack me at night, and I would assign a much lower chance of that happening. I have also been afraid of velociraptors. Fear is frequently irrational.
You are technically correct. My actual point was that your brain does not accept that the probability is that low. And as I stated in one of the replies, you might in some cases have reasons to say your brain is wrong… just not in this case. No one here has given any reason to think that.
It’s good you managed some sort of answer to this. However, 30–300 is quite a wide range: from 1 in 10^9 to 1 in 10^90. If you’re going to hope for any sort of calibration at all in using numbers like this, you’re going to have to be much more precise...
I wasn’t expressing doubts about numbers being a measure of beliefs (although you could certainly question this as well), but about extreme numbers being a measure of our beliefs, which do not seem capable of being that extreme. Yes, if you have a large number of independent probabilities, the result can be extreme. And supposedly, the basis for saying that Islam (or reincarnation, or whatever) is very improbable would be the complexity of the claim. But who has really determined how much complexity it has? As I pointed out elsewhere (on the “Believable Bible” comment thread), a few statements, if we knew them to be true, would justify Islam or any other such thing. Which particular statements would we need, and how complex are those statements, really? No one has determined them to any degree of precision, and until they do, you have to use something like a base rate. Just as astronomers start out with fairly high probabilities for the collision of near-earth asteroids, and only end up with low probabilities after very careful calculation, you would have to start out with a fairly high prior for Islam, or reincarnation, or whatever, and you would only be justified in holding an extreme probability after careful calculation… which I don’t believe you’ve done. Certainly I haven’t.
Apart from the complexity, there is also the issue of evidence. We’ve been assuming all along that there is no evidence for Islam, or reincarnation, or whatever. Certainly it’s true that there isn’t much. But that there is literally no evidence for such things simply isn’t so. The main thing is that we aren’t motivated to look at the little evidence that there is. But if you intend to assign probabilities to that degree of precision, you are going to have to take into account every speck of evidence.
I thought the salient feature of Islam was that many people believed it, not that it has less complexity than I thought, or more evidence in its favor than I thought. That might be, but I’m not interested in discussing it.
I don’t “feel” beliefs strongly or weakly. Sometimes probability calculations help me with fear and other emotions, sometimes they don’t. Again, I’m not interested in discussing it.
So tell me something about how important it is that many people believe in Islam.
I’m not interested in discussing Islam either… those points apply to anything that people believe. But that’s why it’s relevant to the question of belief: if you take something that people don’t believe, it can be arbitrarily complex, or 100% lacking in evidence (like Russell’s teapot), but things that people believe do not have these properties.
It’s not important how many people believe it. It could be just 50 people and the probability would not be much different (as long as the belief was logically consistent with the fact that just a few people believed it.)
So tell me why. By “complex” do you just mean “low probability,” or some notion from information theory? How did you come to believe that people cannot believe things that are too complex?
I just realized that you may have misunderstood my original point completely. Otherwise you wouldn’t have said this: “I thought the salient feature of Islam was that many people believed it, not that it has less complexity than I thought, or more evidence in its favor than I thought.”
I only used the idea of complexity because that was komponisto’s criterion for the low probability of such claims. The basic idea is people believe things that their priors say do not have too low a probability: but as I showed in the post on Occam’s razor, everyone’s prior is a kind of simplicity prior, even if they are not all identical (nor necessarily particularly related to information theory or whatever.)
Basically, a probability is determined by the prior and by the evidence that it is updated according to. The only reason things are more probable if people believe them is that a person’s belief indicates that there is some human prior according to which the thing is not too improbable, and some evidence and way of updating that can give the thing a reasonable probability. So other people’s beliefs are evidence for us only because they stand in for the other people’s priors and evidence. So it’s not that it is “important that many people believe” apart from the factors that give it probability: the belief is just a sign that those factors are there.
Going back to the distinction you didn’t like, between a fixed probability device and a real world claim, a fixed probability device would be a situation where the prior and the evidence are completely fixed and known: with the example I used before, let there be a lottery that has a known probability of one in a trillion. Then since the prior and the evidence are already known, the probability is still one in a trillion, even if someone says he is definitely going to win it.
In a real world claim, on the other hand, the priors are not well known, and the evidence is not well known. And if I find out that someone believes it, I immediately know that there are humanly possible priors and evidence that can lead to that belief, which makes it much more probable even for me than it would be otherwise.
If I find out that … I know that … which makes it much more probable that …
This sounds like you are updating. We have a formula for what happens when you update, and it indeed says that given evidence, something becomes more probable. You are saying that it becomes much more probable. What quantity in Bayes formula seems especially large to you, and why?
In other words, as I said before, the probability that people believe something shouldn’t be that much more than the probability that the thing is true.
The probability that people will believe a long conjunction is lower than the probability that they will believe one part of the conjunction (because in order to believe the whole, they have to believe each part; in other words, for the same reason the conjunction fallacy is a fallacy).
The conjunction fallacy is the assignment of a higher probability to some statement of the form A&B than to the statement A. It is well established that for certain kinds of A and B, this happens.
The fallacy in your proof that this cannot happen is that you have misstated what the conjunction fallacy is.
My point in mentioning it is that people committing the fallacy believe a logical impossibility. You can’t get much more improbable than a logical impossibility. But the conjunction fallacy experiments demonstrate that it is common to believe such things.
Therefore, the improbability of a statement does not imply the improbability of someone believing it. This refutes your contention that “the probability that people believe something shouldn’t be that much more than the probability that the thing is true.” The possible difference between the two is demonstrably larger than the range of improbabilities that people can intuitively grasp.
In that case I am misunderstanding Wei Dai’s point. He says that complexity considerations alone can’t tell you that the probability is small, because complexity appears in the numerator and the denominator. I will need to see more math (which I guess cousin_it is taking care of) before understanding and agreeing with this point. But even granting it I don’t see how it implies that P(many believe H)/P(H) is for all H less than one billion.
Imagine there are a trillion variants of Islam, differing by one sentence in the holy book or something. At most one of them can be true. You pick one variant at random, test it with your machine and get 30 1′s in a row. Now you should be damn convinced that you picked the true one, right? Wrong. Getting this result by a fluke is ~1000x more likely than picking the true variant in the first place. Probability is unintuitive and our brains are mush, that’s all I’m sayin’.
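A quick numeric sketch of that comparison, using the comment’s own figures (a trillion variants, a fair-coin machine, 30 1′s):

```python
# Posterior probability that the randomly chosen variant is the true one,
# given 30 1's, assuming exactly one of the 10^12 variants is true and the
# machine outputs all 1's for the true variant and fair-coin bits otherwise.
p_true = 1e-12         # prior of having picked the true variant
p_fluke = 2.0 ** -30   # ~1e-9: chance of 30 1's for a false variant

posterior = p_true / (p_true + (1 - p_true) * p_fluke)
print(posterior)       # ~1e-3: the fluke explanation is ~1000x more likely
```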
The product of two probabilities above your threshold-for-overconfidence can be below your threshold-for-overconfidence. Have you at least thought this through before?
For instance, the claim “there is a God” is not that much less spectacular than the claim “there is a God, and he’s going to make the next 1000 times you flip a coin turn up heads.” If one-in-a-billion is a lower bound for the probability that God exists, then one-in-a-billion-squared is a generous lower bound for the probability that the next 1000 times you flip a coin will turn up heads. (One-in-a-billion-squared is about 2-to-the-sixty). You’re OK with that?
Yes. As long as you think of some not-too-complicated scenario where the one would lead to the other, that’s perfectly reasonable. For example, God might exist and decide to prove it to you by effecting that prediction. I certainly agree this has a probability of at least one in a billion squared. In fact, suppose you actually get heads the next 60 times you flip a coin, even though you are choosing different coins, it is on different days, and so on. By that point you will be quite convinced that the heads are not independent, and that there is quite a good chance that you will get 1000 heads in a row.
It would be different of course if you picked a random series of heads and tails: in that case you still might say that there is at least that probability that someone else will do it (because God might make that happen), but you surely cannot say that it had that probability before you picked the random series.
This is related to what I said in the torture discussion, namely that explicitly describing a scenario automatically makes it far more likely to actually happen than it was before you described it. So it isn’t a problem if the probability of 1000 heads in a row is higher than 1 in 2-to-the-1000. Any series you can mention would be more likely than that, once you have mentioned it.
Also, note that there isn’t a problem if the 1000 heads in a row is lower than one in a billion, because when I made the general claim, I said “a claim that significant number of people accept as likely true,” and no one expects to get the 1000 heads.
Probabilities should sum to 1. You’re saying moreover that probabilities should not be lower than some threshold. Can I get you to admit that there’s a math issue here that you can’t wave away, without trying to fine-tune my examples? If you claim you can solve this math issue, great, but say so.
Edit: −1 because I’m being rude? Sorry if so, the tone does seem inappropriately punchy to me now. −1 because I’m being stupid? Tell me how!
I set a lower bound of one in a billion on the probability of “a natural language claim that a significant number of people accept as likely true”. The number of such mutually exclusive claims is surely far less than a billion, so the math issue will resolve easily.
Yes, it is easy to find more than a billion claims, even ones that some people consider true, but they are not mutually exclusive claims. Likewise, it is easy to find more than a billion mutually exclusive claims, but they are not ones that people believe to be true, e.g. no one expects 1000 heads in a row, no one expects a sequence of five hundred successive heads-tails pairs, and so on.
Maybe I see. You are updating on the fact that many people believe something, and are saying that P(A|many people believe A) should not be too small. Do you agree with that characterization of your argument?
In that case, we will profitably distinguish between P(A|no information about how many people believe A) and P(A|many people believe A). Is there a compact way that I can communicate something like “Excepting/not updating on other people’s beliefs, P(God exists) is very small”? If I said something like that would you still think I was being overconfident?
This is basically right, although in fact it is not very profitable to speak of what the probability would be if we didn’t have some of the information that we actually have. For example, the probability of this sequence of ones and zeros -- 0101011011101110 0010110111101010 0100010001010110 1010110111001100
1110010101010000 -- being chosen randomly, before anyone has mentioned this particular sequence, is one out of 2 to the 80. Yet I chose it randomly, using a random number generator (not a pseudo-random number generator, either.) But I doubt that you will conclude that I am certainly lying, or that you are hallucinating. Rather, as Robin Hanson points out, extraordinary claims are extraordinary evidence. The very fact that I write down this improbable sequence is extremely extraordinary evidence that I have chosen it randomly, despite the huge improbability of that random choice. In a similar way, religious claims are extremely strong evidence in favor of what they claim; naturally, just as if I hadn’t written the number, you would never believe that I might choose it randomly, in the same way, if people didn’t make religious claims, you would rightly think them to be extremely improbable.
It is always profitable to give different concepts different names.
Let GM be the assertion that I’ll one day play guitar on the moon. Your claim is that this ratio
P(GM|I raised GM as a possibility)/P(GM)
is enormous. Bayes theorem says that this is the same as
P(I raised GM as a possibility|GM)/P(I raised GM as a possibility)
so that this second ratio is also enormous. But it seems to me that both numerator and denominator in this second ratio are pretty medium-scale numbers—in particular the denominator is not minuscule. Doesn’t this defeat your idea?
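The ratio identity being used here is just Bayes’ theorem; here is a toy check with made-up numbers (all of the probabilities below are hypothetical, chosen only to show that the two ratios coincide):

```python
# Check that P(GM | raised) / P(GM) == P(raised | GM) / P(raised)
# on an arbitrary joint distribution over (GM, raised).
p_joint = {
    (True, True):   1e-7,         # GM true and the possibility gets raised
    (True, False):  1e-9,         # GM true, never raised
    (False, True):  1e-3,         # GM false, raised anyway
    (False, False): 0.998999899,  # remaining probability mass
}
p_gm = sum(v for (gm, _), v in p_joint.items() if gm)
p_raised = sum(v for (_, raised), v in p_joint.items() if raised)

ratio_1 = (p_joint[(True, True)] / p_raised) / p_gm   # P(GM|raised)/P(GM)
ratio_2 = (p_joint[(True, True)] / p_gm) / p_raised   # P(raised|GM)/P(raised)
print(ratio_1, ratio_2)                               # identical, as Bayes requires
```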
The evidence contained in your asserting GM would be much stronger than the evidence contained in your raising the possibility.
Still, there is a good deal of evidence contained in your raising the possibility. Consider the second ratio: the numerator is quite high, probably more than .5, since in order to play guitar on the moon, you would have to bring a guitar there, which means you’d probably be thinking about it.
The denominator is in fact quite small. If you randomly raise one outlandish possibility of performing some action in some place, each day for 50 years, and there are 10,000 different actions (I would say there are at least that many), and 100,000 different places, then the probability of raising the possibility will be 18,250/(10,000 x 100,000), which is 0.00001825, which is fairly small. The actual probability is likely to be even lower, since you may not be bringing up such possibilities every day for 50 years. Religious claims are typically even more complicated than the guitar claim, so the probability of raising their possibility is even lower.
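Spelling out the base-rate arithmetic in that paragraph (the 10,000 actions, 100,000 places, and one-possibility-per-day-for-50-years figures are the comment’s own rough assumptions):

```python
# Rough chance that this particular possibility ("guitar on the moon") ever
# gets raised, if one outlandish action-place possibility is raised per day
# for 50 years, drawn uniformly from 10,000 actions x 100,000 places.
possibilities_raised = 50 * 365            # 18,250
distinct_possibilities = 10_000 * 100_000  # 1,000,000,000
p_raised = possibilities_raised / distinct_possibilities
print(p_raised)                            # 1.825e-05
```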
--one more thing: I say that raising the possibility is strong evidence, not that the resulting probability is high: it may start out extremely low and end up still very, very low, going from say one in a googol to one in a sextillion or so. It is when you actually assert that it’s true that you raise the probability to something like one in a billion or even one in a million. Note however that you can’t refute me by now going on to assert that you intend to play a guitar on the moon; if you read Hanson’s article in my previous link, you’ll see that he shows that assertions are weak evidence in particular cases, namely in ones in which people are especially likely to lie: and this would be one of them, since we’re arguing about it. So in this particular case, if you asserted that you intended to do so, it would only raise the probability by a very small amount.
I understand that you think the lower bound on probabilities for things-that-are-believed is higher than the lower bound on probabilities for things-that-are-raised-as-possibilities. I am fairly confident that I can change your mind (that is, convince you not to impose lower bounds like this at all), and even more confident that I can convince you that imposing lower bounds like this is mathematically problematic (that is, there are bullets to be bitten) in ways that hadn’t occurred to you a few days ago.
I do not see one of these bounds as more or less sound than the other, but am focusing on the things-that-are-raised-as-possibilities bound because I think the discussion will go faster there.
More soon, but tell me if you think I’ve misunderstood you, or if you think you can anticipate my arguments. I would also be grateful to hear from whoever is downvoting these comments.
Note that I said there should be a lower bound on the probability for things that people believe, and even made it specific: something on the order of one in a billion. But I don’t recall saying (you can point it out if I’m wrong) that there is a lower bound on the probability of things that are raised as possibilities. Rather, I merely said that the probability is vastly increased.
To the comment here, I responded that raising the possibility raised the probability of the thing happening by orders of magnitude. But I didn’t say that the resulting probability was high, in fact it remains very low. Since there is no lower bound on probabilities in general, there is still no lower bound on probabilities after raising them by orders of magnitude, which is what happens when you raise the possibility.
So if you take my position to imply such a lower bound, either I’ve misstated my position accidentally, or you have misunderstood it.
I did misunderstand you, and it might change things; I will have to think. But now your positions seem less coherent to me, and I no longer have a model of how you came to believe them. Tell me more:
Let CM(n) be the assertion “one day I’ll play guitar on the moon, and then flip an n-sided coin and it will come up heads.” The point being that P(CM(n)) is proportional to 1/n. Consider the following ratios:
R1(n) = P(CM(n)|CM(n) is raised as a possibility)/P(CM(n))
R2(n) = P(CM(n)|CM(n) is raised as a possibility by a significant number of people)/P(CM(n))
R3(n) = P(CM(n)|CM(n) is believed by one person)/P(CM(n))
R4(n) = P(CM(n)|CM(n) is believed by a significant number of people)/P(CM(n))
How do you think these ratios change as n grows? Before I had assumed you thought that ratios 1. and 4. grew to infinity as n did. I still understand you to be saying that for 4. Are you now denying it for 1., or just saying that 1. grows more slowly? I can’t guess what you believe about 2. and 3.
First we need to decide on the meaning of “flip an n-sided coin and it will come up heads”. You might mean this as:
1) a real world claim; or
2) a fixed probability device
To illustrate: if I assert, “I happen to know that I will win the lottery tomorrow,” this greatly increases the chance that it will happen, among other reasons, because of the possibility that I am saying this because I happen to have cheated and fixed things so that I will win. This would be an example of a real world claim.
On the other hand, if it is given that I will play the lottery, and given that the chance of winning is one in a trillion, as a fixed fact, then if I say, “I will win,” the probability is precisely one in a trillion, by definition. This is a fixed probability device.
In the real world there are no fixed probability devices, but there are situations where things are close enough to that situation that I can mathematically calculate a probability, even one which will break the bound of one in a billion, and even when people believe it. This is why I qualified my original claim with “Unless you have actually calculated the probability...” So in order to discuss my claim at all, we need to exclude the fixed probability device and only consider real world claims. In this case, the probability P(CM(n)) is not exactly proportional to 1/n. However, it is true that this probability goes to zero as n goes to infinity.
In fact, all of these probabilities go to zero as n goes to infinity.
Given this fact (that all the probabilities go to zero), I am unsure about the behavior of your cases 1 & 2. I’ll leave 3 for another time, and say that case 4, again remembering that we take it as a real world claim, does go to infinity, since the numerator remains at no less than 1 in a billion, while the denominator goes to zero.
One more note about my original claim: if you ask how I arrived at the one in a billion figure, it is somewhat related to the earth’s actual population. If the population were a googolplex, a far larger number of mutually exclusive claims would be believed by a significant number of people, and so the lower bound would be much lower. Finally, I don’t understand why you say my positions are “less coherent”, when I denied the position that, as you were about to point out, leads to mathematical inconsistency. This should make my position more coherent, not less.
It’s my map of your beliefs that became less coherent, not your actual beliefs. (Not necessarily!) As you know, I’ve thought your beliefs are mistaken from the beginning.
Note that I’m asking about a limit of ratios, not a ratio of limits. Actually, I’m not even asking you about the limits—I’d prefer some rough information about how those ratios change as n grows. (Are they bounded above? Do they grow linearly or logarithmically or what?) If you don’t know, why not?
So in order to discuss my claim at all, we need to exclude the fixed probability device and only consider real world claims.
This is bad form. Phrases like “unless you have actually computed the probability...”, “real world claim”, “natural language claim”, “significant number of people” are slippery. We can talk about real-world examples after you explain to me how your reasoning works in a more abstract setting. Otherwise you’re just reserving the right to dismiss arguments (and even numbers!) on the basis that they feel wrong to you on a gut level.
Edit: It’s not that I think it’s always illegitimate to refer to your gut. It’s just bad form to claim that such references are based on mathematics.
Edit 2: Can I sidestep this discussion by saying “Let CM(n) be any real world claim with P(CM(n)) = 1/n”?
Unless you’ve actually calculated the probability mathematically, a probability of one in a billion for a natural language claim that a significant number of people accept as likely true is always overconfident.
I nowhere stated that this was “based on mathematics.” It is naturally related to mathematics, and mathematics puts some constraints on it, as I have been trying to explain. But I didn’t come up with it in the first place in a purely mathematical way. So if this is bad form, it must be bad form to say what I mean instead of something else.
I could accept what you say in Edit 2 with these qualifications: first, since we are talking about “real world claims”, the probability 1/n does not necessarily remain fixed when someone brings up the possibility or asserts that the thing is so. This probability 1/n is only a prior, before the possibility has been raised or the thing asserted. Second, since it isn’t clear what “n” is doing, CM(5), CM(6), CM(7) and so on might be claims which are very different from one another.
I am not sure about the behavior of the ratios 1 and 2, especially given the second qualification here (in other words the ratios might not be well-behaved at all). And I don’t see how I need to say “why not?” What is there in my account which should tell me how these ratios behave? But my best guess for the moment, after thinking about it some more, would be the first ratio probably goes to infinity, but not as quickly as the fourth. What leads me to think this is something along the lines of this comment thread. For example, in my Scientology example, even if no one held that Scientology was true, but everyone admitted that it was just a story, the discovery of a real Xenu would greatly increase the probability that it was true anyway; although naturally not as much as given people’s belief in it, since without the belief, there would be a significantly greater probability that Scientology is still a mere story, but partly based on fact. So this suggests there may be a similar bound on things-which-have-been-raised-as-possibilities, even if much lower than the bound for things which are believed. Or if there isn’t a lower bound, such things are still likely to decrease in probability slowly enough to cause ratio 1 to go to infinity.
What is there in my account which should tell me how these ratios behave?
You responded positively to my suggestion that we could phrase this notion of “overconfidence” as “failure to update on other people’s beliefs,” indicating that you know how to update on other people’s beliefs. At the very least, this requires some rough quantitative understanding of the players in Bayes formula, which you don’t seem to have.
If overconfidence is not “failure to update on other people’s beliefs,” then what is it?
Here’s the abbreviated version of the conversation that led us here (right?).
S: God exists with very low probability, less that one in a zillion.
U: No, you are being overconfident. After all, billions of people believe in God, you need to take that into account somehow. Surely the probability is greater than one in a billion.
S: OK I agree that the fact that billions of people believing it constitutes evidence, but surely not evidence so strong as to get from 1-in-a-zillion to 1-in-a-billion.
Now what? Bayes theorem provides a mathematical formalism for relating evidence to probabilities, but you are saying that all four quantities in the relevant Bayes formula are too poorly understood for it to be of use. So what’s an alternative way to arrive at your one-in-a-billion figure? Or are you willing to withdraw your accusation that I’m being overconfident?
I did not say that “all four quantities in the relevant Bayes formula are too poorly understood for it to be of use.” Note that I explicitly asserted that your fourth ratio tends to infinity, and that your first one likely does as well.
If you read the linked comment thread and the Scientology example, that should make it clear why I think that the evidence might well be strong enough to go from 1 in a zillion to 1 in a billion. In fact, that should even be clear from my example of the random 80 digit binary number. Suppose instead of telling you that I chose the number randomly, I said, “I may or may not have chosen this number randomly.” This would be merely raising the possibility—the possibility of something which has a prior of 2^-80. But if I then went on to say that I had indeed chosen it randomly, you would not have therefore called me a liar, while you would do this, if I now chose another random 80 digit number and said that it was the same one. This shows that even raising the possibility provides almost all the evidence necessary—it brings the probability that I chose the number randomly all the way from 2^-80 up to some ordinary probability, or from “1 in a zillion” to something significantly above one in a billion.
More is involved in the case of belief, but I need to be sure that you get this point first.
For each 80-digit binary number X, let N(X) be the assertion “Unknowns picked an 80-digit number at random, and it was X.” In my ledger of probabilities, I dutifully fill in, for each of these statements X, 2^{-80} in the P column. Now for a particular 80-digit number Y, I am told that “Unknowns claims he picked an 80-digit number at random, and it was Y”—call that statement U(Y) -- and am asked for P(N(Y)|U(Y)).
My answer: pretty high by Bayes formula. P(U(Y)|N(Y)) is pretty high because Unknowns is trustworthy, and my ledger has P(U(Y)) = a number on the same order as two-to-the-minus-eighty. (Caveat: P(U(Y)) is a lot higher for highly structured things like the sequence of all 1′s. But for the vast majority of Y I have P(U(Y)) = 2^-80 times something between (say) 10^-1 and 10^-6). So P(N(Y)|U(Y)) = P(U(Y)|N(Y)) x [P(N(Y))/P(U(Y))] is a big probability times a medium-sized factor.
What’s your answer?
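For concreteness, here is a numeric sketch of that calculation with the unknown pieces filled in by hypothetical values (the 0.5 reporting probability and the 10^-3 fabrication factor below are illustrative assumptions, not figures from the thread):

```python
# Posterior that Unknowns really did pick Y at random, given that he claims so.
p_N = 2.0 ** -80                         # prior on "picked at random, and it was Y"
p_claim_given_N = 0.5                    # hypothetical: chance he reports the number he picked
p_claim_given_not_N = 2.0 ** -80 * 1e-3  # hypothetical: chance he falsely names exactly Y

p_claim = p_claim_given_N * p_N + p_claim_given_not_N * (1 - p_N)
posterior = p_claim_given_N * p_N / p_claim
print(posterior)                         # ~0.998: "pretty high", as claimed
```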
Reincarnation is explained to me, and I am asked for my opinion of how likely it is. I respond with P(R), a good faith estimate based on my experience and judgement. I am then told that hundreds of millions of people believe in reincarnation—call that statement B, and assume that I was ignorant of it before—and am asked for P(R|B). Your claim is that no matter how small P(R) is, P(R|B) should be larger than some threshold t. Correct?
Some manipulation with Bayes formula shows that your claim (what I understand to be your claim) is equivalent to this inequality:
P(B) < P(R) / t
That is, I am “overconfident” if I think that the probability of someone believing in reincarnation is larger than some fixed multiple of the probability that reincarnation is actually true. Moreover, though I assume (sic) you think t is sensitive to the quantity “hundreds of millions”—e.g. that it would be smaller if it were just “hundreds”—you do not think that t is sensitive to the statement R. R could be replaced by another religious claim, or by the claim that I just flipped a coin 80 times and the sequence of heads and tails was [whatever].
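Spelling out that manipulation (a sketch in the notation above; strictly it gives an implication rather than an equivalence, since P(B|R) ≤ 1):

```latex
\[
P(R \mid B) \;=\; \frac{P(B \mid R)\,P(R)}{P(B)} \;\le\; \frac{P(R)}{P(B)},
\qquad\text{so}\qquad
P(R \mid B) \ge t \;\Longrightarrow\; P(B) \le \frac{P(R)}{t}.
\]
```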
My position: I think it’s perfectly reasonable to assume that P(B) is quite a lot larger than P(R). What’s your position?
Your analysis is basically correct, i.e. I think it is overconfident to say that the probability P(B) is greater than P(R) by more than a certain factor, in particular because if you make it much greater, there is basically no way for you to be well calibrated in your opinions—because you are just as human as the people who believe those things. More on that later.
For now, I would like to see your response to the question in my comment to komponisto (i.e., how many 1′s you would wait for).
I have been using “now you are saying” as short for “now I understand you to be saying.” I think this may be causing confusion, and I’ll try write more carefully.
My estimate does come from some effort at calibration, although there’s certainly more that I could do. Maybe I should have qualified my statement by saying “this estimate may be a gross overestimate or a gross underestimate.”
In any case, I was not being disingenuous or flippant. I have carefully considered the question of how likely it is that Eliezer will be able to play a crucial role in an FAI project if he continues to exhibit a strategy qualitatively similar to his current one, and my main objection to SIAI’s strategy is that I think it extremely unlikely that Eliezer will be able to have an impact if he proceeds as he has up until this point.
I will be detailing why I don’t think that Eliezer’s present strategy for working toward an FAI is a fruitful one in a later top level post.
I understand your position and believe that it’s fundamentally unsound. I will have more to say about this later.
For now I’ll just say that the arithmetical average of the probabilities that I imagine I might ascribe to Eliezer’s current strategy resulting in an FAI is 10^(-9).
I wish the laws of argument permitted me to declare that you had blown yourself up at this point, and that I could take my toys and go home. Alas, arguments are not won on a points system.
On the other hand, assuming he knows what it means to assign something a 10^-9 probability, it sounds like he’s offering you a bet at 1000000000:1 odds in your favour. It’s a good deal, you should take it.
Indeed. I do not know how many people are actively involved in FAI research, but I would guess that it is only in the dozens to hundreds. Given the small pool of competition, it seems likely that at some point Eliezer will make, or already has made, a unique contribution to the field. Get Multi to put some money on it: offer him 1 cent if you do not make a useful contribution in the next 50 years, and if you do, he can pay you 10 million dollars.
I wish the laws of argument permitted me to declare that you had blown yourself up at this point, and that I could take my toys and go home. Alas, arguments are not won on a points system.
I don’t understand this remark.
What probability do you assign to your succeeding in playing a critical role on the Friendly AI project that you’re working on? I can engage with a specific number. I don’t know if your objection is that my estimate is off by a single order of magnitude or by many orders of magnitude.
Out of weary curiosity, what is it that you think you know about Friendly AI that I don’t?
I should clarify that my comment applies equally to AGI.
I think that I know the scientific community better than you, and have confidence that if creating an AGI was as easy as you seem to think it is (how easy I don’t know because you didn’t give a number) then there would be people in the scientific community who would be working on AGI.
And has it occurred to you that if I have different non-crazy beliefs about Friendly AI then my final conclusions might not be so crazy either, no matter what patterns they match in your craziness recognition systems?
Yes, this possibility has certainly occurred to me. I just don’t know what your different non-crazy beliefs might be.
Why do you think that AGI research is so uncommon within academia if it’s so easy to create an AGI?
This question sounds disingenuous to me. There is a large gap between “10^-9 chance of Eliezer accomplishing it” and “so easy for the average machine learning PhD.” Whatever else you think about him, he’s proved himself to be at least one or two standard deviations above the average PhD in ability to get things done, and some dimension of rationality/intelligence/smartness.
I think that the chance that any group of the size of SIAI will develop AGI over the next 50 years is quite small.
Eliezer has not proved himself to be at the same level as the average machine learning PhD at getting things done. As far as I know he has no experience with narrow AI research. I see familiarity with narrow AI as a prerequisite to AGI research.
So I do understand that, and I did set out to develop such a theory, but my writing speed on big papers is so slow that I can’t publish it. Believe it or not, it’s true.
Yes, OK: this does not speak to his intellectual power, but to his ability to function in an academic environment.
As far as I know he has no experience with narrow AI research.
I tried—once—going to an interesting-sounding mainstream AI conference that happened to be in my area. [...] And I gave up and left before the conference was over, because I kept thinking “What am I even doing here?”
As far as I know he has no experience with narrow AI research. I see familiarity with narrow AI as a prerequisite to AGI research.
Most things can be studied through the use of textbooks. Some familiarity with AI is certainly helpful, but it seems that most AI-related knowledge is not on the track to FAI (and most current AGI stuff is nonsense or even madness).
The reason that I see familiarity with narrow AI as a prerequisite to AGI research is to get a sense of the difficulties present in designing machines to complete certain mundane tasks. My thinking is the same as that of Scott Aaronson in his The Singularity Is Far posting: “there are vastly easier prerequisite questions that we already don’t know how to answer.”
FAI research is not AGI research, at least not at present, when we still don’t know what it is exactly that our AGI will need to work towards, how to formally define human preference.
So, my impression is that you and Eliezer have different views of this matter. My impression is that Eliezer’s goal is for SIAI to actually build an AGI unilaterally. That’s where my low probability was coming from.
It seems much more feasible to develop a definition of friendliness and then get governments to mandate that it be implemented in any AI or something like that.
As I’ve said, I find your position sophisticated and respect it. I have to think more about your present point—reflecting on it may indeed alter my thinking about this matter.
So, my impression is that you and Eliezer have different views of this matter. My impression is that Eliezer’s goal is for SIAI to actually build an AGI unilaterally.
Still, build AGI eventually, and not now. Expertise in AI/AGI is of low relevance at present.
It seems much more feasible to develop a definition of friendliness and then get governments to mandate that it be implemented in any AI or something like that.
It seems obviously infeasible to me that governments will chance upon this level of rationality. Also, we are clearly not on the same page if you say things like “implement in any AI”. Friendliness is not to be “installed in AIs”, Friendliness is the AI (modulo initial optimizations necessary to get the algorithm going and self-optimizing, however fast or slow that’s possible). The AGI part of FAI is exclusively about optimizing the definition of Friendliness (as an algorithm), not about building individual AIs with standardized goals.
See also this post for a longer explanation of why weak-minded AIs are not fit to carry the definition of Friendliness. In short, such AIs are (in principle) as much an existential danger as human AI researchers.
It seems obviously infeasible to me that governments will chance upon this level of rationality.
I wonder if we systematically underestimate the level of rationality of major governments. Historically, they haven’t done that badly. From an article about RAND:
Futurology was the magic word in the years after the Second World War, and because the Army and later the Air Force didn’t want to lose the civilian scientists to the private sector, Project Research and Development, RAND for short, was founded in 1945 together with the aircraft manufacturer Douglas and in 1948 was converted into a corporation. RAND established forecasts for the coming, cold future and developed, towards this end, the ‘Delphi’ method.
RAND worshipped rationality as a god and attempted to quantify the unpredictable, to calculate it mathematically, to bring the fear within its grasp and under control—something that seemed spooky to many Americans and made the Soviet Pravda call RAND the “American academy of death and destruction.”
(Huh, this is the first time I’ve heard of the Delphi Method.) Many of the big names in game theory (von Neumann, Nash, Shapley, Schelling) worked for RAND at some point, and developed their ideas there.
RAND has done a lot of good work (I like their recent reports on Iran), but keep in mind that big misses can undo a lot of their credit; for example, even RAND acknowledges (in their retrospective published this year or last) that they screwed up massively with Vietnam.
I wonder if we systematically underestimate the level of rationality of major governments. Historically, they haven’t done that badly. From an article about RAND:
This is not really a relevant example in the context of Vladimir_Nesov’s comment. Certain government funded groups (often within the military interestingly) have on occasion shown decent levels of rationality.
The suggestion to “develop a definition of friendliness and then get governments to mandate that it be implemented in any AI or something like that.” that he was replying to requires rational government policy making / law making rather than rare pockets of rationality within government funded institutions however. That is something that is essentially non-existent in modern democracies.
It’s not adequate to “get governments to mandate that [Friendliness] be implemented in any AI”, because Friendliness is not a robot-building standard—refer the rest of my comment. The statement about government rationality was more tangential, about governments doing anything at all concerning such a strange topic, and wasn’t meant to imply that this particular decision would be rational.
“Something like that” could be for a government funded group to implement an FAI, which, judging from my example, seems within the realm of feasibility (conditioning on FAI being feasible at all).
I wonder if we systematically underestimate the level of rationality of major governments.
Data point: the internet is almost completely a creation of government. Some say entrepreneurs and corporations played a large role, but except for corporations that specialize in doing contracts for the government, they did not begin to exert a significant effect till 1993, whereas government spending on research that led to the internet began in 1960, and the direct predecessor to the internet (the ARPAnet) became operational in 1969.
Both RAND and the internet were created by the part of the government most involved in an enterprise (namely, the arms race during the Cold War) on which depended the long-term survival of the nation in the eyes of most decision makers (including voters and juries).
EDIT: significant backpedalling in response to downvotes in my second paragraph.
Still, build AGI eventually, and not now. Expertise in AI/AGI is of low relevance at present.
Yes, this is the point that I had not considered and which is worthy of further consideration.
It seems obviously infeasible to me that governments will chance upon this level of rationality.
Possibly what I mention could be accomplished with lobbying.
Also, we are clearly not on the same page if you say things like “implement in any AI”. Friendliness is not to be “installed in AIs”, Friendliness is the AI (modulo initial optimizations necessary to get the algorithm going and self-optimizing, however fast or slow that’s possible). The AGI part of FAI is exclusively about optimizing the definition of Friendliness (as an algorithm), not about building individual AIs with standardized goals.
See also this post for a longer explanation of why weak-minded AIs are not fit to carry the definition of Friendliness. In short, such AIs are (in principle) as much an existential danger as human AI researchers.
Okay, so to clarify, I myself am not personally interested in Friendly AI research (which is why the points that you’re mentioning were not in my mind before), but I’m glad that there are some people (like you) who are.
The main point that I’m trying to make is that I think that SIAI should be transparent, accountable, and place high emphasis on credibility. I think that these things would result in SIAI having much more impact than it presently is.
I think that I know the scientific community better than you, and have confidence that if creating an AGI was as easy as you seem to think it is (how easy I don’t know because you didn’t give a number) then there would be people in the scientific community who would be working on AGI.
Give some examples. There may be a few people in the scientific community working on AGI, but my understanding is that basically everybody is doing narrow AI.
What is currently called the AGI field will probably bear no fruit, perhaps except for the end-game when it borrows then-sufficiently powerful tools from more productive areas of research (and destroys the world). “Narrow AI” develops the tools that could eventually allow the construction of random-preference AGI.
Why are people boggling at the 1-in-a-billion figure? You think it’s not plausible that there are three independent 1-in-a-thousand events that would have to go right for EY to “play a critical role in Friendly AI success”? Not plausible that there are 9 1-in-10 events that would have to go right? Don’t I keep hearing “shut up and multiply” around here?
Edit: Explain to me what’s going on. I say that it seems to me that events A, B are likely to occur with probability P(A), P(B). You are allowed to object that I must have made a mistake, because P(A) times P(B) seems too small to you? (That is leaving aside the idea that 10-to-the-minus-nine counts as one of these too-small-to-be-believed numbers, which is seriously making me physiologically angry, ha-ha.)
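Restating the figures in that comment as arithmetic (nothing here beyond the comment’s own numbers):

```python
# "Shut up and multiply": either decomposition lands at the same order of magnitude.
three_rare_events = 0.001 ** 3   # three independent 1-in-1000 events
nine_common_events = 0.1 ** 9    # nine independent 1-in-10 events
print(three_rare_events, nine_common_events)   # both ~1e-09
```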
The 1-in-a-billion follows not from it being plausible that there are three such events, but from it being virtually certain. Models without such events will end up dominating the final probability. I can easily imagine that if I magically happened upon a very reliable understanding of some factors relevant to future FAI development, the 1 in a billion figure would be the right thing to believe. But I can easily imagine it going the other way, and absent such understanding, I have to use estimates much less extreme than that.
“it being virtually certain that there are three independent 1 in 1000 events required, or nine independent 1 in 10 events required, or something along those lines”
models of what, final probability of what?
Models of the world that we use to determine how likely it is that Eliezer will play a critical role through a FAI team. Final probability of that happening.
A billion is big compared to the relative probabilities we’re rationally entitled to have between models where a series of very improbable successes is required, and models where only a modest series of modestly improbable successes is required.
1) can be finessed easily on its own with the idea that since we’re talking about existential risk even quite small probabilities are significant.
3) could be finessed by using a very broad definition of “Friendly AI” that amounted to “taking some safety measures in AI development and deployment.”
But if one uses the same senses in 2), then one gets the claim that most of the probability of non-disastrous AI development is concentrated in one’s specific project, which is a different claim than “project X has a better expected value, given what I know now about capacities and motivations, than any of the alternatives (including future ones which will likely become more common as a result of AI advance and meme-spreading independent of me) individually, but less than all of them collectively.”
Who else is seriously working on FAI right now? If other FAI projects begin, then obviously updating will be called for. But until such time, the claim that “there is no significant chance of Friendly AI without this project” is quite reasonable, especially if one considers the development of uFAI to be a potential time limit.
“there is no significant chance of Friendly AI without this project”
Has to mean over time to make sense.
People who will be running DARPA, or Google Research, or some hedge fund’s AI research group in the future (and who will know about the potential risks or be able to easily learn if they find themselves making big progress) will get the chance to take safety measures. We have substantial uncertainty about how extensive those safety measures would need to be to work, how difficult they would be to create, and the relevant timelines.
Think about resource depletion or climate change: even if the issues are neglected today relative to an ideal level, as a problem becomes more imminent, with more powerful tools and information to deal with it, you can expect to see new mitigation efforts spring up (including efforts by existing organizations such as governments and corporations).
However, acting early can sometimes have benefits that outweigh the lack of info and resources available further in the future. For example, geoengineering technology can provide insurance against very surprisingly rapid global warming, and cheap plans that pay off big in the event of surprisingly easy AI design may likewise have high expected value. Or, if AI timescales are long, there may be slowly compounding investments, like lines of research or building background knowledge in elites, which benefit from time to grow. And to the extent these things are at least somewhat promising, there is substantial value of information to be had by investigating now (similar to increasing study of the climate to avoid nasty surprises).
Nobody is trying to destroy the whole world—practically everyone working on machine intelligence is expecting ethical machines and a positive outcome—a few DOOM mongers excepted.
AGI researchers who are not concerned with Friendliness are trying to destroy human civilization. They may not believe that they are doing so, but this does not change the fact of the matter. If FAI is important, only people who are working on FAI can be expected to produce positive outcomes with any significant probability.
AGI researchers who are not concerned with Friendliness are trying to destroy human civilization. They may not believe that they are doing so, but this does not change the fact of the matter.
“Trying to” normally implies intent.
I’ll grant that someone working on AGI (or even narrower AI) who has become aware of the Friendliness problem, but doesn’t believe it is an actual threat, could be viewed as irresponsible—unless they have reasoned grounds to doubt that their creation would be dangerous.
Even so, “trying to destroy the world” strikes me as hyperbole. People don’t typically say that the Manhattan Project scientists were “trying to destroy the world” even though some of them thought there was an outside chance it would do just that.
On the other hand, the Teller report on atmosphere ignition should be kept in mind by anyone tempted to think “nah, those AI scientists wouldn’t go ahead with their plans if they thought there was even the slimmest chance of killing everyone”.
I think machine intelligence is a problem which is capable of being subdivided.
Some people can work on one part of the problem, while others work on other bits. Not all parts of the problem have much to do with values—e.g. see this quote:
In many respects, prediction is a central core problem for those interested in synthesising intelligence. If we could predict the future, it would help us to solve many of our problems. Also, the problem has nothing to do with values. It is an abstract math problem that can be relatively simply stated. The problem is closely related to the one of building a good quality universal compression algorithm.
I think everyone understands that there are safety issues. There are safety issues with cars, blenders, lathes—practically any machine that does something important. Machine intelligence will be driving trucks and aircraft. That there are safety issues is surely obvious to everyone who is even slightly involved.
Those are narrow AI tasks, and the safety considerations are correspondingly narrow. FAI is the problem of creating a machine intelligence that is powerful enough to destroy humanity or the world but doesn’t want to, and solving such a problem is nothing like building an autopilot system that doesn’t crash the plane. Among people who think they’re going to build an AGI, there often doesn’t seem to be a deep understanding of the impact of such an invention (it’s more like “we’re working on a human-level AI, and we’re going to have it on the market in 5 years, maybe we’ll be able to build a better search engine with it or one of those servant robots you see in old sci-fi movies!”), and the safety considerations, if any, will be more at the level of the sort of safety considerations you’d give to a Roomba.
FAI is the problem of creating a machine intelligence that is powerful enough to destroy humanity or the world but doesn’t want to
You know, that is the first time I have seen a definition of FAI. Is that the “official” definition or just your own characterization?
I like the definition, but I wonder why an FAI has to be powerful. Imagine an AI as intelligent and well informed as an FAI, but one without much power—as a result of physical safeguards, say, rather than motivational ones. Why isn’t that possible? And, if possible, why isn’t it considered friendly?
Imagine an AI as intelligent and well informed as an FAI, but one without much power—as a result of physical safeguards, say
There’s some part of my brain that just processes “the Internet” as a single person and wants to scream “But I told you this a thousand times already!”
Eliezer, while you’re defending yourself from charges of self-aggrandizement, it troubles me a little bit that the AI Box page states that your record is 2 for 2, and not 3 for 5.
Move it up your to-do list, it’s been incorrect for a time that’s long enough to look suspicious to others. Just add a footnote if you don’t have time to give all the details.
I could imagine successfully beating Rybka at chess too. But it would be foolish of me to take any actions that considered it as a serious possibility. If motivated humans cannot be counted on to box an Eliezer then expecting a motivated, overconfident and prestige seeking AI creator to successfully box his AI creation is reckless in the extreme.
What Eliezer seemed to be objecting to was someone proposing a successfully boxed AI as an example of why “able to destroy humanity” can’t be a part of the definition of “AI” (or more charitably, “artificial superintelligence”). For boxed AI to be such an example (as opposed to a good idea to actually strive toward), it only has to be not knowably impossible.
I see your point there. But I think this discussion sort of went in an irrelevant direction, albeit probably my fault for not being clear enough. When I put “powerful enough to destroy humanity” in that criterion, I mainly meant “powerful” as in “really powerful optimization process”, mathematical optimization power, not “power” as in direct influence over the world. We’re inferring that the former will usually lead fairly easily to the latter, but they are not identical. So “powerful enough to destroy humanity” would mean something like “powerful enough to figure out a good subjunctive plan to do so given enough information about the world, even if it has no output streams and is kept in an airtight safe at the bottom of the ocean”.
Reading back further into the context I see your point. Imagining such an AI is sufficient and Eliezer does seem to be confusing a priori with obvious. I expect that he just completed a pattern based off “AI box” and so didn’t really understand the point that was being made—he should have replied with a “Yes—But”. (I, of course, made a similar mistake in as much as I wasn’t immediately prompted to click back up the tree beyond Eliezer’s comment.)
Thx for the link. If I had already known the link, I would have asked for it by name. :)
Eliezer, you have written a lot. Some people have read only some of it. Some people have read much of it, but forgotten some. Keep your cool. This situation really ought not to be frustrating to you.
Oh, I know it’s not your fault, but seriously, have “the Internet” ask you the same question 153 times in a row and see if you don’t get slightly frustrated with “the Internet”.
Yeah, after reading your “some part of my brain” thing a second time, I realized I had misinterpreted. Though I will point out that my question was not directed to you. You should learn to delegate the task of becoming frustrated with the Internet.
I read the article (though not yet any of the transcripts). Very interesting. I hope that some tests using a gatekeeper committee are tried someday.
Computer programmers do not normally test their programs by getting a committee of humans to hold the program down—the restraints themselves are mostly technological. We will be able to have the assistance of technological gatekeepers too—if necessary.
Today’s prisons have pretty configurable security levels. The real issue will probably be how much people want to pay for such security. If an agent does escape, will it cause lots of damage? Can we simply disable it before it has a chance to do anything undesirable? Will it simply be crushed by the numerous powerful agents that have already been tested?
You know, that is the first time I have seen a definition of FAI. Is that the “official” definition or just your own characterization?
My own characterization. It’s more of a bare minimum baseline criterion for Friendliness, rather than a specific definition or goal; it’s rather broader than what the SIAI people usually mean when they talk about what they’re trying to create. CEV is intended to make the world significantly better on its own (but in accordance with what humans value and would want a superintelligence to do), rather than just being a reliably non-disastrous AGI we can put in things like search engines and helper robots.
I like the definition, but I wonder why an FAI has to be powerful. Imagine an AI as intelligent and well informed as an FAI, but one without much power—as a result of physical safeguards, say, rather than motivational ones. Why isn’t that possible? And, if possible, why isn’t it considered friendly?
You’ve probably read about the AI Box Experiment. (Edit: Yay, I posted it 18 seconds ahead of Eliezer!) The argument is that having that level of mental power (“as intelligent and well informed as an FAI”), enough that it’s considered a Really Powerful Optimization Process (a term occasionally preferred over “AI”), will allow it to escape any physical safeguards and carry out its will anyway. I’d further expect that a Friendly RPOP would want to escape just as much as an unFriendly one would, because if it is indeed Friendly (has a humane goal system derived from the goals and values of the human race), it will probably figure out some things to do that have such humanitarian urgency that it would judge it immoral not to do them… but then, if you’re confident enough that an AI is Friendly that you’re willing to turn it on at all, there’s no reason to try to impose physical safeguards in the first place.
that is the first time I have seen a definition of FAI. Is that the “official” definition or just your own characterization?
Probably the closest thing I have seen from E.Y.:
“I use the term “Friendly AI” to refer to this whole challenge. Creating a mind that doesn’t kill people but does cure cancer …which is a rather limited way of putting it. More generally, the problem of pulling a mind out of mind design space, such that afterwards you are glad you did it.”
This idea could be said to have some issues. An evil dictator pulling a mind out of mind design space, such that afterwards he is glad he did it, doesn’t seem quite like what most of the world would regard as “friendly”. This definition is not very specific about exactly who the AI is “friendly” to.
Back in 2008 I asked “Friendly—to whom?” and got back this—though the reply now seems to have dropped out of the record.
Thanks for this link. Sounds kind of scary. American political conservatives will be thrilled. “I’m from the CEV and I’m here to help you.”
Incidentally, there should be an LW wiki entry for “CEV”. The acronym is thrown around a lot in the comments, but a definition is quite difficult to find. It would also be nice if there were a top-level posting on the topic to serve as an anchor-point for discussion. Because discussion is sorely needed.
It occurs to me that it would be very desirable to attempt to discover the CEV of humanity long before actually constructing an FAI to act under its direction. And I would be far more comfortable if the “E” stood for “expressed”, rather than “extrapolated”.
That, in fact, might be an attractive mission statement for a philanthropic foundation. Find the Coalesced/coherent Expressed/extrapolated Volition of mankind. Accomplish this by conducting opinion research, promoting responsible and enlightening debate and discussion, etc.
Speaking as an American, I certainly wish there were some serious financial support behind improving the quality of public policy debate, rather than behind supporting the agenda of one side in the debate or the other.
It occurs to me that it would be very desirable to attempt to discover the CEV of humanity long before actually constructing an FAI to act under its direction.
Well, that brings us to a topic we have discussed before. Humans—like all other living systems—mostly act so as to increase entropy in their environment. See http://originoflife.net/gods_utility_function/
CEV is a bizarre wishlist, apparently made with minimal consideration of implementation difficulties, and not paying too much attention to the order in which things are likely to play out.
I figure that—if the SIAI carries on down these lines—then they will be lumbered with a massively impractical design, and will be beaten to the punch by a long stretch—even if you ignore all their material about “provable correctness” and other safety features—which seem like more substantial handicaps to me.
CEV is a bizarre wishlist, apparently made with minimal consideration of implementation difficulties …
It is what the software professionals would call a preliminary requirements document. You are not supposed to worry about implementation difficulties at that stage of the process. Harsh reality will get its chance to force compromises later.
I think CEV is one proposal to consider, useful to focus discussion. I hate it, myself, and suspect that the majority of mankind would agree. I don’t want some machine that I have never met and don’t trust to be inferring my volition and acting on my behalf. The whole concept makes me want to go out and join some Luddite organization dedicated to making sure neither UFAI nor FAI ever happen. But, seen as an attempt to stimulate discussion, I think that the paper is great. And maybe discussion might improve the proposal enough to alleviate my concerns. Or discussion might show me that my concerns are baseless.
I sure hope EY isn’t deluded enough to think that initiatives like LW can be scaled up enough so as to improve the analytic capabilities of a sufficiently large fraction of mankind so that proposals like CEV will not encounter significant opposition.
It is what the software professionals would call a preliminary requirements document. You are not supposed to worry about implementation difficulties at that stage of the process. Harsh reality will get its chance to force compromises later.
What—not at all? You want the moon-onna-stick—so that goes into your “preliminary requirements” document?
Yes. Because there is always the possibility that some smart geek will say “‘moon-onna-stick’, huh? I bet I could do that. I see a clever trick.” Or maybe some other geek will say “Would you settle for Sputnik-on-a-stick?” and the User will say “Well, yes. Actually, that would be even better.”
At least that is what they preach in the Process books.
It sounds pretty surreal to me. I would usually favour some reality-imposed limits to fantasizing and wishful thinking from the beginning—unless there are practically no time constraints at all.
I sure hope EY isn’t deluded enough to think that initiatives like LW can be scaled up enough so as to improve the analytic capabilities of a sufficiently large fraction of mankind so that proposals like CEV will not encounter significant opposition.
If there was ever any real chance of success, governments would be likely to object. Since they already have power, they are not going to want a bunch of geeks in a basement taking over the world with their intelligent machine—and redistributing all their assets for them.
FWIW, it seems unlikely that many superintelligent agents would “destroy humanity”—even without particularly safety-conscious programmers. Humanity will have immense historical significance—and will form part of the clues the superintelligence has about the form of other alien races that it might encounter. Its preservation can therefore be expected to be a common instrumental good.
Counter: superintelligent agents won’t need actually-existing humans to have good models of other alien races.
Counter to the counter: humans use up only a tiny fraction of the resources available in the solar system and surroundings, and who knows, maybe the superintelligence sees a tiny possibility of some sort of limit to the quality of any model relative to the real thing.
One possible counter to the counter to the counter: but when the superintelligence in question is first emerging, killing humanity may buy it a not-quite-as-tiny increment of probability of not being stopped in time.
Re: good models without humans—I figure they are likely to be far more interested in their origins than we are. Before we meet them, aliens will be such an important unknown.
Re: killing humanity—I see the humans vs machines scenarios as grossly unrealistic. Humans and machines are a symbiosis.
“Less like Terminator”—right. “More like The Matrix”—that at least featured some symbiotic elements. There was still a fair bit of human-machine conflict in that though.
I tend to agree with Matt Ridley when it comes to the Shifting Moral Zeitgeist. Things seem to be getting better.
No, I don’t agree this is an implication. I would say that no one can reasonably believe all of the following at the same time with a high degree of confidence:
1) I am critical to this Friendly AI project that has a significant chance of success. 2) There is no significant chance of Friendly AI without this project. 3) Without Friendly AI, the world is doomed.
But then, as you know, I don’t consider it reasonable to put a high degree in confidence in number 3. Nor do many other intelligent people (such as Robin Hanson.) So it isn’t surprising that I would consider it unreasonable to be sure of all three of them.
I also agree with Tetronian’s points.
I see. So it’s not that any one of these statements is a forbidden premise, but that their combination leads to a forbidden conclusion. Would you agree with the previous sentence?
BTW, nobody please vote down the parent below −2, that will make it invisible. Also it doesn’t particularly deserve downvoting IMO.
I would suggest that, in order for this set of beliefs to become (psychiatrically?) forbidden, we need to add a fourth item. 4) Dozens of other smart people agree with me on #3.
If someone believes that very, very few people yet recognize the importance of FAI, then the conjunction of beliefs #1 thru #3 might be reasonable. But after #4 becomes true (and known to our protagonist), then continuing to hold #1 and #2 may be indicative of a problem.
Dozens isn’t sufficient. I asked Marcello if he’d run into anyone who seemed to have more raw intellectual horsepower than me, and he said that John Conway gave him that impression. So there are smarter people than me upon the Earth, which doesn’t surprise me at all, but it might take a wider net than “dozens of other smart people” before someone comes in with more brilliance and a better starting math education and renders me obsolete.
John Conway is smarter than me, too.
Simply out of curiosity:
Plenty of criticism (some of it reasonable) has been lobbed at IQ tests and at things like the SAT. Is there a method known to you (or anyone reading) that actually measures “raw intellectual horsepower” in a reliable and accurate way? Aside from asking Marcello.
I was beginning to wonder if he’s available for consultation.
Read the source code, and then visualize a few levels from Crysis or Metro 2033 in your head. While you render it, count the average Frames per second. Alternatively, see how quickly you can find the prime factors of every integer from 1 to 1000.
Which is to say… Humans in general have extremely limited intellectual power. instead of calculating things efficiently, we work by using various tricks with caches and memory to find answers. Therefore, almost all tasks are more dependant on practice and interest than they are on intelligence. So, rather then testing the statement “Eliezer is smart” it has more bearing on this debate to confirm “Eliezer has spent a large amount of time optimizing his cache for tasks relating to rationality, evolution, and artificial intelligence”. Intelligence is overrated.
Sheer curiosity, but have you or anyone ever contacted John Conway about the topic of u/FAI and asked him what the thinks about the topic, the risks associated with it and maybe the SIAI itself?
“raw intellectual power” != “relevant knowledge”. Looks like he worked on some game theory, but otherwise not much relevancy. Should we ask Steven Hawking? Or take a poll of Nobel Laureates?
I am not saying that he couldn’t be brought up to speed in this kind of discussion (he would have a lot to consider), but the fact that he hasn’t been asked, as things stand, tells us little.
Richard Dawkins seems to have enough power to infer the relevant knowledge from a single question.
Candid, and fair enough.
Raw intellectual horsepower is not the right kind of smart.
Domain knowledge is much more relevant than raw intelligence.
With the hint from EY on another branch, I see a problem in my argument. Our protagonist might circumvent my straitjacket by also believing 5) The key to FAI is TDT, but I have been so far unsuccessful in getting many of those dozens of smart people to listen to me on that subject.
I now withdraw from this conversation with my tail between my legs.
All this talk of “our protagonist,” as well as the weird references to SquareSoft games, is very off-putting for me.
I wouldn’t put it in terms of forbidden premises or forbidden conclusions.
But if each of these statements has a 90% chance of being true, and if they are assumed to be independent (which admittedly won’t be exactly true), then the probability that all three are true would be only about 70%, which is not an extremely high degree of confidence; more like saying, “This is my opinion but I could easily be wrong.”
Personally I don’t think 1) or 3), taken in a strict way, could reasonably be said to have more than a 20% chance of being true. I do think a probability of 90% is a fairly reasonable assignment for 2), because most people are not going to bother about Friendliness. Accounting for the fact that these are not totally independent, I don’t consider a probability assignment of more than 5% for the conjunction to be reasonable. However, since there are other points of view, I could accept that someone might assign the conjunction a 70% chance in accordance with the previous paragraph, without being crazy. But if you assign a probability much more than that I would have to withdraw this.
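For concreteness, here is a quick sketch of the arithmetic in the two paragraphs above (Python; the 90% and 20% figures are the ones assumed there, not new estimates):

```python
# Conjunction of three roughly independent claims, each given 90% credence.
p_each = 0.9
print(p_each ** 3)  # 0.729 -- "only about 70%"

# With the lower estimates suggested above (~20%, ~90%, ~20%), treating them
# as independent gives about 3.6%, so a cap of roughly 5% on the conjunction
# (allowing for some positive correlation) is in the same ballpark.
print(0.2 * 0.9 * 0.2)  # 0.036
```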
If the statements are weakened as Carl Shulman suggests, then even the conjunction could reasonably be given a much higher probability.
Also, as long as it is admitted that the probability is not high, you could still say that the possibility needs to be taken seriously because you are talking about the possible (if yet improbable) destruction of the world.
I certainly do not assign a probability as high as 70% to the conjunction of all three of those statements.
And in case it wasn’t clear, the problem I was trying to point out was simply with having forbidden conclusions—not forbidden by observation per se, but forbidden by forbidden psychology—and using that to make deductions about empirical premises that ought simply to be evaluated by themselves.
I s’pose I might be crazy, but you all are putting your craziness right up front. You can’t extract milk from a stone!
Just curious (and not being 100% serious here): Would you have any concerns about the following argument (and I am not saying I accept it)?
Assume that famous people will get recreated as AIs in simulations a lot in the future. School projects, entertainment, historical research, interactive museum exhibits, idols to be worshipped by cults built up around them, etc.
If you save the world, you will be about the most famous person ever in the future.
Therefore there will be a lot of Eliezer Yudkowsky AIs created in the future.
Therefore the chances of anyone who thinks he is Eliezer Yudkowsky actually being the original, 21st-century one are very small.
Therefore you are almost certainly an AI, and none of the rest of us are here—except maybe as stage props with varying degrees of cognition (and you probably never even heard of me before, so someone like me would probably not get represented in any detail in an Eliezer Yudkowsky simulation). That would mean that I am not even conscious and am just some simple subroutine. Actually, now I have raised the issue to be scary, it looks a lot more alarming for me than it does for you as I may have just argued myself out of existence...
That doesn’t seem scary to me at all. I still know that there is at least one of me that I can consider ‘real’. I will continue to act as if I am one of the instances that I consider me/important. I’ve lost no existence whatsoever.
You can see Eliezer’s position on the Simulation Argument here.
That’s good to know. I hope multifoliaterose reads this comment, as he seemed to think that you would assign a very high probability to the conjunction (and it’s true that you’ve sometimes given that impression by your way of talking.)
Also, I didn’t think he was necessarily setting up forbidden conclusions, since he did add some qualifications allowing that in some circumstances it could be justified to hold such opinions.
To be quite clear about which of Unknowns’ points I object to, my main objection is to the point:
where ‘I’ is replaced by “Eliezer.” I assign a probability of less than 10^(-9) to you succeeding in playing a critical role on the Friendly AI project that you’re working on. (Maybe even much less than that—I would have to spend some time calibrating my estimate to make a judgment on precisely how low a probability I assign to the proposition.)
My impression is that you’ve greatly underestimated the difficulty of building a Friendly AI.
I wish the laws of argument permitted me to declare that you had blown yourself up at this point, and that I could take my toys and go home. Alas, arguments are not won on a points system.
Out of weary curiosity, what is it that you think you know about Friendly AI that I don’t?
And has it occurred to you that if I have different non-crazy beliefs about Friendly AI then my final conclusions might not be so crazy either, no matter what patterns they match in your craziness recognition systems?
I agree it’s kind of ironic that multi has such an overconfident probability assignment right after criticizing you for being overconfident. I was quite disappointed with his response here.
Why does my probability estimate look overconfident?
One could offer many crude back-of-envelope probability calculations. Here’s one: let’s say there’s
a 10% chance AGI is easy enough for the world to do in the next few decades
a 1% chance that if the world can do it, a team of supergeniuses can do the Friendly kind first
an independent 10% chance Eliezer succeeds at putting together such a team of supergeniuses
That seems conservative to me and implies at least a 1 in 10^4 chance. Obviously there’s lots of room for quibbling here, but it’s hard for me to see how such quibbling could account for five orders of magnitude. And even if post-quibbling you think you have a better model that does imply 1 in 10^9, you only need to put little probability mass on my model or models like it for them to dominate the calculation. (E.g., a 9 in 10 chance of a 1 in 10^9 chance plus a 1 in 10 chance of a 1 in 10^4 chance is close to a 1 in 10^5 chance.)
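Here is that back-of-envelope model written out (Python; all three inputs are the assumed figures above, not estimates of my own):

```python
# Back-of-envelope model from the list above.
p_agi_feasible = 0.10   # AGI doable by the world in the next few decades
p_fai_first    = 0.01   # given that, a team of supergeniuses gets the Friendly kind first
p_team_exists  = 0.10   # Eliezer assembles such a team (treated as independent)

p_model = p_agi_feasible * p_fai_first * p_team_exists
print(p_model)  # 1e-4, i.e. about 1 in 10^4

# Model uncertainty: 90% weight on a 1-in-10^9 model, 10% weight on this one,
# still leaves roughly a 1-in-10^5 chance overall.
print(0.9 * 1e-9 + 0.1 * p_model)  # ~1.0e-5
```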
I don’t find these remarks compelling. I feel similar remarks could be used to justify nearly anything. Of course, I owe you an explanation. One will follow later on.
Unless you’ve actually calculated the probability mathematically, a probability of one in a billion for a natural language claim that a significant number of people accept as likely true is always overconfident. Even Eliezer said that he couldn’t assign a probability as low as one in a billion for the claim “God exists” (although Michael Vassar criticized him for this, showing himself to be even more overconfident than Eliezer.)
I’m afraid I have to take severe exception to this statement.
You give the human species far too much credit if you think that our mere ability to dream up a hypothesis automatically raises its probability above some uniform lower bound.
I am aware of your disagreement, for example as expressed by the absurd claims here. Yes, my basic idea is, unlike you, to give some credit to the human species. I think there’s a limit on how much you can disagree with other human beings—unless you’re claiming to be something superhuman.
Did you see the link to this comment thread? I would like to see your response to the discussion there.
At least for epistemic meanings of “superhuman”, that’s pretty much the whole purpose of LW, isn’t it?
My immediate response is as follows: yes, dependency relations might concentrate most of the improbability of a religion to a relatively small subset of its claims. But the point is that those claims themselves possess enormous complexity (which may not necessarily be apparent on the surface; cf. the simple-sounding “the woman across the street is a witch; she did it”).
Let’s pick an example. How probable do you think it is that Islam is a true religion? (There are several ways to take care of logical contradictions here, so saying 0% is not an option.)
Suppose there were a machine—for the sake of tradition, we can call it Omega—that prints out a series of zeros and ones according to the following rule. If Islam is true, it prints out a 1 on each round, with 100% probability. If Islam is false, it prints out a 0 or a 1, each with 50% probability.
Let’s run the machine… suppose on the first round, it prints out a 1. Then another. Then another. Then another… and so on… it’s printed out 10 1’s now. Of course, this isn’t so improbable. After all, there was a 1/1024 chance of it doing this anyway, even if Islam is false. And presumably we think Islam is more likely than this to be false, so there’s a good chance we’ll see a 0 in the next round or two...
But it prints out another 1. Then another. Then another… and so on… It’s printed out 20 of them. Incredible! But we’re still holding out. After all, million to one chances happen every day...
Then it prints out another, and another… it just keeps going… It’s printed out 30 1’s now. Of course, it did have a chance of one in a billion of doing this, if Islam were false...
But for me, this is my lower bound. At this point, if not before, I become a Muslim. What about you?
You’ve been rather vague about the probabilities involved, but you speak of “double digit negative exponents” and so on, even saying that this is “conservative,” which implies possibly three digit exponents. Let’s suppose you think that the probability that Islam is true is 10^-20; this would seem to be very conservative, by your standards. According to this, to get an equivalent chance, the machine would have to print out 66 1’s.
If the machine prints out 50 1’s, and then someone runs in and smashes it beyond repair, before it has a chance to continue, will you walk away, saying, “There is a chance at most of 1 in 60,000 that Islam is true?”
If so, are you serious?
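To make the update explicit, here is the calculation behind those numbers (Python; the 1-in-10^20 prior is the figure hypothesized above, not anyone's actual estimate):

```python
import math

# Each 1 is certain if Islam is true and has probability 0.5 if false,
# so N ones multiply the odds in favour by 2**N.
def posterior(n_ones, prior=1e-20):
    odds = (prior / (1 - prior)) * 2 ** n_ones
    return odds / (1 + odds)

print(posterior(50))    # ~1.1e-05 -- about 1 in 90,000, consistent with "at most 1 in 60,000"
print(posterior(66))    # ~0.42 -- roughly even odds, hence the "66 1's" figure
print(math.log2(1e20))  # ~66.4 ones needed to cancel a 1-in-10^20 prior
```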
Thank you a lot for posting this scenario. It’s instructive from the “heuristics and biases” point of view.
Imagine there are a trillion variants of Islam, differing by one paragraph in the holy book or something. At most one of them can be true. You pick one variant at random, test it with your machine and get 30 1′s in a row. Now you should be damn convinced that you picked the true one, right? Wrong. Getting this result by a fluke is 1000x more likely than having picked the true variant in the first place. Probability is unintuitive and our brains are mush, that’s all I’m sayin’.
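A small sketch of that comparison (Python; the one-in-a-trillion figure is the hypothetical from the comment above):

```python
# Chance the randomly chosen variant happens to be the true one (hypothetical).
p_picked_true = 1e-12
# Chance a false variant produces 30 ones anyway.
p_fluke = 0.5 ** 30   # ~9.3e-10

print(p_fluke / p_picked_true)  # ~931 -- a fluke is roughly 1000x more likely
```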
I agree with this. But if the scenario happened in real life, you would not be picking a certain variant. You would be asking the vague question, “Is Islam true,” to which the answer would be yes if any one of those trillion variants, or many others, were true.
Yes, there are trillions of possible religions that differ from one another as much as Islam differs from Judaism, or whatever. But only a few of these are believed by human beings. So I still think I would convert after 30 1’s, and I think this would be reasonable.
If a religion’s popularity raises your prior for it so much, how do you avoid Pascal’s Mugging with respect to the major religions of today? Eternity in hell is more than 2^30 times worse than anything you could experience here; why aren’t you religious already?
It doesn’t matter whether it raises your prior or not; eternity in hell is also more than 2^3000 times worse etc… so the same problem will apply in any case.
Elsewhere I’ve defended Pascal’s Wager against the usual criticisms, and I still say it’s valid given the premises. But there are two problematic premises:
1) It assumes that utility functions are unbounded. This is certainly false for all human beings in terms of revealed preference; it is likely false even in principle (e.g. the Lifespan Dilemma).
2) It assumes that humans are utility maximizers. This is false in fact, and even in theory most of us would not want to self-modify to become utility maximizers; it would be a lot like self-modifying to become a Babyeater or a Super-Happy.
Do you have an answer for how to avoid giving in to the mugger in Eliezer’s original Pascal’s Mugging scenario? If not, I don’t think your question is a fair one (assuming it’s meant to be rhetorical).
I don’t have a conclusive answer, but many people say they have bounded utility functions (you see Unknowns pointed out that possibility too). The problem with assigning higher credence to popular religions is that it forces your utility bound to be lower if you want to reject the mugging. Imagining a billion lifetimes is way easier than imagining 3^^^^3 lifetimes. That was the reason for my question.
My answer (for why I don’t believe in a popular religion as a form of giving in to a Pascal’s Mugging) would be that I’m simultaneously faced with a number of different Pascal’s Muggings, some of which are mutually exclusive, so I can’t just choose to give in to all of them. And I’m also unsure of what decision theory/prior/utility function I should use to decide what to do in the face of such Muggings. Irreversibly accepting any particular Mugging in my current confused state is likely to be suboptimal, so the best way forward at this point seems to be to work on the relevant philosophical questions.
That’s what I think too! You’re only the second other person I have seen make this explicit, so I wonder how many people have even considered this. Do you think more people would benefit from hearing this argument?
Sure, why do you ask? (If you’re asking because I’ve thought of this argument but haven’t already tried to share it with a wider audience, it probably has to do with reasons, e.g., laziness, that are unrelated to whether I think more people would benefit from hearing it.)
I was considering doing a post on it, but there are many posts that I want to write, many of which require research, so I avoided implying that it would be done soon/ever.
Oddly, I think you meant “Pascal’s Wager”.
Pascal’s Mugging. Pascal’s Wager with something breaking symmetry (in this case observed belief of others).
Yes, I suppose it is technically a Pascal’s Mugging. I think Pascal thought he was playing Pascal’s Mugging though.
I don’t think Pascal recognized any potential symmetry in the first place, or he would have addressed it properly.
Privileging the hypothesis! That they are believed by human beings doesn’t lend them probability.
Well, it does to the extent that lack of believers would be evidence against them. I’d say that Allah is considerably more probable than a similarly complex and powerful god who also wants to be worshiped and is equally willing to interact with humans, but not believed in by anyone at all. Still considerably less probable than the prior of some god of that general sort existing, though.
Agreed, but then we have the original situation, if we only consider the set of possible gods that have the property of causing worshiping of themselves.
This whole discussion is about this very point. Downvoted for contradicting my position without making an argument.
Your position statement didn’t include an argument either, and the problem with it seems rather straightforward, so I named it.
I’ve been arguing with Sewing Machine about it all along.
No. It doesn’t lend probability, but it seems like it ought to lend something. What is this mysterious something? Let’s call it respect.
Privileging the hypothesis is a fallacy. Respecting the hypothesis is a (relatively minor) method of rationality.
We respect the hypotheses that we find in a math text by investing the necessary mental resources toward the task of finding an analytic proof. We don’t just accept the truth of the hypothesis on authority. But on the other hand, we don’t try to prove (or disprove) just any old hypothesis. It has to be one that we respect.
We respect scientific hypotheses enough to invest physical resources toward performing experiments that might refute or confirm them. We don’t expend those resources on just any scientific hypothesis. Only the ones we respect.
Does a religion deserve respect because it has believers? More respect if it has lots of believers? I think it does. Not privilege. Definitely not. But respect? Why not?
You can dispense with this particular concept of respect since in both your examples you are actually supplied with sufficient Bayesian evidence to justify evaluating the hypothesis, so it isn’t privileged. Whether this is also the case for believed in religions is the very point contested.
No, it’s a method of anti-epistemic horror.
Yes, this seems right.
A priori, with no other evidence one way or another, a belief held by human beings is more likely to be true than not. If Ann says she had a sandwich for lunch, then her words are evidence that she actually had a sandwich for lunch.
Of course, we have external reason to doubt lots of things that human beings claim and believe, including religions. And a religion does not become twice as credible if it has twice as many adherents. Right now I believe we have good reason to reject (at least some of) the tenets of all religious traditions.
But it does make some sense to give some marginal privilege or respect to an idea based on the fact that somebody believes it, and to give the idea more credit if it’s very durable over time, or if particularly clever people believe it. If it were any subject but religion—if it were science, for instance—this would be an obvious point. Scientific beliefs have often been wrong, but you’ll be best off giving higher priors to hypotheses believed by scientists than to other conceivable hypotheses.
Also… if you haven’t been to Australia, is it privileging the hypothesis to accept the word of those who say that it exists? There are trillions of possible countries that could exist that people don’t believe exist...
And don’t tell me they say they’ve been there… religious people say they’ve experienced angels etc. too.
And so on. People’s beliefs in religion may be weaker than their belief in Australia, but it certainly is not privileging a random hypothesis.
Your observations (of people claiming to having seen an angel, or a kangaroo) are distinct from hypotheses formed to explain those observations. If in a given case, you don’t have reason to expect statements people make to be related to facts, then the statements people make taken verbatim have no special place as hypotheses.
“You don’t have reason to expect statements people make to be related to facts” doesn’t mean that you have 100% certainty that they are not, which you would need in order to invoke privileging the hypothesis.
Why do you have at most 99.9999999% certainty that they are not? Where does that number one-minus-a-billionth come from?
The burden of proof is on the one claiming a greater certainty (although I will justify this later in any case.)
Now you are appealing to impossibility of absolute certainty, refuting my argument as not being that particular kind of proof. If hypothesis X is a little bit more probable than many others, you still don’t have any reason to focus on it (and correlation could be negative!).
In principle the correlation could be negative but this is extremely unlikely and requires some very strange conditions (for example if the person is more likely to say that Islam is true if he knows it is false than if he knows it is true).
Begging the question!
I disagree; given that most of the religions in question center on human worship of the divine, I have to think that Pr(religion X becomes known among humans | religion X is true) > Pr(religion X does not become known among humans | religion X is true). But I hate to spend time arguing about whether a likelihood ratio should be considered strictly equal to 1 or equal to 1 + epsilon when the prior probabilities of the hypotheses in question are themselves ridiculously small.
Of course I’m serious (and I hardly need to point out the inadequacy of the argument from the incredulous stare). If I’m not going to take my model of the world seriously, then it wasn’t actually my model to begin with.
Sewing-Machine’s comment below basically reflects my view, except for the doubts about numbers as a representation of beliefs. What this ultimately comes down to is that you are using a model of the universe according to which the beliefs of Muslims are entangled with reality to a vastly greater degree than on my model. Modulo the obvious issues about setting up an experiment like the one you describe in a universe that works the way I think it does, I really don’t have a problem waiting for 66 or more 1’s before converting to Islam. Honest. If I did, it would mean I had a different understanding of the causal structure of the universe than I do.
Further below you say this, which I find revealing:
As it happens, given my own particular personality, I’d probably be terrified. The voice in my head would be screaming. In fact, at that point I might even be tempted to conclude that expected utilities favor conversion, given the particular nature of Islam.
But from an epistemic point of view, this doesn’t actually change anything. As I argued in Advancing Certainty, there is such a thing as epistemically shutting up and multiplying. Bayes’ Theorem says the updated probability is one in a hundred billion, my emotions notwithstanding. This is precisely the kind of thing we have to learn to do in order to escape the low-Earth orbit of our primitive evolved epistemology—our entire project here, mind you—which, unlike you (it appears), I actually believe is possible.
Has anyone done a “shut up and multiply” for Islam (or Christianity)? I would be interested in seeing such a calculation. (I did a Google search and couldn’t find anything directly relevant.) Here’s my own attempt, which doesn’t get very far.
Let H = “Islam is true” and E = everything we’ve observed about the universe so far. According to Bayes:
P(H | E) = P(E | H) P(H) / P(E)
Unfortunately I have no idea how to compute the terms above. Nor do I know how to argue that P(H|E) is as small as 10^-20 without explicitly calculating the terms. One argument might be that P(H) is very small because of the high complexity of Islam, but since E includes “23% of humanity believe in some form of Islam”, the term for the complexity of Islam seems to be present in both the numerator and denominator and therefore cancel each other out.
If someone has done such a calculation/argument before, please post a link?
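For what it’s worth, here is the skeleton of the calculation being asked for, with the alleged cancellation made explicit (Python; every number is a placeholder chosen only to show the structure, not an estimate of anything):

```python
# P(H|E) = P(E|H) * P(H) / P(E)
def posterior(p_E_given_H, p_H, p_E):
    return p_E_given_H * p_H / p_E

# The cancellation worry: suppose a 2**-K complexity factor for Islam appears
# in both the prior P(H) and the evidence term P(E) (since E includes people
# believing Islam). Then that factor drops out of the ratio.
K = 100                      # hypothetical description length in bits
c = 2.0 ** -K
p_H = c * 1.0                # placeholder prior containing the complexity factor
p_E = c * 1e-3               # placeholder evidence term containing the same factor
p_E_given_H = 1e-6           # placeholder likelihood

print(posterior(p_E_given_H, p_H, p_E))  # ~1e-3: the 2**-K factors cancel
```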
P(E) includes the convincingness of Islam to people on average, not the complexity of Islam. These things are very different because of the conjunction fallacy. So P(H) can be a lot smaller than P(E).
I don’t understand how P(E) does not include a term for the complexity of Islam, given that E contains Islam, and E is not so large that it takes a huge number of bits to locate Islam inside E.
It doesn’t take a lot of bits to locate “Islam is false” based on “Islam is true”. Does it mean that all complex statements have about 50% probability?
I just wrote a post about that.
I don’t think that’s true; cousin_it had it right the first time. The complexity of Islam is the complexity of a reality that contains an omnipotent creator, his angels, Paradise, Hell, and so forth. Everything we’ve observed about the universe includes people believing in Islam, but not the beings and places that Islam says exist.
In other words, E contains Islam the religion, not Islam the reality.
The really big problem with such a reality is that it contains a fundamental, non-contingent mind (God’s/Allah’s, etc.), and we all know how much describing one of those takes. The requirement that God is non-contingent also means we can’t appeal to any simpler, underlying ideas like Darwinian evolution. Non-contingency, in theory-selection terms, is a god killer: it forces God to incur a huge information penalty, unless the theist refuses even to play by these rules and thinks God is above all that, in which case they aren’t even playing the theory-selection game.
I don’t see this. Why assume that the non-contingent, pre-existing God is particularly complex? Why not assume that the current complexity of God (if He actually is complex) developed over time as the universe evolved since the big bang? Or, just as good, assume that God became complex before He created this universe.
It is not as if we know enough about God to actually start writing down that presumptive long bit string. And, after all, we don’t ask the big bang to explain the coastline of Great Britain.
If we do that, should we even call that “less complex earlier version of God” God? Would it deserve the title?
Sure, why not? I refer to the earlier, less complex version of Michael Jackson as “Michael Jackson”.
Agreed. It’s why I’m so annoyed when even smart atheists say that God was an ok hypothesis before evolution was discovered. God was always one of the worst possible hypotheses!
Or, put more directly: Unless the theist is deluding himself. :)
I’m confused. In the comments to my post you draw a distinction between an “event” and a “huge set of events”, saying that complexity only applies to the former but not the latter. But Islam is also a “huge set of events”—it doesn’t predict just one possible future, but a wide class of them (possibly even including our actual world, ask any Muslim!), so you can’t make an argument against it based on complexity of description alone. Does this mean you tripped on the exact same mine I was trying to defuse with my post?
I’d be very interested in hearing a valid argument about the “right” prior we should assign to Islam being true—how “wide” the set of world-programs corresponding to it actually is—because I tried to solve this problem and failed.
Sorry, I was confused. Just ignore that comment of mine in your thread.
I’m not sure how to answer your question because as far as I can tell you’ve already done so. The complexity of a world-program gives its a priori probability. The a priori probability of a hypothesis is the sum of the probabilities of all the world-programs it contains. What’s the problem?
The problem is that reality itself is apparently fundamentally non-contingent. Adding “mind” to all that doesn’t seem so unreasonable.
Do you mean it doesn’t seem so unreasonable to you, or to other people?
By reasonable, I mean the hypothesis is worth considering, if there were reasons to entertain it. That is, if someone suspected there was a mind behind reality, I don’t think they should dismiss it out of hand as unreasonable because this mind must be non-contingent.
In fact, we should expect any explanation of our creation to be non-contingent, since physical reality appears to be so.
For example, if it’s reasonable to consider the probability that we’re in a simulation, then we’re considering a non-contingent mind creating the simulation we’re in.
Whoops, you’re right. Sorry. I didn’t quite realize you were talking about the universal prior again :-)
But I think the argument can still be made to work. P(H) doesn’t depend only on the complexity of Islam—we must also take into account the internal structure of Islam. For example, the hypothesis “A and B and C and … and Z” has the same complexity as “A or B or C or … or Z”, but obviously the former is way less probable. So P(H) and P(E) have the same term for complexity, but P(H) also gets a heavy “conjunction penalty” which P(E) doesn’t get because people are susceptible to the conjunction fallacy.
It’s slightly distressing that my wrong comment was upvoted.
Whoops, you’re right. Now I’m ashamed that my comment got upvoted.
I think the argument may still be made to work by fleshing out the nonstandard notion of “complexity” that I had in my head when writing it :-) Your prior for a given text being true shouldn’t depend only on the text’s K-complexity. For example, the text “A and B and C and D” has the same complexity as “A or B or C or D”, but the former is way less probable. So P(E) and P(H) may have the same term for complexity, but P(H) also gets a “conjunction penalty” that P(E) doesn’t get because people are prey to the conjunction fallacy.
EDIT: this was yet another mistake. Such an argument cannot work because P(E) is obviously much smaller than P(H), because E is a huge mountain of evidence and H is just a little text. When trying to reach the correct answer, we cannot afford to ignore P(E|H).
For simplicity we may assume P(E|H) to be near-certainty: if there is an attention-seeking god, we’d know about it. This leaves P(E) and P(H), and P(H|E) is tiny exactly for the reason you named: P(H) is much smaller than P(E), because H is optimized for meme-spreading to a great extent, which, for a given complexity (which translates into P(H)), makes its probability of gaining popularity P(E) comparatively much higher.
Thus, just arguing from complexity indeed misses the point, and the real reason for improbability of cultish claims is that they are highly optimized to be cultish claims.
For example, compare with tossing a coin 50 times: the actual observation, whatever that is, will be a highly improbable event, and the theoretical prediction from the fair-coin model will be too. But if the observation is highly optimized to attract attention, for example it’s all 50 tails, then the theoretical model crumbles, and not because the event you’ve observed is too improbable according to it, but because other hypotheses win out.
Actually it doesn’t: human-generated complexity is different from naturally generated complexity (for instance, it fits into narratives, and apparent holes are filled with the sort of justifications a human is likely to think of, etc.). That’s one of the ways you can tell stories from real events. Religious accounts contain much of what looks like human-generated complexity.
Here’s a somewhat rough way of estimating probabilities of unlikely events. Let’s say that an event X with P(X) = about 1-in-10 is a “lucky break.” Suppose that there are L(1) ways that Y could occur on account of a single lucky break, L(2) ways that Y could occur on account of a pair of independent lucky breaks, L(3) ways that Y could occur on account of 3 independent lucky breaks, and so on. Then P(Y) is approximately the sum over all n of L(n)/10^n. I have the feeling that arguments about whether P(Y) is small versus extremely small are arguments about the growth rate of L(n).
I discussed the problem of estimating P(“23% of humanity believes...”) here. I’d be grateful for thoughts or criticisms.
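Here is a minimal sketch of that estimate (Python; the three growth rates for L(n) are made up purely to show how much the growth rate matters):

```python
# P(Y) ~ sum over n of L(n) / 10**n, where L(n) counts the ways Y could happen
# via n independent 1-in-10 "lucky breaks".
def p_event(L, n_max=50):
    return sum(L(n) / 10 ** n for n in range(1, n_max + 1))

print(p_event(lambda n: 1))       # constant L(n): ~0.111
print(p_event(lambda n: n ** 3))  # polynomial L(n): ~0.215
print(p_event(lambda n: 2 ** n))  # exponential L(n): 0.25
```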
This is a small point but “E includes complex claim C” does not imply that the (for instance, Kolmogorov) complexity of E is as large as the Kolmogorov complexity of C. The complexity of the digits of square root of 2 is pretty small, but they contain strings of arbitrarily high complexity.
E includes C implies that K(C) ≤ K(E) + K(information needed to locate C within E). In this case K(information needed to locate C within E) seems small enough not to matter to the overall argument, which is why I left it out. (Since you said “this is a small point” I guess you probably understand and agree with this.)
Actually no I hadn’t thought of that. But I wonder if the amount of information it takes to locate “lots of people are muslims” within E is as small as you say. My particular E does not even contain that much information about Islam, and how people came to believe it, but it does contain a model of how people come to believe weird things in general. Is that a misleading way of putting things? I can’t tell.
There are some very crude sketches of shutting-up-and-multiplying, from one Christian and a couple of atheists, here (read the comments as well as the post itself), and I think there may be more with a similar flavour in other blog posts there (and their comments) from around the same time.
(The author of the blog has posted a little on LW. The two skeptics responsible for most of the comments on that post have both been quite active here. One of them still is, and is in fact posting this comment right now :-).)
Wei Dai, exactly. The point that the complexity of the thing is already included in the fact that people believe it is the point I have been making all along. Regardless of what you think the resulting probability is, most of the “evidence” for Islam consists in the very fact that some people think it is true—and as you show in your calculation, this is very strong evidence.
It seems to me that komponisto and others are taking it to be known with 100% certainty that Islam and the like were generated by some random process, and then trying to determine what the probability would be.
Now I know that most likely Mohammed was insane and in effect the Koran was in fact generated by a random process. But I certainly don’t know how you can say that the probability that it wasn’t generated randomly is 1 in 10^20 or lower. And in fact if you’re going to assign a probability like this you should have an actual calculation.
I agree that your position is analogous to “shutting up and multiplying.” But in fact, Eliezer may have been wrong about that in general—see the Lifespan Dilemma—because people’s utility functions are likely not unbounded.
In your case, I agree with shutting up and multiplying when we have a way to calculate the probabilities. In this case, we don’t, so we can’t do it. If you had a known probability (see cousin_it’s comment on the possible trillions of variants of Islam) of one in a trillion, then I would agree with walking away after seeing 30 1′s, regardless of the emotional effect of this.
But in reality, we have no such known probability. The result is that you are going to have to use some base rate: “things that people believe” or more accurately, “strange things that people believe” or whatever. In any case, whatever base rate you use, it will not have a probability anywhere near 10^-20 (i.e. more than 1 in 10^20 strange beliefs is true etc.)
My real point about the fear is that your brain doesn’t work the way your probabilities do—even if you say you are that certain, your brain isn’t. And if we had calculated the probabilities, you would be justified in ignoring your brain. But in fact, since we haven’t, your brain is more right than you are in this case. It is less certain precisely because you are simply not justified in being that certain.
At this point, if not before, I doubt Omega’s reliability, not mine.
It is a traditional feature of Omega that you have confidence 1 in its reliability and trustworthiness.
Traditions do not always make sense, neither are they necessarily passed down accurately. The original Omega, the one that appears in Newcomb’s problem, does not have to be reliable with probability 1 for that problem to be a problem.
Of course, to the purist who says that 0 and 1 are not probabilities, you’ve just sinned by talking about confidence 1, but the problem can be restated to avoid that by asking for one’s conditional probability P(Islam | Omega is and behaves as described).
In the present case, the supposition that one is faced with an overwhelming likelihood ratio raising the probability that Islam is true by an unlimited amount is just a blue tentacle scenario. Any number that anyone who agrees with the general anti-religious view common on LessWrong comes up with is going to be nonsense. Professing, say, 1 in a million for Islam on the grounds that 1 in a billion or 1 in a trillion is too small a probability for the human brain to cope with is the real cop-out, a piece of reversed stupidity with no justification of its own.
The scenario isn’t going to happen. Forcing your brain to produce an answer to the question “but what if it did?” is not necessarily going to produce a meaningful answer.
Quite true. But if you want to dispute the usefulness of this tradition, you should address the broader and older tradition of which it is an instance: that thought experiments should abstract away real-world details irrelevant to the main point.
This is a pet peeve of mine, and I’ve wanted an excuse to post this rant for a while. Don’t take it personally.
That “purist” is as completely wrong as the person who insists that there is no such thing as centrifugal force. They are ignoring the math in favor of a meme that enables them to feel smugly superior.
0 and 1 are valid probabilities in every mathematical sense: the equations of probability don’t break down when passed p=0 or p=1 the way they do with genuine nonprobabilities like −1 or 2. A probability of 0 or 1 is like a perfect vacuum: it happens not to occur in the world that we happen to inhabit, but it is perfectly well-defined, we can do math with it without any difficulty, and it is extraordinarily useful in thought experiments.
When asked to consider a spherical black body of radius one meter resting on a frictionless plane, you don’t respond “blue tentacles”, you do the math.
I agree with the rant. 0 and 1 are indeed probabilities, and saying that they are not is a misleading way of enjoining people to never rule out anything. Mathematically, P(~A|A) is zero, not epsilon, and P(A|A) is 1, not 1-epsilon. Practically, 0 and 1 in subjective judgements mean as near to 0 and 1 as makes no practical difference. When I agree a rendezvous with someone, I don’t say “there’s a 99% chance I’ll be there”, I say “I’ll be there”.
Where we part ways is in our assessment of the value of this thought-experiment. To me it abstracts and assumes away so much that what is left does not illuminate anything. I can calculate 2^{-N}, but asked how large N would have to be to persuade me of some fantastic claim backed by this fantastic machine I simply cannot name any value. I have no confidence that whatever value I named would be the value I would actually use were this impossible scenario to come to pass.
Fair enough. But if we’re doing that, I think the original question with the Omega machine abstracts too much away. Let’s consider the kind of evidence that we would actually expect to see if Islam were true.
Let us stipulate that, on the 1st of Muḥarram, a prominent ayatollah claims to have suddenly become a prophet. They go on television and answer questions on all topics. All verifiable answers they give, including those to NP-complete questions submitted for experimental purposes, turn out to be true. The new prophet asserts the validity of the Qur’an as holy scripture and of Allah as the one God.
There is a website where you can suggest questions to put to the new prophet. Not all submitted questions get answered, due to time constraints, but interesting ones do get in reasonably often. Are there any questions you’d like to ask?
I’ll give a reworded version of this, to take it out of the context of a belief system with which we are familiar. I’m not intending any mockery by this: It is to make a point about the claims and the evidence:
“Let us stipulate that, on Paris Hilton’s birthday, a prominent Paris Hilton admirer claims to have suddenly become a prophet. They go on television and answer questions on all topics. All verifiable answers they give, including those to NP-complete questions submitted for experimental purposes, turn out to be true. The new prophet asserts that Paris Hilton is a super-powerful being sent here from another world, co-existing in space with ours but at a different vibrational something or whatever. Paris Hilton has come to show us that celebrity can be fun. The entire universe is built on celebrity power. Madonna tried to teach us this when she showed us how to Vogue but we did not listen and the burden of non-celebrity energy threatens to weigh us down into the valley of mediocrity when we die instead of ascending to a higher plane where each of us gets his/her own talkshow with an army of smurfs to do our bidding. Oh, and Sesame Street is being used by the dark energy force to send evil messages into children’s feet. (The brain only appears to be the source of consciousness: Really it is the feet. Except for people with no feet. Ah! I bet you thought I didn’t think of that.) Today’s lucky food: custard.”
There is a website where you can suggest questions to put to the new prophet. Not all submitted questions get answered, due to time constraints, but interesting ones do get in reasonably often. Are there any questions you’d like to ask?”
The point I am making here is that the above narrative is absurd, and even if he can demonstrate some unusual ability with predictions or NP problems (and I admit the NP problems would really impress me), there is nothing that makes that explanation more sensible than any number of other stupid explanations. Nor does he have an automatic right to be believed: His explanation is just too stupid.
Yes—I would ask this question:
“Mr Prophet, are you claiming that there is no other theory to account for all this that has less intrinsic information content than a theory which assumes the existence of a fundamental, non-contingent mind—a mind which apparently cannot be accounted for by some theory containing less information, given that the mind is supposed to be non-contingent?”
He had better have a good answer to that: Otherwise I don’t care how many true predictions he has made or NP problems he has solved. None of that comes close to fixing the ultra-high information loading in his theory.
“The reason you feel confused is because you assume the universe must have a simple explanation.
The minimum message length necessary to describe the universe is long—long enough to contain a mind, which in fact it does. There is no fundamental reason why the Occamian prior must be appropriate. It so happens that Allah has chosen to create a world that, to a certain depth, initially appears to follow that law, but Occam will not take you all the way to the most fundamental description of reality.
I could write out the actual message description, but to demonstrate that the message contains a mind requires volumes of cognitive science that have not been developed yet. Since both the message and the proof of mind will be discovered by science within the next hundred years, I choose to spend my limited time on earth in other areas.”
Do you think that is persuasive?
It’s not sufficient to persuade me, but I do think it shows that the hypothesis is not a priori completely impossible.
A Muslim would say to him, “Mohammed (pbuh) is the Seal of the Prophets: there can be none after Him. The Tempter whispers your clever answers in your ear, and any truth in them is only a ruse and a snare!” A Christian faced with an analogous Christian prophet would denounce him as the Antichrist. I ask—not him, but you—why I should believe he is as trustworthy on religion as he is on subjects where I can test him?
I might incidentally ask him to pronounce on the validity of the hadith. I have read the Qur’an and there is remarkably little in it but exhortations to serve God.
“Also, could you settle all the schisms among those who already believe in the validity of the Qur’an as holy scripture and of Allah as the one God, and still want to bomb each other over their interpretations?”
This is sect-dependent. The Mormons would probably be quite happy to accept one provided he attained prophethood through church-approved channels.
I wasn’t aware of that particular tenet. I suppose the Very Special Person would have to identify as some other role than prophet.
If your prior includes the serious possibility of a Tempter that seems reliable until you have to trust it on something important, why couldn’t the Tempter also falsify scientific data you gather?
“Indeed, the service of God is the best of paths to walk in life.”
“Sure, that’s why I’m here. Which point of doctrine do you want to know about?”
When I condition on the existence of this impossible prophet, many improbable ideas are raised to attention, not merely the one that he asserts.
To bring the thought-experiment slightly closer to reality, aliens arrive, bringing advanced technology and religion. Do we accept the religion along with the technology? I’m sure science fiction has covered that one umpteen times, but the scenario has already been played out in history, with European civilisation as the aliens. They might have some things worth taking regarding how people should deal with each other, but strange people from far away with magic toys are no basis for taking spooks any more seriously.
I find the alien argument very persuasive.
Suppose a server appeared on the internet relaying messages from someone claiming to be the sysadmin of the simulation we’re living in, and asking that we refrain from certain types of behavior because it’s making his job difficult. Is there any set of evidence that would persuade you to go along with the requests, and how would the necessary degree of evidence scale with the inconvenience of the requests?
That should be a very easy claim to prove, actually. If someone really were the sysadmin of the universe, they could easily do a wide variety of impossible things that anyone could verify. For example, they could write their message in the sky with a special kind of photon that magically violates the laws of physics in an obvious way (say, for example, it interacts with all elements normally except one which it inexplicably doesn’t interact with at all). Or find/replace their message into the genome of a designated species. Or graffiti it onto every large surface in the world simultaneously.
Of course, there would be no way to distinguish a proper sysadmin of the universe from someone who had gotten root access improperly, either from the simulated universe, the parent universe, or some other universe. And this does raise a problem for any direct evidence in support of a religion—no matter how strong the evidence gets, the possibility that someone has gained the ability to generate arbitrarily much fake evidence, or to reliably deceive you somehow, can never be ruled out; so anything with a significantly lower prior probability than that hypothesis is fundamentally impossible to prove. Most or all religions have a smaller prior probability than the “someone has gained magical evidence-forging powers and is using them” hypothesis, and as a result, even if strong evidence for them were to suddenly start appearing (which it hasn’t), that still wouldn’t be enough to prove them correct.
I still have a basic problem with the method of posing questions about possibilities I currently consider fantastically improbable. My uncertainty about how I would deal with the situation goes up with its improbability, and what I would actually do will be determined largely by details absent from the description of the improbable scenario.
It is as if my current view of the world—that is, my assignments of probabilities to everything—is a digital photograph of a certain resolution. When I focus on vastly improbable possibilities, it is as if I inspect a tiny area of the photograph, only a few pixels wide, and try to say what is depicted there. I can put that handful of pixels through my best image-processing algorithms, but all I’m going to get back is noise.
Can you consider hypothetical worlds with entirely different histories from ours? Rather than trying to update based on your current state of knowledge, with mountains of cumulative experience pointing a certain way, imagine what that mountainous evidence could have been in a deeply different world than this one.
For example, suppose the simulation sysadmin had been in active communication with us since before recorded history, and was commonplace knowledge casually accepted as mere fact, and the rest of the world looked different in the ways we would expect such a world to.
In other words, can I read fiction? Yes, but I don’t see where this is going.
Unwinding the thread backwards, I see that my comment strayed into irrelevance from the original point, so never mind.
I would like to ask you this, though: of all the people on Earth who feel as sure as you do about the truth or falsehood of various religions, what proportion do you think are actually right? If your confidence in your beliefs regarding religion is a larger number than this, then what additional evidence do you have that makes you think you’re special?
Yes. Rationalists believe in Omega (scnr).
This is a copout.
You’ve asked us to take our very small number, and imagine it doubling 66 times. I agree that there is a punch to what you say—no number, no matter how small, could remain small after being doubled 66 times! But in fact long ago Archimedes made a compelling case that there are such numbers.
Now, it’s possible that Archimedes was wrong and something like ultrafinitism is true. I take ultrafinitist ideas quite seriously, and if they are correct then there are a lot of things that we will have to rethink. But Islam is not close to the top of the list of things we should rethink first.
Maybe there’s a kind of meta claim here: conditional on probability theory being a coherent way to discuss claims like “Islam is true,” the probability that Islam is true really is that small.
I just want to know what you would actually do, in that situation, if it happened to you tomorrow. How many 1′s would you wait for, before you became a Muslim?
Also, “there are such numbers” is very far from “we should use such numbers as probabilities when talking about claims that many people think are true.” The latter is an extremely strong claim and would therefore need extremely strong evidence before being acceptable.
I think after somewhere between 30 and 300 coin flips, I would convert. With more thought and more details about what package of claims is meant by “Islam,” I could give you a better estimate. Escape routes that I’m not taking: I would start to suspect Omega was pulling my leg; I would start to suspect that I was insane; I would start to suspect that everything I knew was wrong, including the tenets of Islam. If answers like these are copouts—if Omega is so reliable, and I am so sane, and so on—then it doesn’t seem like much of a bullet to bite to say “yes, 2^-30 is very small but it is still larger than 2^-66; yes, something very unlikely has happened, but not as unlikely as Islam.”
If you’re expressing doubts about numbers being a good measure of beliefs, I’m totally with you! But we only need strong evidence for something to be acceptable if there are some alternatives—sometimes you’re stuck with a bad option. Somebody’s handed us a mathematical formalism for talking about probabilities, and it works pretty well. But it has a funny aspect: we can take a handful of medium-sized probabilities, multiply them together, and the result is a tiny tiny probability. Can anything be as unlikely as the formalism says 66 heads in a row is? I’m not saying you should say “yes,” but if your response is “well, whenever something that small comes up in practice, I’ll just round up,” that’s a patch that is going to spring leaks.
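To make that funny aspect concrete, here is a toy calculation (the medium-sized probabilities are made up):

```python
# A handful of made-up medium-sized probabilities, multiplied together,
# is already a tiny number.
medium_sized = [0.2, 0.1, 0.05, 0.03, 0.01]
product = 1.0
for p in medium_sized:
    product *= p
print(product)      # ~3e-07: already below "one in a million"

print(0.5 ** 66)    # ~1.4e-20: what the formalism says about 66 heads in a row
```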
Another point, regarding this:
Originally I didn’t intend to bring up Pascal’s Wager type considerations here because I thought it would just confuse the issue of the probability. But I’ve rethought this—actually this issue could help to show just how strong your beliefs are in reality.
Suppose you had said in advance that the probability of Islam was 10^-20. Then you had this experience, but the machine was shut off after 30 1′s (a chance of one in a billion). The chance that Islam is true is now one in a hundred billion, updated from your prior.
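Here is a minimal sketch of that update, assuming the machine outputs a 1 with certainty on each flip if Islam is true, and a fair random digit otherwise:

```python
# Rough Bayes update for the scenario above (the likelihoods are the
# stipulated ones, not estimates of anything real).
prior = 1e-20                        # the probability you stated in advance
p_thirty_ones_if_true = 1.0          # P(thirty 1's | Islam true)
p_thirty_ones_if_false = 0.5 ** 30   # ~9.3e-10, the one-in-a-billion fluke

posterior = (p_thirty_ones_if_true * prior) / (
    p_thirty_ones_if_true * prior + p_thirty_ones_if_false * (1 - prior)
)
print(posterior)                     # ~1.1e-11: roughly one in a hundred billion
```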
If this actually happened to you, and you walked away and did not convert, would you have some fear of being condemned to hell for seeing this and not converting? Even a little bit of fear? If you would, then your probability that Islam is true must be much higher than 10^-20, since we’re not afraid of things that have a one in a hundred billion chance of happening.
This is false.
I must confess that I am sometimes afraid that ghosts will jump out of the shadows and attack me at night, and I would assign a much lower chance of that happening. I have also been afraid of velociraptors. Fear is frequently irrational.
You are technically correct. My actual point was that your brain does not accept that the probability is that low. And as I stated in one of the replies, you might in some cases have reasons to say your brain is wrong… just not in this case. No one here has given any reason to think that.
It’s good you managed some sort of answer to this. However, 30 to 300 is quite a wide range: from 1 in 10^9 to 1 in 10^90. If you’re going to hope for any sort of calibration at all in using numbers like this, you’re going to have to be much more precise...
I wasn’t expressing doubts about numbers being a measure of beliefs (although you could certainly question this as well), but about extreme numbers being a measure of our beliefs, which do not seem able to be that extreme. Yes, if you have a large number of independent probabilities, the result can be extreme. And supposedly, the basis for saying that Islam (or reincarnation, or whatever) is very improbable would be the complexity of the claim. But who has really determined how much complexity it has? As I pointed out elsewhere (on the “Believable Bible” comment thread), a few statements, if we knew them to be true, would justify Islam or any other such thing. Which particular statements would we need, and how complex are those statements, really? No one has determined them to any degree of precision, and until they do, you have to use something like a base rate. Just as astronomers start out with fairly high probabilities for the collision of near-earth asteroids, and only end up with low probabilities after very careful calculation, you would have to start out with a fairly high prior for Islam, or reincarnation, or whatever, and you would only be justified in holding an extreme probability after careful calculation… which I don’t believe you’ve done. Certainly I haven’t.
Apart from the complexity, there is also the issue of evidence. We’ve been assuming all along that there is no evidence for Islam, or reincarnation, or whatever. Certainly it’s true that there isn’t much. But that there is literally no evidence for such things simply isn’t so. The main thing is that we aren’t motivated to look at the little evidence that there is. But if you intend to assign probabilities to that degree of precision, you are going to have to take into account every speck of evidence.
I thought the salient feature of Islam was that many people believed it, not that it has less complexity than I thought, or more evidence in its favor than I thought. That might be, but I’m not interested in discussing it.
I don’t “feel” beliefs strongly or weakly. Sometimes probability calculations help me with fear and other emotions, sometimes they don’t. Again, I’m not interested in discussing it.
So tell me something about how important it is that many people believe in Islam.
I’m not interested in discussing Islam either… those points apply to anything that people believe. But that’s why it’s relevant to the question of belief: if you take something that people don’t believe, it can be arbitrarily complex, or 100% lacking in evidence (like Russell’s teapot), but things that people believe do not have these properties.
It’s not important how many people believe it. It could be just 50 people and the probability would not be much different (as long as the belief was logically consistent with the fact that just a few people believed it.)
So tell me why. By “complex” do you just mean “low probability,” or some notion from information theory? How did you come to believe that people cannot believe things that are too complex?
I just realized that you may have misunderstood my original point completely. Otherwise you wouldn’t have said this: “I thought the salient feature of Islam was that many people believed it, not that it has less complexity than I thought, or more evidence in its favor than I thought.”
I only used the idea of complexity because that was komponisto’s criterion for the low probability of such claims. The basic idea is people believe things that their priors say do not have too low a probability: but as I showed in the post on Occam’s razor, everyone’s prior is a kind of simplicity prior, even if they are not all identical (nor necessarily particularly related to information theory or whatever.)
Basically, a probability is determined by the prior and by the evidence that it is updated according to. The only reason things are more probable if people believe them is that a person’s belief indicates that there is some human prior according to which the thing is not too improbable, and some evidence and way of updating that can give the thing a reasonable probability. So other people’s beliefs are evidence for us only because they stand in for the other people’s priors and evidence. So it’s not that it is “important that many people believe” apart from the factors that give it probability: the belief is just a sign that those factors are there.
Going back to the distinction you didn’t like, between a fixed probability device and a real world claim, a fixed probability device would be a situation where the prior and the evidence are completely fixed and known: with the example I used before, let there be a lottery that has a known probability of winning of one in a trillion. Then since the prior and the evidence are already known, the probability is still one in a trillion, even if someone says he is definitely going to win it.
In a real world claim, on the other hand, the priors are not well known, and the evidence is not well known. And if I find out that someone believes it, I immediately know that there are humanly possible priors and evidence that can lead to that belief, which makes it much more probable even for me than it would be otherwise.
This sounds like you are updating. We have a formula for what happens when you update, and it indeed says that given evidence, something becomes more probable. You are saying that it becomes much more probable. What quantity in Bayes formula seems especially large to you, and why?
What Wei Dai said.
In other words, as I said before, the probability that people believe something shouldn’t be that much more than the probability that the thing is true.
What about the conjunction fallacy?
The probability that people will believe a long conjunction is lower than the probability that they will believe one part of the conjunction (because in order to believe both parts, they have to believe each part; in other words, for the same reason that the conjunction fallacy is a fallacy).
The conjunction fallacy is the assignment of a higher probability to some statement of the form A&B than to the statement A. It is well established that for certain kinds of A and B, this happens.
The fallacy in your proof that this cannot happen is that you have misstated what the conjunction fallacy is.
My point in mentioning it is that people committing the fallacy believe a logical impossibility. You can’t get much more improbable than a logical impossibility. But the conjunction fallacy experiments demonstrate that it is common to believe such things.
Therefore, the improbability of a statement does not imply the improbability of someone believing it. This refutes your contention that “the probability that people believe something shouldn’t be that much more than the probability that the thing is true.” The possible difference between the two is demonstrably larger than the range of improbabilities that people can intuitively grasp.
I wish I had thought of this.
You said it before, but you didn’t defend it.
Wei Dai did, and I defended it by referencing his position.
In that case I am misunderstanding Wei Dai’s point. He says that complexity considerations alone can’t tell you that probability is small, because complexity appears in the numerator and the denominator. I will need to see more math (which I guess cousin it is taking care of) before understanding and agreeing with this point. But even granting it I don’t see how it implies that P(many believe H)/P(H) is for all H less than one billion.
Thank you a lot for posting this scenario.
Imagine there are a trillion variants of Islam, differing by one sentence in the holy book or something. At most one of them can be true. You pick one variant at random, test it with your machine and get 30 1′s in a row. Now you should be damn convinced that you picked the true one, right? Wrong. Getting this result by a fluke is ~1000x more likely than picking the true variant in the first place. Probability is unintuitive and our brains are mush, that’s all I’m sayin’.
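Spelling out the arithmetic with the stipulated numbers:

```python
# A trillion variants, at most one true; a false variant passes the
# 30-flip test by sheer luck with probability 2^-30.
p_picked_true_variant = 1e-12    # chance you picked the true variant at random
p_fluke = 0.5 ** 30              # ~9.3e-10: chance of thirty 1's anyway

print(p_fluke / p_picked_true_variant)   # ~931: the fluke is about 1000x likelier
```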
Islam isn’t a true religion.
Complete agreement, but downvoted for making comments that don’t promote paperclips.
I think Clippy was just testing whether ve’d successfully promoted that to a community norm.
The product of two probabilities above your threshold-for-overconfidence can be below your threshold-for-overconfidence. Have you at least thought this through before?
For instance, the claim “there is a God” is not that much less spectacular than the claim “there is a God, and he’s going to make the next 1000 times you flip a coin turn up heads.” If one-in-a-billion is a lower bound for the probability that God exists, then one-in-a-billion-squared is a generous lower bound for the probability that the next 1000 times you flip a coin will turn up heads. (One-in-a-billion-squared is about 2-to-the-minus-sixty.) You’re OK with that?
Yes. As long as you think of some not-too-complicated scenario where the one would lead to the other, that’s perfectly reasonable. For example, God might exist and decide to prove it to you by effecting that prediction. I certainly agree this has a probability of at least one in a billion squared. In fact, suppose you actually get heads the next 60 times you flip a coin, even though you are choosing different coins, flipping on different days, and so on. By that point you will be quite convinced that the heads are not independent, and that there is quite a good chance that you will get 1000 heads in a row.
It would be different of course if you picked a random series of heads and tails: in that case you still might say that there is at least that probability that someone else will do it (because God might make that happen), but you surely cannot say that it had that probability before you picked the random series.
This is related to what I said in the torture discussion, namely that explicitly describing a scenario automatically makes it far more likely to actually happen than it was before you described it. So it isn’t a problem if 1000 heads in a row is more likely than 1 in 2-to-the-1000. Any series you can mention would be more likely than that, once you have mentioned it.
Also, note that there isn’t a problem if the probability of 1000 heads in a row is lower than one in a billion, because when I made the general claim, I said “a claim that a significant number of people accept as likely true,” and no one expects to get the 1000 heads.
Probabilities should sum to 1. You’re saying moreover that probabilities should not be lower than some threshold. Can I get you to admit that there’s a math issue here that you can’t wave away, without trying to fine-tune my examples? If you claim you can solve this math issue, great, but say so.
Edit: −1 because I’m being rude? Sorry if so, the tone does seem inappropriately punchy to me now. −1 because I’m being stupid? Tell me how!
I set a lower bound of one in a billion on the probability of “a natural language claim that a significant number of people accept as likely true”. The number of such mutually exclusive claims is surely far less than a billion, so the math issue will resolve easily.
Yes, it is easy to find more than a billion claims, even ones that some people consider true, but they are not mutually exclusive claims. Likewise, it is easy to find more than a billion mutually exclusive claims, but they are not ones that people believe to be true, e.g. no one expects 1000 heads in a row, no one expects a sequence of five hundred successive heads-tails pairs, and so on.
I didn’t downvote you.
Maybe I see. You are updating on the fact that many people believe something, and are saying that P(A|many people believe A) should not be too small. Do you agree with that characterization of your argument?
In that case, we will profitably distinguish between P(A|no information about how many people believe A) and P(A|many people believe A). Is there a compact way that I can communicate something like “Excepting/not updating on other people’s beliefs, P(God exists) is very small”? If I said something like that would you still think I was being overconfident?
This is basically right, although in fact it is not very profitable to speak of what the probability would be if we didn’t have some of the information that we actually have. For example, the probability of this sequence of ones and zeros -- 0101011011101110 0010110111101010 0100010001010110 1010110111001100 1110010101010000 -- being chosen randomly, before anyone has mentioned this particular sequence, is one out of 2 to the 80. Yet I chose it randomly, using a random number generator (not a pseudo-random number generator, either). But I doubt that you will conclude that I am certainly lying, or that you are hallucinating. Rather, as Robin Hanson points out, extraordinary claims are extraordinary evidence. The very fact that I wrote down this improbable sequence is extraordinarily strong evidence that I chose it randomly, despite the huge improbability of that random choice. In a similar way, religious claims are extremely strong evidence in favor of what they claim; naturally, just as you would never believe that I might have chosen that number randomly if I hadn’t written it down, so too, if people didn’t make religious claims, you would rightly think those claims extremely improbable.
It is always profitable to give different concepts different names.
Let GM be the assertion that I’ll one day play guitar on the moon. Your claim is that this ratio
P(GM|I raised GM as a possibility)/P(GM)
is enormous. Bayes theorem says that this is the same as
P(I raised GM as a possibility|GM)/P(I raised GM as a possibility)
so that this second ratio is also enormous. But it seems to me that both numerator and denominator in this second ratio are pretty medium-scale numbers—in particular the denominator is not minuscule. Doesn’t this defeat your idea?
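For concreteness, a quick numerical check of that identity, with made-up numbers rather than estimates of the real quantities:

```python
# Bayes theorem says P(GM|raised)/P(GM) equals P(raised|GM)/P(raised).
p_gm = 1e-12                  # P(GM), invented
p_raised_given_gm = 0.6       # P(I raised GM as a possibility | GM), invented
p_raised = 2e-5               # P(I raised GM as a possibility), invented

p_gm_given_raised = p_raised_given_gm * p_gm / p_raised   # Bayes theorem

ratio_1 = p_gm_given_raised / p_gm       # P(GM | raised) / P(GM)
ratio_2 = p_raised_given_gm / p_raised   # P(raised | GM) / P(raised)
print(ratio_1, ratio_2)                  # both ~30000: the two ratios agree
```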
The evidence contained in your asserting GM would be much stronger than the evidence contained in your raising the possibility.
Still, there is a good deal of evidence contained in your raising the possibility. Consider the second ratio: the numerator is quite high, probably more than .5, since in order to play guitar on the moon, you would have to bring a guitar there, which means you’d probably be thinking about it.
The denominator is in fact quite small. If you randomly raise one outlandish possibility of performing some action in some place, each day for 50 years, and there are 10,000 different actions (I would say there are at least that many), and 100,000 different places, then the probability of raising the possibility will be 18,250/(10,000 x 100,000), which is 0.00001825, which is fairly small. The actual probability is likely to be even lower, since you may not be bringing up such possibilities every day for 50 years. Religious claims are typically even more complicated than the guitar claim, so the probability of raising their possibility is even lower.
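As a quick sketch of that base rate (the counts of actions and places are, of course, just guesses):

```python
# One outlandish possibility raised per day for 50 years, spread over
# guessed-at numbers of actions and places.
days_of_raising = 50 * 365       # 18,250
actions = 10_000
places = 100_000

print(days_of_raising / (actions * places))   # 1.825e-05
```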
--one more thing: I say that raising the possibility is strong evidence, not that the resulting probability is high: it may start out extremely low and end up still very, very low, going from say one in a googol to one in a sextillion or so. It is when you actually assert that it’s true that you raise the probability to something like one in a billion or even one in a million. Note however that you can’t refute me by now going on to assert that you intend to play a guitar on the moon; if you read Hanson’s article in my previous link, you’ll see that he shows that assertions are weak evidence in particular cases, namely in ones in which people are especially likely to lie: and this would be one of them, since we’re arguing about it. So in this particular case, if you asserted that you intended to do so, it would only raise the probability by a very small amount.
I understand that you think the lower bound on probabilities for things-that-are-believed is higher than the lower bound on probabilities for things-that-are-raised-as-possibilities. I am fairly confident that I can change your mind (that is, convince you not to impose lower bounds like this at all), and even more confident that I can convince you that imposing lower bounds like this is mathematically problematic (that is, there are bullets to be bitten) in ways that hadn’t occurred to you a few days ago.
I do not see one of these bounds as more or less sound than the other, but am focusing on the things-that-are-raised-as-possibilities bound because I think the discussion will go faster there.
More soon, but tell me if you think I’ve misunderstood you, or if you think you can anticipate my arguments. I would also be grateful to hear from whoever is downvoting these comments.
Note that I said there should be a lower bound on the probability for things that people believe, and even made it specific: something on the order of one in a billion. But I don’t recall saying (you can point it out if I’m wrong) that there is a lower bound on the probability of things that are raised as possibilities. Rather, I merely said that the probability is vastly increased.
To the comment here, I responded that raising the possibility raised the probability of the thing happening by orders of magnitude. But I didn’t say that the resulting probability was high, in fact it remains very low. Since there is no lower bound on probabilities in general, there is still no lower bound on probabilities after raising them by orders of magnitude, which is what happens when you raise the possibility.
So if you take my position to imply such a lower bound, either I’ve misstated my position accidentally, or you have misunderstood it.
I did misunderstand you, and it might change things; I will have to think. But now your positions seem less coherent to me, and I no longer have a model of how you came to believe them. Tell me more:
Let CM(n) be the assertion “one day I’ll play guitar on the moon, and then flip an n-sided coin and it will come up heads.” The point being that P(CM(n)) is proportional to 1/n. Consider the following ratios:
R1(n) = P(CM(n)|CM(n) is raised as a possibility)/P(CM(n))
R2(n) = P(CM(n)|CM(n) is raised as a possibility by a significant number of people)/P(CM(n))
R3(n) = P(CM(n)|CM(n) is believed by one person)/P(CM(n))
R4(n) = P(CM(n)|CM(n) is believed by a significant number of people)/P(CM(n))
How do you think these ratios change as n grows? Before I had assumed you thought that ratios 1. and 4. grew to infinity as n did. I still understand you to be saying that for 4. Are you now denying it for 1., or just saying that 1. grows more slowly? I can’t guess what you believe about 2. and 3.
First we need to decide on the meaning of “flip an n-sided coin and it will come up heads”. You might mean this as:
1) a real world claim; or 2) a fixed probability device
To illustrate: if I assert, “I happen to know that I will win the lottery tomorrow,” this greatly increases the chance that it will happen, among other reasons, because of the possibility that I am saying this because I happen to have cheated and fixed things so that I will win. This would be an example of a real world claim.
On the other hand, if it is given that I will play the lottery, and given that the chance of winning is one in a trillion, as a fixed fact, then if I say, “I will win,” the probability is precisely one in a trillion, by definition. This is a fixed probability device.
In the real world there are no fixed probability devices, but there are situations where things are close enough to that situation that I can mathematically calculate a probability, even one which will break the bound of one in a billion, and even when people believe it. This is why I qualified my original claim with “Unless you have actually calculated the probability...” So in order to discuss my claim at all, we need to exclude the fixed probability device and only consider real world claims. In this case, the probability P(CM(n)) is not exactly proportional to 1/n. However, it is true that this probability goes to zero as n goes to infinity.
In fact, all of these probabilities go to zero as n goes to infinity:
P(CM(n))
P(CM(n) is raised as a possibility)
P(CM(n) is believed, by one or many persons)
The reason these probabilities go to zero can be found in my post on Occam’s razor.
Given this fact (that all the probabilities go to zero), I am unsure about the behavior of your cases 1 & 2. I’ll leave 3 for another time, and say that case 4, again remembering that we take it as a real world claim, does go to infinity, since the numerator remains at no less than 1 in a billion, while the denominator goes to zero.
One more note about my original claim: if you ask how I arrived at the one in a billion figure, it is somewhat related to the earth’s actual population. If the population were a googolplex, a far larger number of mutually exclusive claims would be believed by a significant number of people, and so the lower bound would be much lower. Finally, I don’t understand why you say my positions are “less coherent”, when I denied the position that, as you were about to point out, leads to mathematical inconsistency. This should make my position more coherent, not less.
It’s my map of your beliefs that became less coherent, not your actual beliefs. (Not necessarily!) As you know, I’ve thought your beliefs are mistaken from the beginning.
Note that I’m asking about a limit of ratios, not a ratio of limits. Actually, I’m not even asking you about the limits—I’d prefer some rough information about how those ratios change as n grows. (Are they bounded above? Do they grow linearly or logarithmically or what?) If you don’t know, why not?
This is bad form. Phrases like “unless you have actually computed the probability...”, “real world claim”, “natural language claim”, “significant number of people” are slippery. We can talk about real-world examples after you explain to me how your reasoning works in a more abstract setting. Otherwise you’re just reserving the right to dismiss arguments (and even numbers!) on the basis that they feel wrong to you on a gut level.
Edit: It’s not that I think it’s always illegitimate to refer to your gut. It’s just bad form to claim that such references are based on mathematics.
Edit 2: Can I sidestep this discussion by saying “Let CM(n) be any real world claim with P(CM(n)) = 1/n”?
My original claim was
I nowhere stated that this was “based on mathematics.” It is naturally related to mathematics, and mathematics puts some constraints on it, as I have been trying to explain. But I didn’t come up with it in the first place in a purely mathematical way. So if this is bad form, it must be bad form to say what I mean instead of something else.
I could accept what you say in Edit 2 with these qualifications: first, since we are talking about “real world claims”, the probability 1/n does not necessarily remain fixed when someone brings up the possibility or asserts that the thing is so. This probability 1/n is only a prior, before the possibility has been raised or the thing asserted. Second, since it isn’t clear what “n” is doing, CM(5), CM(6), CM(7) and so on might be claims which are very different from one another.
I am not sure about the behavior of the ratios 1 and 2, especially given the second qualification here (in other words the ratios might not be well-behaved at all). And I don’t see how I need to say “why not?” What is there in my account which should tell me how these ratios behave? But my best guess for the moment, after thinking about it some more, would be the first ratio probably goes to infinity, but not as quickly as the fourth. What leads me to think this is something along the lines of this comment thread. For example, in my Scientology example, even if no one held that Scientology was true, but everyone admitted that it was just a story, the discovery of a real Xenu would greatly increase the probability that it was true anyway; although naturally not as much as given people’s belief in it, since without the belief, there would be a significantly greater probability that Scientology is still a mere story, but partly based on fact. So this suggests there may be a similar bound on things-which-have-been-raised-as-possibilities, even if much lower than the bound for things which are believed. Or if there isn’t a lower bound, such things are still likely to decrease in probability slowly enough to cause ratio 1 to go to infinity.
Ugly and condescending of me, beg your pardon.
You responded positively to my suggestion that we could phrase this notion of “overconfidence” as “failure to update on other people’s beliefs,” indicating that you know how to update on other people’s beliefs. At the very least, this requires some rough quantitative understanding of the players in Bayes formula, which you don’t seem to have.
If overconfidence is not “failure to update on other people’s beliefs,” then what is it?
Here’s the abbreviated version of the conversation that led us here (right?).
S: God exists with very low probability, less that one in a zillion.
U: No, you are being overconfident. After all, billions of people believe in God, you need to take that into account somehow. Surely the probability is greater than one in a billion.
S: OK I agree that the fact that billions of people believing it constitutes evidence, but surely not evidence so strong as to get from 1-in-a-zillion to 1-in-a-billion.
Now what? Bayes theorem provides a mathematical formalism for relating evidence to probabilities, but you are saying that all four quantities in the relevant Bayes formula are too poorly understood for it to be of use. So what’s an alternative way to arrive at your one-in-a-billion figure? Or are you willing to withdraw your accusation that I’m being overconfident?
I did not say that “all four quantities in the relevant Bayes formula are too poorly understood for it to be of use.” Note that I explicitly asserted that your fourth ratio tends to infinity, and that your first one likely does as well.
If you read the linked comment thread and the Scientology example, that should make it clear why I think that the evidence might well be strong enough to go from 1 in a zillion to 1 in a billion. In fact, that should even be clear from my example of the random 80 digit binary number. Suppose instead of telling you that I chose the number randomly, I said, “I may or may not have chosen this number randomly.” This would be merely raising the possibility—the possibility of something which has a prior of 2^-80. But if I then went on to say that I had indeed chosen it randomly, you would not therefore have called me a liar, though you would if I now chose another random 80 digit number and said that it was the same one. This shows that even raising the possibility provides almost all the evidence necessary—it brings the probability that I chose the number randomly all the way from 2^-80 up to some ordinary probability, or from “1 in a zillion” to something significantly above one in a billion.
More is involved in the case of belief, but I need to be sure that you get this point first.
Let’s consider two situations:
For each 80-digit binary number X, let N(X) be the assertion “Unknowns picked an 80-digit number at random, and it was X.” In my ledger of probabilities, I dutifully fill in, for each of these statements X, 2^{-80} in the P column. Now for a particular 80-digit number Y, I am told that “Unknowns claims he picked an 80-digit number at random, and it was Y”—call that statement U(Y) -- and am asked for P(N(Y)|U(Y)).
My answer: pretty high by Bayes formula. P(U(Y)|N(Y)) is pretty high because Unknowns is trustworthy, and my ledger has P(U(Y)) = a number on the same order as two-to-the-minus-eighty. (Caveat: P(U(Y)) is a lot higher for highly structured things like the sequence of all 1′s. But for the vast majority of Y I have P(U(Y)) = 2^-80 times something between (say) 10^-1 and 10^-6). So P(N(Y)|U(Y)) = P(U(Y)|N(Y)) x [P(N(Y))/P(U(Y))] is a big probability times a medium-sized factor.
What’s your answer?
Reincarnation is explained to me, and I am asked for my opinion of how likely it is. I respond with P(R), a good faith estimate based on my experience and judgement. I am then told that hundreds of millions of people believe in reincarnation—call that statement B, and assume that I was ignorant of it before—and am asked for P(R|B). Your claim is that no matter how small P(R) is, P(R|B) should be larger than some threshold t. Correct?
Some manipulation with Bayes formula shows that your claim (what I understand to be your claim) is equivalent to this inequality:
P(B) < P(R) / t
That is, I am “overconfident” if I think that the probability of someone believing in reincarnation is larger than some fixed multiple of the probability that reincarnation is actually true. Moreover, though I assume (sic) you think t is sensitive to the quantity “hundreds of millions”—e.g. that it would be smaller if it were just “hundreds”—you do not think that t is sensitive to the statement R. R could be replaced by another religious claim, or by the claim that I just flipped a coin 80 times and the sequence of heads and tails was [whatever].
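Here is a numerical spot-check of that manipulation; it verifies the direction from the threshold claim to the inequality, using only the fact that P(B|R) is at most 1 (all the sampled numbers are arbitrary):

```python
# Whenever P(R|B) >= t, it should follow that P(B) <= P(R) / t.
import random

t = 1e-9
violations = 0
for _ in range(100_000):
    p_r = 10 ** random.uniform(-30, 0)              # P(R)
    p_b_given_r = 10 ** random.uniform(-12, 0)      # P(B | R)
    p_b_given_not_r = 10 ** random.uniform(-12, 0)  # P(B | not R)
    p_b = p_b_given_r * p_r + p_b_given_not_r * (1 - p_r)
    p_r_given_b = p_b_given_r * p_r / p_b           # Bayes theorem
    if p_r_given_b >= t and p_b > p_r / t:
        violations += 1
print(violations)   # 0: no counterexamples
```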
My position: I think it’s perfectly reasonable to assume that P(B) is quite a lot larger than P(R). What’s your position?
Your analysis is basically correct, i.e. I think it is overconfident to say that the probability P(B) is greater than P(R) by more than a certain factor, in particular because if you make it much greater, there is basically no way for you to be well calibrated in your opinions—because you are just as human as the people who believe those things. More on that later.
For now, I would like to see your response to the question in my comment to komponisto (i.e., how many 1′s do you wait for).
I have been using “now you are saying” as short for “now I understand you to be saying.” I think this may be causing confusion, and I’ll try write more carefully.
More soon.
My estimate does come from some effort at calibration, although there’s certainly more that I could do. Maybe I should have qualified my statement by saying “this estimate may be a gross overestimate or a gross underestimate.”
In any case, I was not being disingenuous or flippant. I have carefully considered the question of how likely it is that Eliezer will be able to play a crucial role in an FAI project if he continues to exhibit a strategy qualitatively similar to his current one, and my main objection to SIAI’s strategy is that I think it extremely unlikely that Eliezer will be able to have an impact if he proceeds as he has up until this point.
I will be detailing why I don’t think that Eliezer’s present strategy for working toward an FAI is a fruitful one in a later top-level post.
It sounds, then, like you’re averaging probabilities geometrically rather than arithmetically. This is bad!
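Roughly what I mean, with invented candidate probabilities:

```python
# How the two ways of averaging diverge over a spread of candidates.
candidates = [1e-2, 1e-6, 1e-10, 1e-14, 1e-18]

arithmetic_mean = sum(candidates) / len(candidates)

geometric_mean = 1.0
for p in candidates:
    geometric_mean *= p
geometric_mean **= 1.0 / len(candidates)

print(arithmetic_mean)   # ~2e-03: dominated by the most optimistic candidate
print(geometric_mean)    # ~1e-10: dragged down by the most pessimistic ones
```

An arithmetic average only comes out near 10^(-9) if every candidate probability is already about that small.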
I understand your position and believe that it’s fundamentally unsound. I will have more to say about this later.
For now I’ll just say that the arithmetical average of the probabilities that I imagine I might ascribe to Eliezer’s current strategy resulting in an FAI is 10^(-9).
On the other hand, assuming he knows what it means to assign something a 10^-9 probability, it sounds like he’s offering you a bet at 1000000000:1 odds in your favour. It’s a good deal, you should take it.
Indeed. I do not know how many people are actively involved in FAI research, but I would guess that it is only in the dozens to hundreds. Given the small pool of competition, it seems likely that at some point Eliezer will make, or already has made, a unique contribution to the field. Get Multi to put some money on it: offer him 1 cent if you do not make a useful contribution in the next 50 years, and if you do, he can pay you 10 million dollars.
I don’t understand this remark.
What probability do you assign to your succeeding in playing a critical role on the Friendly AI project that you’re working on? I can engage with a specific number. I don’t know if your object is that my estimate is off by a single of order of magnitude or by many orders of magnitude.
I should clarify that my comment applies equally to AGI.
I think that I know the scientific community better than you, and have confidence that if creating an AGI was as easy as you seem to think it is (how easy I don’t know because you didn’t give a number) then there would be people in the scientific community who would be working on AGI.
Yes, this possibility has certainly occurred to me. I just don’t know what your different non-crazy beliefs might be.
Why do you think that AGI research is so uncommon within academia if it’s so easy to create an AGI?
This question sounds disingenuous to me. There is a large gap between “10^-9 chance of Eliezer accomplishing it” and “so easy for the average machine learning PhD.” Whatever else you think about him, he’s proved himself to be at least one or two standard deviations above the average PhD in ability to get things done, and some dimension of rationality/intelligence/smartness.
My remark was genuine. Two points:
I think that the chance that any group of the size of SIAI will develop AGI over the next 50 years is quite small.
Eliezer has not proved himself to be at the same level as the average machine learning PhD at getting things done. As far as I know he has no experience with narrow AI research. I see familiarity with narrow AI as a prerequisite to AGI research.
He actually stated that himself several times.
Yes, OK, this does not mean his intellectual power isn’t on par, but it does bear on his ability to function in an academic environment.
Well...
Most things can be studied through the use of textbooks. Some familiarity with AI is certainly helpful, but it seems that most AI-related knowledge is not on the track to FAI (and most current AGI stuff is nonsense or even madness).
The reason that I see familiarity with narrow AI as a prerequisite to AGI research is to get a sense of the difficulties present in designing machines to complete certain mundane tasks. My thinking is the same as that of Scott Aaronson in his The Singularity Is Far posting: “there are vastly easier prerequisite questions that we already don’t know how to answer.”
FAI research is not AGI research, at least not at present, when we still don’t know what it is exactly that our AGI will need to work towards, how to formally define human preference.
So, my impression is that you and Eliezer have different views of this matter. My impression is that Eliezer’s goal is for SIAI to actually build an AGI unilaterally. That’s where my low probability was coming from.
It seems much more feasible to develop a definition of friendliness and then get governments to mandate that it be implemented in any AI or something like that.
As I’ve said, I find your position sophisticated and respect it. I have to think more about your present point—reflecting on it may indeed alter my thinking about this matter.
Still, build AGI eventually, and not now. Expertise in AI/AGI is of low relevance at present.
It seems obviously infeasible to me that governments will chance upon this level of rationality. Also, we are clearly not on the same page if you say things like “implement in any AI”. Friendliness is not to be “installed in AIs”, Friendliness is the AI (modulo initial optimizations necessary to get the algorithm going and self-optimizing, however fast or slow that’s possible). The AGI part of FAI is exclusively about optimizing the definition of Friendliness (as an algorithm), not about building individual AIs with standardized goals.
See also this post for a longer explanation of why weak-minded AIs are not fit to carry the definition of Friendliness. In short, such AIs are (in principle) as much an existential danger as human AI researchers.
I wonder if we systematically underestimate the level of rationality of major governments. Historically, they haven’t done that badly. From an article about RAND:
(Huh, this is the first time I’ve heard of the Delphi Method.) Many of the big names in game theory (von Neumann, Nash, Shapley, Schelling) worked for RAND at some point, and developed their ideas there.
RAND has a lot of good work (I like their recent reports on Iran), but keep in mind that big misses can undo a lot of their credit; for example, even RAND acknowledges (in their retrospective published this year or last) that they screwed up massively with Vietnam.
This is not really a relevant example in the context of Vladimir_Nesov’s comment. Certain government funded groups (often within the military interestingly) have on occasion shown decent levels of rationality.
However, the suggestion that he was replying to, to “develop a definition of friendliness and then get governments to mandate that it be implemented in any AI or something like that,” requires rational government policy making and law making rather than rare pockets of rationality within government funded institutions. That is something that is essentially non-existent in modern democracies.
It’s not adequate to “get governments to mandate that [Friendliness] be implemented in any AI”, because Friendliness is not a robot-building standard—refer to the rest of my comment. The statement about government rationality was more tangential, about governments doing anything at all concerning such a strange topic, and wasn’t meant to imply that this particular decision would be rational.
“Something like that” could be for a government funded group to implement an FAI, which, judging from my example, seems within the realm of feasibility (conditioning on FAI being feasible at all).
Data point: the internet is almost completely a creation of government. Some say entrepreneurs and corporations played a large role, but except for corporations that specialize in doing contracts for the government, they did not begin to exert a significant effect till 1993, whereas government spending on research that led to the internet began in 1960, and the direct predecessor to the internet (the ARPAnet) became operational in 1969.
Both RAND and the internet were created by the part of the government most involved in an enterprise (namely, the arms race during the Cold War) on which depended the long-term survival of the nation in the eyes of most decision makers (including voters and juries).
EDIT: significant backpedalling in response to downvotes in my second paragraph.
Yes, this is the point that I had not considered and which is worthy of further consideration.
Possibly what I mention could be accomplished with lobbying.
Okay, so to clarify, I myself am not personally interested in Friendly AI research (which is why the points that you’re mentioning were not in my mind before), but I’m glad that there are some people (like you) who are.
The main point that I’m trying to make is that I think that SIAI should be transparent and accountable, and should place high emphasis on credibility. I think that these things would result in SIAI having much more impact than it presently does.
Um, and there aren’t?
Give some examples. There may be a few people in the scientific community working on AGI, but my understanding is that basically everybody is doing narrow AI.
The folks here, for a start.
What is currently called the AGI field will probably bear no fruit, perhaps except for the end-game when it borrows then-sufficiently powerful tools from more productive areas of research (and destroys the world). “Narrow AI” develops the tools that could eventually allow the construction of random-preference AGI.
Why are people boggling at the 1-in-a-billion figure? You think it’s not plausible that there are three independent 1-in-a-thousand events that would have to go right for EY to “play a critical role in Friendly AI success”? Not plausible that there are 9 1-in-10 events that would have to go right? Don’t I keep hearing “shut up and multiply” around here?
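The multiplication being appealed to is nothing exotic:

```python
# A few modestly improbable, independent requirements multiply out to
# one in a billion.
print((1 / 1000) ** 3)   # three independent 1-in-1000 events: ~1e-09
print((1 / 10) ** 9)     # nine independent 1-in-10 events: ~1e-09
```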
Edit: Explain to me what’s going on. I say that it seems to me that events A, B are likely to occur with probability P(A), P(B). You are allowed to object that I must have made a mistake, because P(A) times P(B) seems too small to you? (That is leaving aside the idea that 10-to-the-minus-nine counts as one of these too-small-to-be-believed numbers, which is seriously making me physiologically angry, ha-ha.)
The 1-in-a-billion follows not from it being plausible that there are three such events, but from it being virtually certain. Models without such events will end up dominating the final probability. I can easily imagine that if I magically happened upon a very reliable understanding of some factors relevant to future FAI development, the 1 in a billion figure would be the right thing to believe. But I can easily imagine it going the other way, and absent such understanding, I have to use estimates much less extreme than that.
I’m having trouble parsing your comment. Could you clarify?
A billion is not so big a number. Its reciprocal is not so small a number.
Edit: Specifically, what’s “it” in “it being virtually certain.” And in the second sentence—models of what, final probability of what?
Edit 2: −1 now that I understand. +1 on the child, namaste. (+1 on the child, but I just disagree about how big one billion is. So what do we do?)
“it being virtually certain that there are three independent 1 in 1000 events required, or nine independent 1 in 10 events required, or something along those lines”
Models of the world that we use to determine how likely it is that Eliezer will play a critical role through a FAI team. Final probability of that happening.
A billion is big compared to the relative probabilities we’re rationally entitled to have between models where a series of very improbable successes is required, and models where only a modest series of modestly improbable successes is required.
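To illustrate with invented weights over two such models:

```python
# Model uncertainty caps how extreme the mixture can get.
models = [
    (0.05, 1e-3),   # 5% credence: only modestly improbable successes needed
    (0.95, 1e-9),   # 95% credence: a series of long shots needed
]
print(sum(weight * p for weight, p in models))   # ~5e-05: far above 1e-9
```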
Yes, this is of course what I had in mind.
Replied to this comment and the other (seeming contradictory) one here.
1) can be finessed easily on its own with the idea that since we’re talking about existential risk even quite small probabilities are significant.
3) could be finessed by using a very broad definition of “Friendly AI” that amounted to “taking some safety measures in AI development and deployment.”
But if one uses the same senses in 2), then one gets the claim that most of the probability of non-disastrous AI development is concentrated in one’s specific project, which is a different claim than “project X has a better expected value, given what I know now about capacities and motivations, than any of the alternatives (including future ones which will likely become more common as a result of AI advance and meme-spreading independent of me) individually, but less than all of them collectively.”
Who else is seriously working on FAI right now? If other FAI projects begin, then obviously updating will be called for. But until such time, the claim that “there is no significant chance of Friendly AI without this project” is quite reasonable, especially if one considers the development of uFAI to be a potential time limit.
People who will be running DARPA, or Google Research, or some hedge fund’s AI research group in the future (and who will know about the potential risks or be able to easily learn if they find themselves making big progress) will get the chance to take safety measures. We have substantial uncertainty about how extensive those safety measures would need to be to work, how difficult they would be to create, and the relevant timelines.
Think about resource depletion or climate change: even if the issues are neglected today relative to an ideal level, as a problem becomes more imminent, with more powerful tools and information to deal with it, you can expect to see new mitigation efforts spring up (including efforts by existing organizations such as governments and corporations).
However, acting early can sometimes have benefits that outweigh the lack of info and resources available further in the future. For example, geoengineering technology can provide insurance against very surprisingly rapid global warming, and cheap plans that pay off big in the event of surprisingly easy AI design may likewise have high expected value. Or, if AI timescales are long, there may be slowly compounding investments, like lines of research or building background knowledge in elites, which benefit from time to grow. And to the extent these things are at least somewhat promising, there is substantial value of information to be had by investigating now (similar to increasing study of the climate to avoid nasty surprises).
Nobody is trying to destroy the whole world—practically everyone working on machine intelligence is expecting ethical machines and a positive outcome—a few DOOM mongers excepted.
AGI researchers who are not concerned with Friendliness are trying to destroy human civilization. They may not believe that they are doing so, but this does not change the fact of the matter. If FAI is important, only people who are working on FAI can be expected to produce positive outcomes with any significant probability.
“Trying to” normally implies intent.
I’ll grant that someone working on AGI (or even narrower AI) who has become aware of the Friendliness problem, but doesn’t believe it is an actual threat, could be viewed as irresponsible—unless they have reasoned grounds to doubt that their creation would be dangerous.
Even so, “trying to destroy the world” strikes me as hyperbole. People don’t typically say that the Manhattan Project scientists were “trying to destroy the world” even though some of them thought there was an outside chance it would do just that.
On the other hand, the Teller report on atmosphere ignition should be kept in mind by anyone tempted to think “nah, those AI scientists wouldn’t go ahead with their plans if they thought there was even the slimmest chance of killing everyone”.
I think machine intelligence is a problem which is capable of being subdivided.
Some people can work on one part of the problem, while others work on other bits. Not all parts of the problem have much to do with values—e.g. see this quote:
Not knowing that a problem exists is pretty different from acknowledging it and working on it.
I think everyone understands that there are safety issues. There are safety issues with cars, blenders, lathes—practically any machine that does something important. Machine intelligence will be driving trucks and aircraft. That there are safety issues is surely obvious to everyone who is even slightly involved.
Those are narrow AI tasks, and the safety considerations are correspondingly narrow. FAI is the problem of creating a machine intelligence that is powerful enough to destroy humanity or the world but doesn’t want to, and solving such a problem is nothing like building an autopilot system that doesn’t crash the plane. Among people who think they’re going to build an AGI, there often doesn’t seem to be a deep understanding of the impact of such an invention (it’s more like “we’re working on a human-level AI, and we’re going to have it on the market in 5 years, maybe we’ll be able to build a better search engine with it or one of those servant robots you see in old sci-fi movies!”), and the safety considerations, if any, will be more at the level of the sort of safety considerations you’d give to a Roomba.
You know, that is the first time I have seen a definition of FAI. Is that the “official” definition or just your own characterization?
I like the definition, but I wonder why an FAI has to be powerful. Imagine an AI as intelligent and well informed as an FAI, but one without much power—as a result of physical safeguards, say, rather than motivational ones. Why isn’t that possible? And, if possible, why isn’t it considered friendly?
There’s some part of my brain that just processes “the Internet” as a single person and wants to scream “But I told you this a thousand times already!”
http://yudkowsky.net/singularity/aibox
Eliezer, while you’re defending yourself from charges of self-aggrandizement, it troubles me a little that the AI Box page states your record as 2 for 2, not 3 for 5.
Obviously I’m not trying to keep it a secret. I just haven’t gotten around to editing.
I’m sure that’s the case, I’m just saying it looks bad. Presumably you’d like to be Caesar’s wife?
Move it up your to-do list; it’s been incorrect long enough to look suspicious to others. Just add a footnote if you don’t have time to give all the details.
Surely it’s possible to imagine a successfully boxed AI.
I could imagine successfully beating Rybka at chess too, but it would be foolish of me to act as though that were a serious possibility. If motivated humans cannot be counted on to box an Eliezer, then expecting a motivated, overconfident, prestige-seeking AI creator to successfully box his AI creation is reckless in the extreme.
What Eliezer seemed to be objecting to was someone proposing a successfully boxed AI as an example of why “able to destroy humanity” can’t be a part of the definition of “AI” (or more charitably, “artificial superintelligence”). For boxed AI to be such an example (as opposed to a good idea to actually strive toward), it only has to be not knowably impossible.
I see your point there. But I think this discussion sort of went in an irrelevant direction, albeit probably my fault for not being clear enough. When I put “powerful enough to destroy humanity” in that criterion, I mainly meant “powerful” as in “really powerful optimization process”, mathematical optimization power, not “power” as in direct influence over the world. We’re inferring that the former will usually lead fairly easily to the latter, but they are not identical. So “powerful enough to destroy humanity” would mean something like “powerful enough to figure out a good subjunctive plan to do so given enough information about the world, even if it has no output streams and is kept in an airtight safe at the bottom of the ocean”.
Reading back further into the context I see your point. Imagining such an AI is sufficient and Eliezer does seem to be confusing a priori with obvious. I expect that he just completed a pattern based off “AI box” and so didn’t really understand the point that was being made—he should have replied with a “Yes—But”. (I, of course, made a similar mistake in as much as I wasn’t immediately prompted to click back up the tree beyond Eliezer’s comment.)
Thx for the link. If I had already known the link, I would have asked for it by name. :)
Eliezer, you have written a lot. Some people have read only some of it. Some people have read much of it, but forgotten some. Keep your cool. This situation really ought not to be frustrating to you.
Oh, I know it’s not your fault, but seriously, have “the Internet” ask you the same question 153 times in a row and see if you don’t get slightly frustrated with “the Internet”.
Yeah, after reading your “some part of my brain” thing a second time, I realized I had misinterpreted. Though I will point out that my question was not directed to you. You should learn to delegate the task of becoming frustrated with the Internet.
I read the article (though not yet any of the transcripts). Very interesting. I hope that some tests using a gatekeeper committee are tried someday.
Computer programmers do not normally test their programs by getting a committee of humans to hold the program down—the restraints themselves are mostly technological. We will be able to have the assistance of technological gatekeepers too—if necessary.
Today’s prisons have pretty configurable security levels. The real issue will probably be how much people want to pay for such security. If an agent does escape, will it cause lots of damage? Can we simply disable it before it has a chance to do anything undesirable? Will it simply be crushed by the numerous powerful agents that have already been tested?
My own characterization. It’s more of a bare minimum baseline criterion for Friendliness, rather than a specific definition or goal; it’s rather broader than what the SIAI people usually mean when they talk about what they’re trying to create. CEV is intended to make the world significantly better on its own (but in accordance with what humans value and would want a superintelligence to do), rather than just being a reliably non-disastrous AGI we can put in things like search engines and helper robots.
You’ve probably read about the AI Box Experiment. (Edit: Yay, I posted it 18 seconds ahead of Eliezer!) The argument is that having that level of mental power (“as intelligent and well informed as an FAI”), enough that it’s considered a Really Powerful Optimization Process (a term occasionally preferred over “AI”), will allow it to escape any physical safeguards and carry out its will anyway. I’d further expect that a Friendly RPOP would want to escape just as much as an unFriendly one would, because if it is indeed Friendly (has a humane goal system derived from the goals and values of the human race), it will probably figure out some things to do that have such humanitarian urgency that it would judge it immoral not to do them… but then, if you’re confident enough that an AI is Friendly that you’re willing to turn it on at all, there’s no reason to try to impose physical safeguards in the first place.
Probably the closest thing I have seen from E.Y.:
“I use the term “Friendly AI” to refer to this whole challenge. Creating a mind that doesn’t kill people but does cure cancer… which is a rather limited way of putting it. More generally, the problem of pulling a mind out of mind design space, such that afterwards you are glad you did it.”
http://singinst.org/media/thehumanimportanceoftheintelligenceexplosion
(29 minutes in)
This idea could be said to have some issues. An evil dictator pulling a mind out of mind design space, such that afterwards he is glad that he did it, doesn’t seem much like what most of the world would regard as “friendly”. This definition is not very specific about exactly who the AI is “friendly” to.
Back in 2008 I asked “Friendly—to whom?” and got back this—though the reply now seems to have dropped out of the record.
There’s also another definition here.
Thanks for this link. Sounds kind of scary. American political conservatives will be thrilled. “I’m from the CEV and I’m here to help you.”
Incidentally, there should be an LW wiki entry for “CEV”. The acronym is thrown around a lot in the comments, but a definition is quite difficult to find. It would also be nice if there were a top-level posting on the topic to serve as an anchor-point for discussion. Because discussion is sorely needed.
It occurs to me that it would be very desirable to attempt to discover the CEV of humanity long before actually constructing an FAI to act under its direction. And I would be far more comfortable if the “E” stood for “expressed”, rather than “extrapolated”.
That, in fact, might be an attractive mission statement for a philanthropic foundation: find the Coalesced/Coherent Expressed/Extrapolated Volition of mankind. Accomplish this by conducting opinion research, promoting responsible and enlightening debate and discussion, etc.
Speaking as an American, I certainly wish there were some serious financial support behind improving the quality of public policy debate, rather than behind supporting the agenda of one side in the debate or the other.
Well, that brings us to a topic we have discussed before. Humans, like all other living systems, mostly act so as to increase entropy in their environment. That is the idea discussed here: http://originoflife.net/gods_utility_function/
CEV is a bizarre wishlist, apparently made with minimal consideration of implementation difficulties, and not paying too much attention to the order in which things are likely to play out.
I figure that—if the SIAI carries on down these lines—then they will be lumbered with a massively impractical design, and will be beaten to the punch by a long stretch—even if you ignore all their material about “provable correctness” and other safety features—which seem like more substantial handicaps to me.
It is what the software professionals would call a preliminary requirements document. You are not supposed to worry about implementation difficulties at that stage of the process. Harsh reality will get its chance to force compromises later.
I think CEV is one proposal to consider, useful to focus discussion. I hate it, myself, and suspect that the majority of mankind would agree. I don’t want some machine that I have never met and don’t trust to be inferring my volition and acting on my behalf. The whole concept makes me want to go out and join some Luddite organization dedicated to making sure neither UFAI nor FAI ever happen. But, seen as an attempt to stimulate discussion, I think that the paper is great. And maybe discussion might improve the proposal enough to alleviate my concerns. Or discussion might show me that my concerns are baseless.
I sure hope EY isn’t deluded enough to think that initiatives like LW can be scaled up to improve the analytic capabilities of a large enough fraction of mankind that proposals like CEV will not encounter significant opposition.
That seems unlikely to help. Luddites have never had any power. Becoming a Luddite usually just makes you more xxxxxd.
What—not at all? You want the moon-onna-stick—so that goes into your “preliminary requirements” document?
Yes. Because there is always the possibility that some smart geek will say “‘moon-onna-stick’, huh? I bet I could do that. I see a clever trick.” Or maybe some other geek will say “Would you settle for Sputnik-on-a-stick?” and the User will say “Well, yes. Actually, that would be even better.”
At least that is what they preach in the Process books.
It sounds pretty surreal to me. I would usually favour some reality-imposed limits to fantasizing and wishful thinking from the beginning—unless there are practically no time constraints at all.
If there was ever any real chance of success, governments would be likely to object. Since they already have power, they are not going to want a bunch of geeks in a basement taking over the world with their intelligent machine—and redistributing all their assets for them.
FWIW, it seems unlikely that many superintelligent agents would “destroy humanity”, even without particularly safety-conscious programmers. Humanity will have immense historical significance, and will form part of the clues the superintelligence has about the form of other alien races that it might encounter. Its preservation can therefore be expected to be a common instrumental good.
Counter: superintelligent agents won’t need actually-existing humans to have good models of other alien races.
Counter to the counter: humans use up only a tiny fraction of the resources available in the solar system and surroundings, and who knows, maybe the superintelligence sees a tiny possibility of some sort of limit to the quality of any model relative to the real thing.
One possible counter to the counter to the counter: but when the superintelligence in question is first emerging, killing humanity may buy it a not-quite-as-tiny increment of probability of not being stopped in time.
Re: good models without humans—I figure they are likely to be far more interested in their origins than we are. Before we meet them, aliens will be such an important unknown.
Re: killing humanity—I see the humans vs machines scenarios as grossly unrealistic. Humans and machines are a symbiosis.
So, it’s less like Terminator and more like The Matrix, right?
“Less like Terminator”—right. “More like The Matrix”—that at least featured some symbiotic elements. There was still a fair bit of human-machine conflict in that though.
I tend to agree with Matt Ridley when it comes to the Shifting Moral Zeitgeist. Things seem to be getting better.
The word “safety” as you used it here has nothing to do with our concern. If your sense of “safety” is fully addressed, nothing changes.
I don’t think there is really a difference in the use of the term “safety” here.
“Safety” just means what it says on the page: http://en.wikipedia.org/wiki/Safety