What is a genuinely utilitarian lifestyle? Is there someone you can cite as living such a lifestyle?
Obviously humans are extremely ill-suited for being utilitarians (just as humans would be extremely ill-suited for being paperclip maximizers even if they wanted to be.)
When I refer to a “genuinely utilitarian lifestyle” I mean subject to human constraints. There are some people who do this much better than others—for example, Bill Gates and Warren Buffett have done much better than most billionaires.
I think that with a better peer network, Gates and Buffett could have done still better (for example I would have liked to see them take existential risk into serious consideration with their philanthropic efforts).
A key point here is that as I’ve said elsewhere I don’t think that leading a (relatively) utilitarian lifestyle has very much at all to do with personal sacrifice, but rather with realigning one’s personal motivational structure in a way that (at least for many people) does not entail a drop in quality of life. If you haven’t already done so, see my post on missed opportunities for doing well by doing good.
I’m not sure what you’re talking about in the last sentence. Prevent what from happening to Eliezer? Failing to lose hope when he should? (He wrote a post about that, BTW.)
Thanks for the reference. I edited the end of my posting to clarify what I had in mind.
When I refer to a “genuinely utilitarian lifestyle” I mean subject to human constraints. There are some people who do this much better than others—for example, Bill Gates and Warren Buffett have done much better than most billionaires.
If that’s the kind of criterion you have in mind, why did you say “Eliezer appears to be deviating so sharply from leading a genuinely utilitarian lifestyle”? It seems to me that Eliezer has also done much better than most … (what’s the right reference class here? really smart people who have been raised in a developed country?)
Which isn’t to say that he couldn’t do better, but your phrasing strikes me as rather unfair...
What I was getting at in my posting is that, in exhibiting unwillingness to seriously consider the possibility that he’s vastly overestimated his chances of building a Friendly AI, Eliezer appears to be deviating sharply from leading a utilitarian lifestyle (relative to what one can expect from humans).
I was not trying to make a general statement about Eliezer’s attainment of utilitarian goals relative to other humans. I think that there’s a huge amount of uncertainty on this point to such an extent that it’s meaningless to try to make a precise statement.
The statement that I was driving at is a more narrow one.
I think that it would be better for Eliezer and for the world at large if Eliezer seriously considered the possibility that he’s vastly overestimated his chances of building a Friendly AI. I strongly suspect that if he did this, his strategy for reducing existential risk would change for the better. If his current views turn out to be right, he can always return to them later on. I think that the expected benefits of him reevaluating his position far outweigh the expected costs.
I think that it would be better for Eliezer and for the world at large if Eliezer seriously considered the possibility that he’s vastly overestimated his chances of building a Friendly AI. I strongly suspect that if he did this, his strategy for reducing existential risk would change for the better.
Why? What sort of improvement would you expect?
Remember that he is still the one person in the public sphere who takes the problem of Friendly AI (under any name) seriously enough to have devoted his life to it, and who actually has quasi-technical ideas regarding how to achieve it. All this despite the fact that for decades now, in fiction and nonfiction, the human race has been expressing anxiety about the possibility of superhuman AI. Who are his peers, his competitors, his predecessors? If I was writing the history of attempts to think about the problem, Chapter One would be Isaac Asimov with his laws of robotics, Chapter Two would be Eliezer Yudkowsky and the idea of Friendly AI, and everything else would be a footnote.
I think that if he had a more accurate estimation of his chances of building a Friendly AI, this would be better for public relations, for the reasons discussed in Existential Risk and Public Relations.
I think that his unreasonably high estimate of his ability to build a Friendly AI has decreased his willingness to engage with the academic mainstream to an unreasonable degree. I think that his ability to do Friendly AI research would be heightened if he were more willing to engage with the academic mainstream. I think he’d be more likely to find collaborators and more likely to learn the relevant material.
I think that a more accurate assessment of the chances of him building a Friendly AI might lead him to focus on inspiring others and on existential risk reduction advocacy (things that he has demonstrated capacity to do very well) rather than Friendly AI research. I suspect that if this happened, it would maximize his chances of averting global catastrophic risk.
might lead him to focus on inspiring others and on existential risk reduction advocacy (things that he has demonstrated capacity to do very well) rather than Friendly AI research
That would absolutely be a waste. If for some reason he was only to engage in advocacy from now on, it should specifically be Friendly AI advocacy. I point again to the huge gaping absence of other people who specialize in this problem and who have worthwhile ideas. The other “existential risks” have their specialized advocates. No-one else remotely comes close to filling that role for the risks associated with superintelligence.
In other words, the important question is not, what are Eliezer’s personal chances of success; the important question is, who else is offering competent leadership on this issue? Like wedrifid, I don’t even recall hearing a guess from Eliezer about what he thinks the odds of success are. But such guesses are of secondary importance compared to the choice of doing something or doing nothing, in a domain where no-one else is acting. Until other people show up, you have to just go out there and do your best.
I’m pretty sure Eric Drexler went through this already, with nanotechnology. There was a time when Drexler was in a quite unique position, of appreciating the world-shaking significance of molecular machines, having an overall picture of what they imply and how to respond, and possessing a platform (his Foresight Institute) which gave him a little visibility. The situation is very different now. We may still be headed for disaster on that front as well, but at least the ability of society to think about the issues is greatly improved, mostly because broad technical progress in chemistry and nanoscale technology has made it easier for people to see the possibilities and has also clarified what can and can’t be done.
As computer science, cognitive science, and neuroscience keep advancing, the same thing will happen in artificial intelligence, and a lot of Eliezer’s ideas will seem more natural and constructive than they may now appear. Some of them will be reinvented independently. All of them (that survive) should take on much greater depth and richness (compare the word pictures in Drexler’s 1986 book with the calculations in his 1992 book).
Despite all the excesses and distractions, work is being done and foundations for the future are being laid. Also, Eliezer and his colleagues do have many lines into academia, despite the extent to which they exist outside it. So in terms of process, I do consider them to be on track, even if the train shakes violently at times.
Like wedrifid, I don’t even recall hearing a guess from Eliezer about what he thinks the odds of success are.
Eliezer took exception to my estimate linked in my comment here.
If for some reason he was only to engage in advocacy from now on, it should specifically be Friendly AI advocacy.
Quite possibly you’re right about this.
Despite all the excesses and distractions, work is being done and foundations for the future are being laid. Also, Eliezer and his colleagues do have many lines into academia, despite the extent to which they exist outside it.
On this point I agree with SarahC’s second comment here.
I would again recur to my point that Eliezer having an accurate view of his abilities and likelihood of success is important for public relations purposes.
Eliezer took exception to my estimate linked in my comment here.
Less than 1 in 1 billion! :-) May I ask exactly what the proposition was? At the link you say “probability of … you succeeding in playing a critical role on the Friendly AI project that you’re working on”. Now by one reading that probability is 1, since he’s already the main researcher at SIAI.
Suppose we analyse your estimate in terms of three factors:
(probability that anyone ever creates Friendly AI) x
(conditional probability SIAI contributed) x
(conditional probability that Eliezer contributed)
Can you tell us where the bulk of the 10^-9 is located?
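For concreteness, here is a minimal sketch of that decomposition in code; the numbers are arbitrary placeholders to show how the three factors multiply together, not estimates anyone in this thread has endorsed.

```python
# Sketch of the proposed three-factor decomposition. All numbers are
# arbitrary placeholders for illustration, not anyone's actual estimates.
p_fai_ever_created    = 0.05  # P(anyone ever creates Friendly AI)
p_siai_contributed    = 0.02  # P(SIAI contributed | Friendly AI created)
p_eliezer_contributed = 0.5   # P(Eliezer contributed | SIAI contributed)

p_eliezer_critical = p_fai_ever_created * p_siai_contributed * p_eliezer_contributed
print(f"{p_eliezer_critical:.1e}")  # 5.0e-04 with these placeholders

# For the product to come out at 10^-9, at least one of the three factors
# has to be assigned a very small value; the question above is which one.
```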
Eliezer took exception to my estimate linked in my comment here.
And he was right to do so, because that estimate was obviously on the wrong order of magnitude. To make an analogy, if someone says that you weigh 10^5kg, you don’t have to reveal your actual weight (or even measure it) to know that 10^5 was wrong.
if someone says that you weigh 10^5kg, you don’t have to reveal your actual weight (or even measure it) to know that 10^5 was wrong.
But why is the estimate that I gave obviously on the wrong order of magnitude?
From my point of view, his reaction is an indication that his estimate is obviously on the wrong order of magnitude. But I’m still willing to engage with him and hear what he has to say, whereas he doesn’t seem willing to engage with me and hear what I have to say.
But why is the estimate that I gave obviously on the wrong order of magnitude?
The original statement was
“I assign a probability of less than 10^(-9) to you succeeding in playing a critical role on the Friendly AI project that you’re working on.”
The way to estimate probabilities like that is to break them into pieces. This one divides naturally into two pieces: the probability that an AGI will be created in the not-too-distant future, and the probability that Eliezer will play a critical role if it is. For the former, I estimate a probability of 0.8; but it’s a complex and controversial enough topic that I would accept any probability as low as 10^-2 as, if not actually correct, at least not a grievous error. Any probability smaller than 10^-2 would be evidence of severe overconfidence.
Assume we accept this lower probability of 10^-2 for the first piece. For the second piece, as simplifying assumptions, assume there are only 10^1 “critical role” slots, and that they’re assigned randomly out of all the people who might plausibly work on friendly AI. (Since we’re only going for an order of magnitude, we’re allowed to make simplifying assumptions like this; and we have to do so, because otherwise the problem is intractable.) In order to get a probability of 10^-9, you would need to come up with 10^8 candidate AGI researchers, each qualified to a degree similar to Eliezer. By comparison, there are 3.3x10^6 working in all computer and mathematical science occupations put together, of whom maybe 1 in 10^2 has even heard of FAI and none have dedicated their life to it.
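To make the arithmetic in the paragraphs above explicit, here is a minimal sketch using only the numbers already stated (the conceded 10^-2 lower bound, the assumed 10 critical-role slots, and the 3.3x10^6 workforce figure):

```python
# Order-of-magnitude check of the argument above, using only numbers
# already stated in this comment (all of them simplifying assumptions).
p_agi_soon     = 1e-2  # conceded lower bound for AGI in the not-too-distant future
critical_slots = 10    # assumed number of "critical role" slots
target         = 1e-9  # the estimate under dispute

# With slots assigned randomly among n equally qualified candidates,
# target = p_agi_soon * critical_slots / n, so:
candidates_needed = p_agi_soon * critical_slots / target
print(f"{candidates_needed:.0e}")  # 1e+08 candidates

cs_math_workforce = 3.3e6  # all computer and mathematical science occupations
print(candidates_needed / cs_math_workforce)  # ~30, i.e. 30x the entire workforce
```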
It is possible to disagree on probabilities, but it is not possible to disagree by more than a couple orders of magnitude unless someone is either missing crucial information, has made a math error, or doesn’t know how to compute probabilities. And not knowing how to compute probabilities is the norm; it’s a rare skill that has to be specifically cultivated, and there’s no shame in not having it yet. But it is a prerequisite for some (though certainly not all) of the discussions that take place here. And the impression that I got was that you jumped into a discussion you weren’t ready for, and then let the need to be self-consistent guide your arguments in untruthful directions. I think others got the same impression as well. We call this motivated cognition—it’s a standard bias that everyone suffers from to some degree—and avoiding it is also a rare skill that must be specifically cultivated, and there is no shame in not having that skill yet, either. But until you develop your rationality skills further, Eliezer isn’t going to engage with you, and it would be a mistake for him to do so.
The way to estimate probabilities like that is to break them into pieces. This one divides naturally into two pieces: the probability that an AGI will be created in the not-too-distant future, and the probability that Eliezer will play a critical role if it is. For the former, I estimate a probability of 0.8; but it’s a complex and controversial enough topic that I would accept any probability as low as 10^-2 as, if not actually correct, at least not a grievous error. Any probability smaller than 10^-2 would be evidence of severe overconfidence.
I can’t engage with your statement here unless you quantify the phrase “Not-too-distant future.”
For the second piece, as simplifying assumptions, assume there are only 10^1 “critical role” slots, and that they’re assigned randomly out of all the people who might plausibly work on friendly AI. (Since we’re only going for an order of magnitude, we’re allowed to make simplifying assumptions like this; and we have to do so, because otherwise the problem is intractable.) In order to get a probability of 10^-9, you would need to come up with 10^8 candidate AGI researchers, each qualified to a degree similar to Eliezer. By comparison, there are 3.3x10^6 working in all computer and mathematical science occupations put together, of whom maybe 1 in 10^2 has even heard of FAI and none have dedicated their life to it.
I’m very familiar with pure mathematics. My belief is that in pure mathematics the variability in productivity of researchers stretches over many orders of magnitude. By analogy, I would guess that the productivity of Friendly AI researchers will also differ by many orders of magnitude. I suspect that the current SIAI researchers are not at the high end of this range (by virtue of the fact that the most talented researchers are very rare, very few people are currently thinking about these things, and my belief that the correlation between currently thinking about these things and having talent is weak).
• You seem to be implicitly assuming that Friendly AI will be developed before unFriendly AI. This implicit assumption is completely ungrounded.
It is possible to disagree on probabilities, but it is not possible to disagree by more than a couple orders of magnitude unless someone is either missing crucial information, has made a math error, or doesn’t know how to compute probabilities. And not knowing how to compute probabilities is the norm; it’s a rare skill that has to be specifically cultivated, and there’s no shame in not having it yet. But it is a prerequisite for some (though certainly not all) of the discussions that take place here.
I agree with all of this.
And the impression that I got was that you jumped into a discussion you weren’t ready for, and then let the need to be self-consistent guide your arguments in untruthful directions. I think others got the same impression as well.
I can understand how you might have gotten this impression. But I think that it’s important to give people the benefit of the doubt up to a certain point. Too much willingness to dismiss what people say on account of doubting their rationality is conducive to groupthink and confirmation bias.
But until you develop your rationality skills further, Eliezer isn’t going to engage with you, and it would be a mistake for him to do so.
In line with my comment above, I’m troubled by the fact that you’ve so readily assumed that my rationality skills are insufficiently developed for it to be worth Eliezer’s time to engage with me.
I’m very familiar with pure mathematics. My belief is that in pure mathematics the variability in productivity of researchers stretches over many orders of magnitude. By analogy, I would guess that the productivity of Friendly AI researchers will also differ by many orders of magnitude.
Not only that, but sophisticated pure mathematics will surely supply the substance of FAI theory. I’m thinking especially of Ketan Mulmuley’s research program, applying algebraic geometry to computational complexity theory. Many people think it’s the most promising approach to P vs NP.
It has been suggested that the task of Friendly AI boils down to extracting the “human utility function” from the physical facts, and then “renormalizing” this using “reflective decision theory” to produce a human-relative friendly utility function, and then implementing this using a cognitive architecture which is provably stable under open-ended self-directed enhancement. The specification of the problem is still a little handwavy and intuitive, but it’s not hard to see solid, well-defined problems lurking underneath the suggestive words, and it should be expected that the exact answers to those problems will come from a body of “theory” as deep and as lucid as anything presently existing in pure math.
The way to estimate probabilities like that is to break them into pieces. This one divides naturally into two pieces: the probability that an AGI will be created in the not-too-distant future, and the probability that Eliezer will play a critical role if it is. For the former, I estimate a probability of 0.8; but it’s a complex and controversial enough topic that I would accept any probability as low as 10^-2 as, if not actually correct, at least not a grievous error. Any probability smaller than 10^-2 would be evidence of severe overconfidence.
We have to assign probabilities to artificial intelligence first being created on Earth over the Earth’s entire lifetime.
So what probability should we give to the first non-biological intelligence being created in the time period between 3 million years and 3 million and 50 years from now (not necessarily by humans)? Would it be greater than or less than 10^-2? If less than that, what justifies your confidence in that statement rather than your confidence that it will be created soon?
We have to get all these probabilities to sum to the chance we assign to AI ever being created, over the lifetime of the Earth. So I don’t see how we can avoid assigning very small probabilities to AI being created at certain times.
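A rough numerical illustration of that constraint (the Earth-lifetime figure and the total probability below are assumptions chosen only to make the arithmetic concrete, not estimates from this thread):

```python
# Illustration of the summing constraint described above. The remaining
# lifetime of the Earth and the total probability are assumed values.
p_ai_ever       = 0.9  # assumed P(AI is ever created on Earth)
remaining_years = 1e9  # rough remaining habitable lifetime of the Earth
window_years    = 50

n_windows = remaining_years / window_years  # 2e7 fifty-year windows
uniform_share = p_ai_ever / n_windows
print(f"{uniform_share:.1e}")  # ~4.5e-08 per window under a uniform spread

# Nobody spreads the mass uniformly in practice, but since the per-window
# probabilities must sum to p_ai_ever, most windows necessarily receive
# probabilities far smaller than 10^-2.
```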
I think that it would be better for Eliezer and for the world at large if Eliezer seriously considered the possibility that he’s vastly overestimated his chances of building a Friendly AI.
We haven’t heard Eliezer say how likely he believes it is that he creates a Friendly AI. He has been careful not to discuss that subject. If he thought his chances of success were 0.5% then I would expect him to take exactly the same actions.
(ETA: With the insertion of ‘relative’ I suspect I would more accurately be considering the position you are presenting.)
Right, so in my present epistemological state I find it extremely unlikely that Eliezer will succeed in building a Friendly AI. I gave an estimate here which proved to be surprisingly controversial.
The main points that inform my thinking here are:
The precedent for people outside of the academic mainstream having mathematical/scientific breakthroughs in recent times is extremely weak. In my own field of pure math I know of only two people without PhDs in math or related fields who have produced something memorable in the last 70 years or so, namely Kurt Heegner and Martin Demaine. And even Heegner and Demaine are (relatively speaking) quite minor figures. It’s very common for self-taught amateur mathematicians to greatly underestimate the difficulty of substantive original mathematical research. I find it very likely that the same is true in virtually all scientific fields and thus have an extremely skeptical Bayesian prior against any proposition of the type “amateur intellectual X will solve major scientific problem Y.”
From having talked with computer scientists and AI researchers, I have a very strong impression that the consensus is that AGI is way out of reach at present. See for example points #1 and #5 of Scott Aaronson’s The Singularity is Far.
The fact that Eliezer does not appear to have seriously contemplated or addressed the two points above and their implications diminishes my confidence in his odds of success still further.
The fact that Eliezer does not appear to have seriously contemplated or addressed the two points above and their implications diminishes my confidence in his odds of success still further.
That you have this impression greatly diminishes my confidence in your intuitions on the matter. Are you seriously suggesting that Eliezer has not contemplated AI researchers’ opinions about AGI? Or that he hasn’t thought about just how much effort should go into a scientific breakthrough?
Someone please throw a few hundred relevant hyperlinks at this person.
I’m not saying that Eliezer has given my two points no consideration. I’m saying that Eliezer has not given my two points sufficient consideration. By all means, send hyperlinks that you find relevant my way—I would be happy to be proven wrong.
Regarding your first point, I’m pretty sure Eliezer does not expect to solve FAI by himself. Part of the reason for creating LW was to train/recruit potential FAI researchers, and there are also plenty of Ph.D. students among SIAI visiting fellows.
Regarding the second point, do you want nobody to start researching FAI until AGI is within reach?
Regarding your first point, I’m pretty sure Eliezer does not expect to solve FAI by himself. Part of the reason for creating LW was to train/recruit potential FAI researchers, and there are also plenty of Ph.D. students among SIAI visiting fellows.
Right, but the historical precedent for an amateur scientist even being at all involved in a substantial scientific breakthrough over the past 50 years is very weak.
Also, my confidence in Eliezer’s ability to train/recruit potential FAI researchers has been substantially diminished for the reasons that I give in Existential Risk and Public Relations. I personally would be interested in working with Eliezer if he appeared to me to be well grounded. The impressions that I’ve gotten from my private correspondence with Eliezer and from his comments have given me a very strong impression that I would find him too difficult to work with for me to be able to do productive FAI research with him.
Regarding the second point, do you want nobody to start researching FAI until AGI is within reach?
No. I think that it would be worthwhile for somebody to do FAI research in line with Vladimir Nesov’s remarks here and here.
But I maintain that the probability of success is very small and that the only justification for doing it is the possibility of enormous returns. If people had established an institute for the solution of Fermat’s Last Theorem in the 1800s, the chances of anybody there playing a decisive role in the solution of Fermat’s Last Theorem would be very small. I view the situation with FAI as analogous.
Right, but the historical precedent for an amateur scientist even being at all involved in a substantial scientific breakthrough over the past 50 years is very weak.
Hold on—there are two different definitions of the word “amateur” that could apply here, and they lead to very different conclusions. The definition I think of first, is that an amateur at something is someone who doesn’t get paid for doing it, as opposed to a professional who makes a living at it. By this definition, amateurs rarely achieve anything, and if they do, they usually stop being amateurs. But Eliezer’s full-time occupation is writing, thinking, and talking about FAI and related topics, so by this definition, he isn’t an amateur (regardless of whether or not you think he’s qualified for that occupation).
The other definition of “amateur scientist” would be “someone without a PhD”. This definition Eliezer does fit, but by this definition, the amateurs have a pretty solid record. And if you narrow it down to computer software, the amateurs have achieved more than the PhDs have!
I feel like you’ve taken the connotations of the first definition and unknowingly and wrongly transferred them to the second definition.
Okay, so, I agree with some of what you say above. I think I should have been more precise.
A claim of the type “Eliezer is likely to build a Friendly AI” requires (at least in part) a supporting claim of the type “Eliezer is in group X where people in group X are likely to build a Friendly AI.” Even if one finds such a group X, this may not be sufficient because Eliezer may belong to some subgroup of X which is disproportionately unlikely to build a Friendly AI. But one at least has to be able to generate such a group X.
At present I see no group X that qualifies.
1. Taking X to be “humans in the developed world” doesn’t work because the average member of X is extremely unlikely to build a Friendly AI.
2. Taking X to be “people with PhDs in a field related to artificial intelligence” doesn’t work because Eliezer doesn’t have a PhD in artificial intelligence.
3. Taking X to be “programmers” doesn’t work because Eliezer is not a programmer.
4. Taking X to be “people with very high IQ” is a better candidate, but still doesn’t yield a very high probability estimate because very high IQ is not very strongly correlated with technological achievement.
5. Taking X to be “bloggers about rationality” doesn’t work because there’s very little evidence that being a blogger about rationality is correlated with skills conducive to building a Friendly AI.
Which suitable group X do you think that Eliezer falls into?
How about “people who have publicly declared an intention to try to build an FAI”? That seems like a much more relevant reference class, and it’s tiny. (I’m not sure how tiny, exactly, but it’s certainly smaller than 10^3 people right now.) And if someone else makes a breakthrough that suddenly brings AGI within reach, they’ll almost certainly choose to recruit help from that class.
I agree that the class that you mention is a better candidate than the ones that I listed. However:
I find it fairly likely that the class will expand dramatically if there’s a breakthrough that brings AGI within reach.
Announcing interest in FAI does not entail having the skills necessary to collaborate with the people working on an AGI to make it Friendly.
In addition to these points, there’s a factor which makes Eliezer less qualified than the usual member of the class, namely his public relations difficulties. As he says here “I feel like I’m being held to an absurdly high standard … like I’m being asked to solve PR problems that I never signed up for.” As a matter of reality, PR matters in this world. If there was a breakthrough that prompted a company like IBM to decide to build an AGI, I have difficulty imagining them recruiting Eliezer, the reason being that Eliezer says things that sound strange and is far out of the mainstream. However, of course:
(i) I could imagine SIAI’s public relations improving substantially in the future—this would be good and would raise the chances of Eliezer being able to work with the researchers who build an AGI.
(ii) There may of course be other factors which make Eliezer more likely than other members of the class to be instrumental to building a Friendly AI.
Despite factors (i) and (ii), putting all of the information that I have together, my estimate of 10^(-9) still feels about right to me. I’d be happy to continue trading information with you with a view toward syncing up our probabilities if you’re so inclined.
I find it fairly likely that the class will expand dramatically if there’s a breakthrough that brings AGI within reach.
I should hope not! If that happens, it means the person who made the breakthrough released it to the public. That would be a huge mistake, because it would greatly increase the chances of an unfriendly AI being built before a friendly one.
Despite factors (i) and (ii), putting all of the information that I have together, my estimate of 10^(-9) still feels about right to me.
That’s only because you said it in public and aren’t willing to appear inconsistent. You still haven’t decomposed this into manageable pieces with numbers. And since we’ve already seen that you wrote the bottom line first, we would have strong reason to not trust those numbers if you did.
I should hope not! If that happens, it means the person who made the breakthrough released it to the public. That would be a huge mistake, because it would greatly increase the chances of an unfriendly AI being built before a friendly one.
Two points:
It seems very likely to me that there’s a string of breakthroughs which will lead to AGI and that it will gradually become clear to people that they should be thinking about friendliness issues.
Even if there’s a single crucial breakthrough, I find it fairly likely that the person who makes it will not have friendliness concerns in mind.
That’s only because you said it in public and aren’t willing to appear inconsistent. You still haven’t decomposed this into manageable pieces with numbers. And since we’ve already seen that you wrote the bottom line first, we would have strong reason to not trust those numbers if you did.
I believe that the human brain is extremely poorly calibrated for determining probabilities through the explicit process that you describe, and that its intuition is often more reliable for such purposes. My attitude is in line with Holden’s comments 14 and 16 on the GiveWell Singularity Summit thread.
In line with the last two paragraphs of one of my earlier comments, I find your quickness to assume that my thinking on these matters stems from motivated cognition disturbing. Of course, I may be exhibiting motivated cognition, but the same is true of you, and your ungrounded confidence in your superiority to me is truly unsettling. As such, I will cease to communicate further with you unless you resolve to stop confidently asserting that I’m exhibiting motivated cognition.
I don’t think that’s the right way to escape from a Pascal’s mugging. In the case of the SIAI, there isn’t really clear evidence that the organisation is having any positive effect—let alone SAVING THE WORLD. When the benefit could plausibly be small, zero—or indeed negative—one does not need to invoke teeny tiny probabilities to offset it.
If that happens, it means the person who made the breakthrough released it to the public. That would be a huge mistake, because it would greatly increase the chances of an unfriendly AI being built before a friendly one.
You are so concerned about the possibility of failure that you want to slow down research, publication and progress in the field—in order to promote research into safety?
Do you think all progress should be slowed down—or just progress in this area?
The costs of stupidity are a million road deaths a year, and goodness knows how many deaths in hospitals. Intelligence would have to be pretty damaging to outweigh that.
There is an obvious good associated with publication—the bigger the concentration of knowledge about intelligent machines in one place, the greater the wealth inequality that is likely to result, and the harder it would be for the rest of society to deal with a dominant organisation. Spreading knowledge helps spread out the power—which reduces the chance of any one group of people becoming badly impoverished. Such altruistic measures may help to prevent a bloody revolution from occurring.
Right, but the historical precedent for an amateur scientist even being at all involved in a substantial scientific breakthrough over the past 50 years is very weak.
What are we supposed to infer from that? That if you add an amateur scientist to a group of PhDs, that would substantially decrease their chance of making a breakthrough?
The impressions that I’ve gotten from my private correspondence with Eliezer and from his comments have given me a very strong impression that I would find him too difficult to work with for me to be able to do productive FAI research with him.
SIAI held a 3-day decision theory workshop in March that I attended along with Stuart Armstrong and Gary Drescher as outside guests. I feel pretty safe in saying that none of us found Eliezer particularly difficult to work with. I wonder if perhaps you’re generalizing from one example here.
I think that it would be worthwhile for somebody to do FAI research in line with Vladimir Nesov’s remarks here and here.
Do you also think it would be worthwhile for somebody to try to build an organization to do FAI research? If so, who do you think should be doing that, if not Eliezer and his supporters? Or is your position more like cousin_it’s, namely that FAI research should just be done by individuals on their free time for now?
What are we supposed to infer from that? That if you add an amateur scientist to a group of PhDs, that would substantially decrease their chance of making a breakthrough?
No, certainly not. I just don’t see much evidence that Eliezer is presently adding value to Friendly AI research. I think he could be doing more to reduce existential risk if he were operating under different assumptions.
SIAI held a 3-day decision theory workshop in March that I attended along with Stuart Armstrong and Gary Drescher as outside guests. I feel pretty safe in saying that none of us found Eliezer particularly difficult to work with. I wonder if perhaps you’re generalizing from one example here.
Of course you could be right here, but the situation is symmetric: the same could be the case for you, Stuart Armstrong and Gary Drescher. Keep in mind that there’s a strong selection effect here—if you’re spending time with Eliezer you’re disproportionately likely to be well suited to working with Eliezer, and people who have difficulty working with Eliezer are disproportionately unlikely to be posting on Less Wrong or meeting with Eliezer.
My intuition is that there are a lot of good potential FAI researchers who would not feel comfortable working with Eliezer given his current disposition, but I may be wrong.
Do you also think it would be worthwhile for somebody to try to build an organization to do FAI research? If so, who do you think should be doing that, if not Eliezer and his supporters? Or is your position more like cousin_it’s, namely that FAI research should just be done by individuals on their free time for now?
Quite possibly it’s a good thing for Eliezer and his supporters to be building an organization to do FAI research. On the other hand maybe cousin_it’s position is right. I have a fair amount of uncertainty on this point.
The claim that I’m making is quite narrow: that it would be good for the cause of existential risk reduction if Eliezer seriously considered the possibility that he’s greatly overestimated his chances of building a Friendly AI.
I’m not saying that it’s a bad thing to have an organization like SIAI. I’m not saying that Eliezer doesn’t have a valuable role to serve within SIAI. I’m reminded of Robin Hanson’s Against Disclaimers though I don’t feel comfortable with his condescending tone and am not thinking of you in that light :-).
My intuition is that there are a lot of good potential FAI researchers who would not feel comfortable working with Eliezer given his current disposition, but I may be wrong.
This topic seems important enough that you should try to figure out why your intuition says that. I’d be interested in hearing more details about why you think a lot of good potential FAI researchers would not feel comfortable working with Eliezer. And in what ways do you think he could improve his disposition?
I personally would be interested in working with Eliezer if he appeared to me to be well grounded. The impressions that I’ve gotten from my private correspondence with Eliezer and from his comments have given me a very strong impression that I would find him too difficult to work with for me to be able to do productive FAI research with him.
My reading of this is that, before you corresponded privately with Eliezer, you
were interested in personally doing FAI research,
had assigned a high enough probability to Eliezer’s success to consider collaborating with him,
and then, as a result of the correspondence, massively decreased your estimate of Eliezer’s chance of success.
Is this right? If so, I wonder what he could have said that made you change your mind like that. I guess either he privately came off as much less competent than he appeared in the public writings that drew him to your attention in the first place (which seems rather unlikely), or you took his response as some sort of personal affront and responded irrationally.
So, the situation is somewhat different than the one that you describe. Some points of clarification.
• I first came across Overcoming Bias in 2008. Eliezer was recommended to me by a friend who I respect a great deal. My reaction to the first postings that I read by Eliezer was strong discomfort with his apparent grandiosity and self-absorption. This discomfort was sufficiently strong for me to lose interest despite my friend’s endorsement.
• I started reading Less Wrong in earnest in the beginning of 2010. This made it clear to me that Eliezer has a lot to offer and that it was unfortunate that I had been pushed away by my initial reaction.
• I never assigned a very high probability to Eliezer making a crucial contribution to an FAI research project. My thinking was that the enormous positive outcome associated with success might be sufficiently great to justify the project despite the small probability.
• I didn’t get much of a chance to correspond privately with Eliezer at all. He responded to a couple of my messages with one-line dismissive responses and then stopped responding to my subsequent messages. Naturally this lowered the probability that I assigned to being able to collaborate with him. This also lowered my confidence in his ability to attract collaborators in general.
• If Eliezer showed strong ability to attract and work well with collaborators (including elite academics who are working on artificial intelligence research) then I would find it several orders of magnitude more likely that he would make a crucial contribution to an FAI research project. For concreteness I’ll throw out the number 10^(-6).
• I feel that the world is very complicated and that randomness plays a very large role. This leads me to assign a very small probability to the proposition that any given individual will play a crucial role in eliminating existential risk.
• I freely acknowledge that I may be influenced by emotional factors. I make an honest effort at being level headed and sober but as I mention elsewhere, my experience posting on Less Wrong has been emotionally draining. I find that I become substantially less rational when people assume that my motives are impure (some sort of self-fulfilling prophecy).
You may notice that of my last four posts, the first pair was considerably more impartial than the second pair. (This is reflected in the fact that the first pair was upvoted more than the second pair.) My subjective perception is that I started out thinking quite carefully and became less rational as I read and responded to hostile commentators.
I first came across Overcoming Bias in 2008. Eliezer was recommended to me by a friend who I respect a great deal. My reaction to the first postings that I read by Eliezer was strong discomfort with his apparent grandiosity and self-absorption. This discomfort was sufficiently strong for me to lose interest despite my friend’s endorsement.
I’d be really interested to know which posts these were, because it would help me to distinguish between the following interpretations:
(1) First impressions really do matter: even though you and I are probably very similar in many respects, we have different opinions of Eliezer simply because in the first posts of his I read, he sounded more like a yoga instructor than a cult leader; whereas perhaps the first thing you read was some post where his high estimation of his abilities relative to the rest of humanity was made explicit, and you didn’t have the experience of his other writings to allow you to “forgive” him for this social transgression.
(2) We have different personalities, which cause us to interpret people’s words differently: you and I read more or less the same kind of material first, but you just interpreted it as “grandiose” whereas I didn’t.
What’s interesting in any case is that I’m not sure that I actually disagree with you all that much about Eliezer having a small chance of success (though I think you quantify it incorrectly with numbers like 10^(-9) or 10^(-6) -- these are way too small). Where we differ seems to be in the implications we draw from this. You appear to believe that Eliezer and SIAI are doing something importantly wrong, that could be fixed by means of a simple change of mindset, and that they shouldn’t be supported until they make this change. By contrast, my interpretation is that this is an extremely difficult problem, that SIAI is basically the first organization that has begun to make a serious attempt to address it, and that they are therefore worthy of being supported so that they can increase their efforts in the directions they are currently pursuing and potentially have a larger impact than they otherwise would.
I’ve been meaning to ask you: given your interest in reducing existential risk, and your concerns about SIAI’s transparency and their general strategy, have you considered applying to the Visiting Fellows program? That would be an excellent way not only to see what it is they do up close, but also to discuss these very issues in person at length with the people involved in SIAI strategy—which, in my experience, they are very interested in doing, even with short-term visitors.
I’d be really interested to know which posts these were, because it would help me to distinguish between the following interpretations:
Right, so the first posts that I came across were Eliezer’s Coming of Age posts, which I think are unrepresentatively self-absorbed. So I think that the right interpretation is the first that you suggest.
What’s interesting in any case is that I’m not sure that I actually disagree with you all that much about Eliezer having a small chance of success (though I think you quantify it incorrectly with numbers like 10^(-9) or 10^(-6) -- these are way too small). Where we differ seems to be in the implications we draw from this. You appear to believe that Eliezer and SIAI are doing something importantly wrong, that could be fixed by means of a simple change of mindset, and that they shouldn’t be supported until they make this change. By contrast, my interpretation is that this is an extremely difficult problem, that SIAI is basically the first organization that has begun to make a serious attempt to address it, and that they are therefore worthy of being supported so that they can increase their efforts in the directions they are currently pursuing and potentially have a larger impact than they otherwise would.
Since I made my top level posts, I’ve been corresponding with Carl Shulman who informed me of some good things that SIAI has been doing that have altered my perception of the institution. I think that SIAI may be worthy of funding.
Regardless as to the merits of SIAI’s research and activities, I think that in general it’s valuable to promote norms of Transparency and Accountability. I would certainly be willing to fund SIAI if it were strongly recommended by a highly credible external charity evaluator like GiveWell. Note also a comment which I wrote in response to Jasen.
I would like to talk more about these things—would you like to share email addresses? PM me if so.
I’ve been meaning to ask you: given your interest in reducing existential risk, and your concerns about SIAI’s transparency and their general strategy, have you considered applying to the Visiting Fellows program? That would be an excellent way not only to see what it is they do up close, but also to discuss these very issues in person at length with the people involved in SIAI strategy—which, in my experience, they are very interested in doing, even with short-term visitors.
At this point I worry that I’ve alienated the SIAI people to such an extent that they might not be happy to have me. But I’d certainly be willing if they’re favorably disposed toward me.
I’ll remark that back in December after reading Anna Salamon’s posting on the SIAI Visiting Fellows program I did send Anna Salamon a long email expressing some degree of interest and describing some of my concerns without receiving a response. I now find it most plausible that she just forgot about it and that I should have tried again, but maybe you can understand from this how I got the impression that becoming an SIAI Visiting Fellow was not a strong option for me.
I would like to talk more about these things—would you like to share email addresses? PM me if so.
Done.
I’ll remark that back in December after reading Anna Salamon’s posting on the SIAI Visiting Fellows program I did send Anna Salamon a long email expressing some degree of interest and describing some of my concerns without receiving a response. I now find it most plausible that she just forgot about it and that I should have tried again, but maybe you can understand from this how I got the impression that becoming an SIAI Visiting Fellow was not a strong option for me.
As it happens, the same thing happened to me; it turned out that my initial message had been caught in a spam filter. I eventually ended up visiting for two weeks, and highly recommend the experience.
If Eliezer showed strong ability to attract and work well with collaborators (including elite academics who are working on artificial intelligence research) then I would find it several orders of magnitude more likely that he would make a crucial contribution to an FAI research project. For concreteness I’ll throw out the number 10^(-6).
This, along with your other estimate of 10^(-9), implies that your probability for Eliezer being able to eventually attract and work well with collaborators is currently 1/1000. Does that really seem reasonable to you (would you be willing to bet at those odds?), given other evidence besides your private exchange with Eliezer? Such as:
Eliezer already had a close collaborator, namely Marcello
SIAI has successfully attracted many visiting fellows
SIAI has successfully attracted top academics to speak at their Singularity Summit
Eliezer is currently writing a book on rationality, so presumably he isn’t actively trying to recruit collaborators at the moment
Other people’s reports of not finding Eliezer particularly difficult to work with
It seems to me that rationally updating on Eliezer’s private comments couldn’t have resulted in such a low probability. So I think a more likely explanation is that you were offended by the implications of Eliezer’s dismissive attitude towards your comments.
(Although, given Eliezer’s situation, it would probably be a good idea for him to make a greater effort to avoid offending potential supporters, even if he doesn’t consider them to be viable future collaborators.)
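For what it’s worth, the arithmetic behind the 1/1000 figure can be made explicit; this is just a bound that follows from the two numbers already quoted, not a new estimate.

```python
# Spelling out the bound implied by the two quoted numbers: the 10^-9
# overall estimate and the 10^-6 estimate conditional on attracting and
# working well with collaborators.
p_success              = 1e-9  # overall estimate given earlier in the thread
p_success_given_collab = 1e-6  # estimate conditional on attracting collaborators

# By the law of total probability,
#   p_success >= P(collaborators) * p_success_given_collab,
# so the two numbers jointly imply:
p_collab_bound = p_success / p_success_given_collab
print(p_collab_bound)  # 0.001, i.e. at most about 1 in 1000
```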
My subjective perception is that I started out thinking quite carefully and became less rational as I read and responded to hostile commentators.
Your responses to me seem pretty level headed and sober. I hope that means you don’t find my comments too hostile.
This, along with your other estimate of 10^(-9), implies that your probability for Eliezer being able to eventually attract and work well with collaborators is currently 1/1000. Does that really seem reasonable to you (would you be willing to bet at those odds?)
Thinking it over, my estimate of 10^(-6) was way too high. This isn’t because of a lack of faith in Eliezer’s abilities in particular. I would recur to my above remark that I think that everybody has very small probability of succeeding in efforts to eliminate existential risk. We’re part of a complicated chaotic dynamical system and to a large degree our cumulative impact on the world is unintelligible and unexpected (because of a complicated network of unintended consequences, side effects, side effects of the side effects, etc.).
From having talked with computer scientists and AI researchers, I have a very strong impression that the consensus is that AGI is way out of reach at present. See for example points #1 and #5 of Scott Aaronson’s The Singularity is Far.
I don’t think there’s any such consensus. Most of those involved know that they don’t know with very much confidence. For a range of estimates, see the bottom of:
For what it’s worth, in saying “way out of reach” I didn’t mean “chronologically far away,” I meant “far beyond the capacity of all present researchers.” I think it’s quite possible that AGI is just 50 years away.
I think that in the absence of plausibly relevant and concrete directions for AGI/FAI research, the chance of having any impact on the creation of an FAI through research is diminished by many orders of magnitude.
If there are plausibly relevant and concrete directions for AGI/FAI research then the situation is different, but I haven’t heard examples that I find compelling.
“Just 50 years?” Shane Legg’s explanation of why his mode is at 2025:
Thanks for pointing this out. I don’t have the subject matter knowledge to make an independent assessment of the validity of the remarks in the linked article, but it makes points that I had not seen before.
I’d recur to CarlShulman’s remark about selection bias here. I look forward to seeing the results of the hypothetical Bostrom survey and the SIAI collection of all public predictions.
If 15 years is more accurate—then things are a bit different.
I agree. There’s still an issue of a lack of concrete directions of research at present but if 15 years is accurate then I agree with Eliezer that we should be in “crunch” mode (amassing resources specifically directed at future FAI research).
I agree. There’s still an issue of a lack of concrete directions of research at present but if 15 years is accurate then I agree with Eliezer that we should be in “crunch” mode (amassing resources specifically directed at future FAI research).
At any rate, most rationalists who have seriously considered the topic will agree that there is a large amount of probability mass 15 years into the future: large enough that even if the median estimate till AGI is 2050, we’re still in serious crunch time. The tails are fat in both directions. (This is important because it takes away a lot of the Pascalian flavoring that makes people (justifiably) nervous when reasoning about whether or not to donate to FAI projects: 15% chance of FOOM before 2020 just feels very different to a bounded rationalist than a .5% chance of FOOM before 2020.)
Thanks for pointing this out. I don’t have the subject matter knowledge to make an independent assessment of the validity of the remarks in the linked article, but it makes points that I had not seen before.
For what it’s worth, Shane Legg is a pretty reasonable fellow who understands that AGI isn’t automatically good, so we can at least rule out that his predictions are tainted by the thoughts of “Yay, technology is good, AGI is close!” that tend to cast doubt on the lack of bias in most AGI researchers’ and futurists’ predictions. He’s familiar with the field and indeed wrote the book on Machine Super Intelligence. I’m more persuaded by Legg’s arguments than most at SIAI, though, and although this isn’t a claim that is easily backed by evidence, the people at SIAI are really freakin’ good thinkers and are not to be disagreed with lightly.
At any rate, most rationalists who have seriously considered the topic will agree that there is a large amount of probability mass 15 years into the future: large enough that even if the median estimate till AGI is 2050, we’re still in serious crunch time. The tails are fat in both directions.
I recur to my concern about selection effects. If it really is reasonable to place a large amount of probability mass 15 years into the future, why are virtually all mainstream scientists (including the best ones) apparently oblivious to this?
I do think that it’s sufficiently likely that the people in academia have erred that it’s worth my learning more about this topic and spending some time pressing people within academia on this point. But at present I assign a low probability (~5%) to the notion that the mainstream has missed something so striking as a large probability of a superhuman AI within 15 years.
Incidentally, I do think that decisive paradigm-changing events are very likely to occur over the next 200 years and that this warrants focused effort on making sure that society is running as well as possible (as opposed to doing pure scientific research with the justification that it may pay off in 500 years).
A fair response to this requires a post that Less Wrong desperately needs to read: People Are Crazy, the World Is Mad. Unfortunately this requires that I convince Michael Vassar or Tom McCabe to write it. Thus, I am now on a mission to enlist the great power of Thomas McCabe.
(A not-so-fair response: you underestimate the extent to which academia is batshit insane just like nearly every individual in it, you overestimate the extent to which scientists ever look outside of their tiny fields of specialization, you overestimate the extent to which the most rational scientists are willing to put their reputations on the line by even considering much less accepting an idea as seemingly kooky as ‘human-level AI by 2035’, and you underestimate the extent to which the most rational scientists are starting to look at the possibility of AGI in the next 50 years (which amounts to non-trivial probability mass in the next 15). I guess I don’t know who the very best scientists are. (Dawkins and Tooby/Cosmides impress me a lot; Tooby was at the Summit. He signed a book that’s on my table top. :D ) Basically, I think you’re giving academia too much credit. These are all assertions, though; like I said, this response is not a fair one, but this way at least you can watch for a majoritarian bias in your thinking and a contrarian bias in my arguments.)
As for your “not-so-fair response”—I seriously doubt that you know enough about academia to have any confidence in this view. I think that first hand experience is crucial to developing a good understanding of the strengths and weaknesses of academia.
(I say this with all due respect—I’ve read and admired some of your top level posts.)
As for your “not-so-fair response”—I seriously doubt that you know enough about academia to have any confidence in this view. I think that first hand experience is crucial to developing a good understanding of the strengths and weaknesses of academia.
I definitely don’t have the necessary first-hand-experience: I was reporting second-hand the impressions of a few people who I respect but whose insights I’ve yet to verify. Sorry, I should have said that. I deserve some amount of shame for my lack of epistemic hygiene there.
(I say this with all due respect—I’ve read and admired some of your top level posts.)
Thanks! I really appreciate it. A big reason for the large amounts of comments I’ve been barfing up lately is a desire to improve my writing ability such that I’ll be able to make more and better posts in the future.
If it really is reasonable to place a large amount of probability mass 15 years into the future, why are virtually all mainstream scientists (including the best ones) apparently oblivious to this?
How do you support this? Have you done a poll of mainstream scientists (or better yet—the ‘best’ ones)? I haven’t seen a poll exactly, but when IEEE ran a special on the Singularity, the opinions were divided almost 50/50. It’s also important to note that the IEEE editor was against the Singularity hypothesis, if I remember correctly, so there may be some bias there.
And whose opinions should we count exactly? Do we value the opinions of historians, economists, psychologists, chemists, geologists, astronomers, etc etc as much as we value the opinions of neuroscientists, computer scientists, and engineers?
I’d actually guess that at this point in time, a significant chunk of the intelligence of say Silicon Valley believes that the default Kurzweil/Moravec view is correct—AGI will arrive around when Moore’s law makes it so.
200 years? There is wisdom in some skepticism, but that seems excessive. If you hold such a view, you should analyze it with respect to its fundamental support based on a predictive technological roadmap—not a general poll of scientists.
The semiconductor industry predicts its own future pretty accurately, but they don’t invite biologists, philosophers or mathematicians to those meetings. Their roadmap, and Moore’s law in general, is the most relevant input for predicting AGI.
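A back-of-the-envelope sketch of what such a roadmap-style extrapolation looks like; all numbers here are rough assumptions of mine (a Kurzweil/Moravec-style brain estimate of ~10^16 ops/sec, ~10^15 ops/sec for a circa-2010 top supercomputer, and a two-year doubling time), not figures taken from the comment above:

```python
import math

# Toy roadmap extrapolation: years of Moore's-law doubling needed for a top
# machine to reach a commonly cited brain-scale figure. All three constants
# below are illustrative assumptions, and this says nothing about the
# software side of the problem.
brain_ops_per_sec = 1e16      # rough Kurzweil/Moravec-style brain estimate
current_ops_per_sec = 1e15    # roughly a circa-2010 top supercomputer
doubling_time_years = 2.0

doublings_needed = math.log2(brain_ops_per_sec / current_ops_per_sec)
print(doublings_needed * doubling_time_years)  # ~6.6 years to brain-scale hardware
```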
I base my own internal estimate on my own knowledge of the relevant fields—partly because this is so interesting and important that one should spend time investigating it.
I honestly suspect that most people who reject the possibility of near-term AGI have some deeper philosophical rejection.
If you are a materialist then intelligence is just another algorithm—something the brain does, and something we can build. It is an engineering problem and subject to the same future planning that we use for other engineering challenges.
How do you support this? Have you done a poll of mainstream scientists (or better yet—the ‘best’ ones)?
I have not done a poll of mainstream scientists. Aside from Shane Legg, the one mainstream scientist who I know of who has written on this subject is Scott Aaronson in his The Singularity Is Far article.
I was not claiming that I have strong grounds for confidence in my impressions of expert views. But it is the case that if there’s a significant probability that we’ll see AGI over the next 15 years, mainstream scientists are apparently oblivious to this. They are not behaving as I would expect them to if they believed that AGI is 15 years off.
I haven’t seen a poll exactly, but when IEEE ran a special on the Singularity, the opinions were divided almost 50/50. It’s also important to note that the IEEE editor was against the Singularity hypothesis, if I remember correctly, so there may be some bias there.
Can you give a reference?
I’d actually guess that at this point in time, a significant chunk of the intelligence of say Silicon Valley believes that the default Kurzweil/Moravec view is correct—AGI will arrive around when Moore’s law makes it so.
This is interesting. I presume then that they believe that the software aspect of the problem is easy. Why do they believe this?
200 years? There is wisdom in some skepticism, but that seems excessive. If you hold such a view, you should analyze it with respect to its fundamental support based on a predictive technological roadmap—not a general poll of scientists.
I have sufficiently little subject-matter knowledge that it’s reasonable for me to take the outside view here and listen to people who seem to know what they’re talking about rather than attempting a detailed analysis myself.
Aside from Shane Legg, the one mainstream scientist who I know of who has written on this subject is Scott Aaronson in his The Singularity Is Far article.
Yes, from my reading of Shane Legg I think his prediction is a reasonable inside view and close to my own. But keep in mind it is also something of a popular view. Kurzweil’s latest tome was probably not much news for most of its target demographic (Silicon Valley).
I’ve read Aaronson’s post and his counterview seems to boil down to generalized pessimism, which I don’t find to be especially illuminating. However, he does raise the good point about solving subproblems first. Of course, Kurzweil spends a good portion of TSIN summarizing progress in sub-problems of reverse engineering the brain.
There appears to be a good deal of neuroscience research going on right now, and though there is perhaps not as much serious computational neuroscience and AGI research as we might like, it is still proceeding. MIT’s lab is no joke.
There is some sort of strange academic stigma, though, as Legg discusses on his blog—almost like a silent conspiracy against serious academic AGI research. Nonetheless, there appears to be no stigma against the precursors, which is where one needs to start anyway.
I was not claiming that I have strong grounds for confidence in my impressions of expert views. But it is the case that if there’s a significant probability that we’ll see AGI over the next 15 years, mainstream scientists are apparently oblivious to this. They are not behaving as I would expect them to if they believed that AGI is 15 years off.
I do not think we can infer their views on this matter based on their behaviour. Given the general awareness of the meme I suspect a good portion of academics in general have heard of it. That doesn’t mean that anyone will necessarily change their behavior.
I agree this seems really odd, but then I think—how have I changed my behavior? And it dawns on me that this is a much more complex topic.
For the IEEE Singularity issue—just Google it, something like “IEEE Singularity special issue”. I’m having slow internet atm.
This is interesting. I presume then that they believe that the software aspect of the problem is easy. Why do they believe this?
Because any software problem can become easy given enough hardware.
For example, we have enough neuroscience data to build reasonably good models of the low-level cortical circuits today. We also know the primary function of perhaps 5% of the higher-level pathways. For much of that missing 95% we have abstract theories but are still very much in the dark.
With enough computing power we could skip tricky neuroscience or AGI research and just string together brain-ish networks built on our current cortical circuit models, throw them in a massive VR game-world sim that sets up increasingly difficult IQ puzzles as a fitness function, and use massive evolutionary search to get something intelligent.
The real solution may end up looking something like that, but will probably use much more human intelligence and be less wasteful of our computational intelligence.
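For concreteness, here is a toy sketch of the “massive evolutionary search against a fitness function” idea described above. Everything in it is a stand-in: the “genome” is a bit string and the “IQ puzzle” is just matching a hidden target, so it illustrates only the shape of the loop, not anything brain-ish.

```python
import random

random.seed(0)
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 64, 50, 200, 0.02
target = [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Stand-in for "score on increasingly difficult IQ puzzles".
    return sum(g == t for g, t in zip(genome, target))

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

# Start from a random population and repeatedly keep and mutate the fittest.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 5]  # keep the top 20% as parents
    population = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]

print(max(fitness(g) for g in population), "/", GENOME_LEN)
```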
This is interesting. I presume then that they believe that the software aspect of the problem is easy. Why do they believe this?
Because any software problem can become easy given enough hardware.
That would have been a pretty naive reply—since we know from public key crypto that it is relatively easy to make really difficult problems that require stupendous quantities of hardware to solve.
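Rough numbers behind this point, using a symmetric 128-bit key for simplicity rather than the public-key setting mentioned above (the machine speed is my own assumption):

```python
# Expected brute-force work for a random 128-bit key: half the keyspace.
keys_to_try = 2 ** 127
ops_per_second = 1e18        # a generously fast hypothetical machine
seconds_per_year = 3.15e7
print(keys_to_try / ops_per_second / seconds_per_year)  # ~5e12 years
```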
IMO, the biggest reason we have for thinking that the software will be fairly tractable, is that we have an existing working model which we could always just copy—if the worst came to the worst.
Agreed, although it will be very difficult to copy it without understanding it in considerably more detail than we do at present. Copying without any understanding (whole brain scanning and emulation) is possible in theory, but the required engineering capability for that level of scanning technology seems pretty far into the future at the moment.
A poll of mainstream scientists sounds like a poor way to get an estimate of the date of arrival of “human-level” machine minds—since machine intelligence is a complex and difficult field—and so most outsiders will probably be pretty clueless.
Also, 15 years is still a long way off: people may think 5 years out, when they are feeling particularly far sighted. Expecting major behavioral changes from something 15 years down the line seems a bit unreasonable.
I’d actually guess that at this point in time, a significant chunk of the intelligence of say Silicon Valley believes that the default Kurzweil/Moravec view is correct—AGI will arrive around when Moore’s law makes it so.
Of course, neither Kurzweil nor Moravec thinks any such thing—both estimate that a computer with the same processing power as the human brain will arrive a considerable while before they think the required software will be developed.
The biggest optimist I have come across is Peter Voss. His estimate in 2009 was around 8 years (7:00 in). However, he obviously has something to sell—so maybe we should not pay too much attention to his opinion—due to the signalling effects associated with confidence.
Eliezer addresses point 2 in the comments of the article you linked to in point 2. He’s also previously answered the questions of whether he believes he personally could solve FAI and how far out it is—here, for example.
Thanks for the references, both of which I had seen before.
Concerning Eliezer’s response to Scott Aaronson: I agree that there’s a huge amount of uncertainty about these things and it’s possible that AGI will develop unexpectedly, but don’t see how this points in the direction of AGI being likely to be developed within decades. It seems like one could have said the same thing that Eliezer is saying in 1950 or even 1800. See Holden’s remarks about noncontingency here.
(1) Even though the FAI problem is incredibly difficult, it’s still worth working on because the returns attached to success would be enormous.
(2) Lots of people who have worked on AGI are mediocre.
(3) The field of AI research is not well organized.
Claim (1) might be true. I suspect that both of claims (2) and (3) are true. But by themselves these claims offer essentially no support for the idea that Eliezer is likely to be able to build a Friendly AI.
Edit: Should I turn my three comments starting here into a top level posting? I hesitate to do so in light of how draining I’ve found the process of making top level postings and especially reading and responding to the ensuing comments, but the topic may be sufficiently important to justify the effort.
exhibiting unwillingness to seriously consider the possibility that he’s vastly overestimated his chances of building a Friendly AI
What evidence do you have of this? One reason I doubt that it’s true is that Eliezer has been relatively good at admitting flaws in his ideas, even when doing so implied that building FAI is harder than he previously thought. I think you could reasonably argue that he’s still overconfident about his chances of successfully building FAI, but I don’t see how you get “unwillingness to seriously consider the possibility”.
Eliezer was not willing to engage with my estimate here. See his response. For the reasons that I point out here, I think that my estimate is well grounded.
Eliezer’s apparent lack of willingness to engage with me on this point does not immediately imply that he’s unwilling to seriously consider the possibility that I raise. But I do see it as strongly suggestive.
As I said in response to ThomBlake, I would be happy to be pointed to any of Eliezer’s writings which support the idea that Eliezer has given serious consideration to the two points that I raised to explain my estimate.
Edit: I’ll also add that given the amount of evidence that I see against the proposition that Eliezer will build a Friendly AI, I have difficulty imagining how he could be persisting in holding his beliefs without having failed to give serious consideration to the possibility that he might be totally wrong. It seems very likely to me that if he had explored this line of thought, he would have a very different world view than he does at present.
I’ll also add that given the amount of evidence that I see against the proposition that Eliezer will build a Friendly AI, I have difficulty imagining how he could be persisting in holding his beliefs without having failed to give serious consideration to the possibility that he might be totally wrong.
Have you noticed that many (most?) commenters/voters seem to disagree with your estimate? That’s not necessarily strong evidence that your estimate is wrong (in the sense that a Bayesian superintelligence wouldn’t assign a probability as low as yours), but it does show that many reasonable and smart people disagree with your estimate even after seriously considering your arguments. To me that implies that Eliezer could disagree with your estimate even after seriously considering your arguments, so I don’t think his “persisting in holding his beliefs” offers much evidence for your position that Eliezer exhibited “unwillingness to seriously consider the possibility that he’s vastly overestimated his chances of building a Friendly AI”.
Have you noticed that many (most?) commenters/voters seem to disagree with your estimate?
Yes. Of course, there’s a selection effect here—the people on LW are more likely to assign a high probability to the proposition that Eliezer will build a Friendly AI (whether or not there’s epistemic reason to do so).
The people outside of LW who I talk to on a regular basis have an estimate in line with my own. I trust these people’s judgment more than I trust LW posters’ judgment simply because I have much more information about their positive track records for making accurate real world judgments than I do for the people on LW.
To me that implies that Eliezer could disagree with your estimate even after seriously considering your arguments, so I don’t think his “persisting in holding his beliefs” offers much evidence for your position that Eliezer exhibited “unwillingness to seriously consider the possibility that he’s vastly overestimated his chances of building a Friendly AI”.
Yes, so I agree that in your epistemological state you should feel this way. I’m explaining why in my epistemological state I feel the way I do.
In your own epistemological state, you may be justified in thinking that Eliezer and other LWers are wrong about his chances of success, but even granting that, I still don’t see why you’re so sure that Eliezer has failed to “seriously consider the possibility that he’s vastly overestimated his chances of building a Friendly AI”. Why couldn’t he have, like the other LWers apparently did, considered the possibility and then (erroneously, according to your epistemological state) rejected it?
Why couldn’t he have, like the other LWers apparently did, considered the possibility and then (erroneously, according to your epistemological state) rejected it?
My experience reading Eliezer’s writings is that he’s very smart and perceptive. I find it implausible that somebody so smart and perceptive could miss something for which there is (in my view) so much evidence if he had engaged in such consideration. So I think that what you suggest could be the case, but I find it quite unlikely.
Sorry to take so long to get back to you :)
Why? What sort of improvement would you expect?
Remember that he is still the one person in the public sphere who takes the problem of Friendly AI (under any name) seriously enough to have devoted his life to it, and who actually has quasi-technical ideas regarding how to achieve it. All this despite the fact that for decades now, in fiction and nonfiction, the human race has been expressing anxiety about the possibility of superhuman AI. Who are his peers, his competitors, his predecessors? If I was writing the history of attempts to think about the problem, Chapter One would be Isaac Asimov with his laws of robotics, Chapter Two would be Eliezer Yudkowsky and the idea of Friendly AI, and everything else would be a footnote.
Three points:
1. I think that if he had a more accurate estimation of his chances of building a Friendly AI, this would be better for public relations, for the reasons discussed in Existential Risk and Public Relations.
2. I think that his unreasonably high estimate of his ability to build a Friendly AI has decreased his willingness to engage with the academic mainstream to an unreasonable degree. I think that his ability to do Friendly AI research would be heightened if he were more willing to engage with the academic mainstream. I think he’d be more likely to find collaborators and more likely to learn the relevant material.
3. I think that a more accurate assessment of the chances of him building a Friendly AI might lead him to focus on inspiring others and on existential risk reduction advocacy (things that he has demonstrated capacity to do very well) rather than Friendly AI research. I suspect that if this happened, it would maximize his chances of averting global catastrophic risk.
That would absolutely be a waste. If for some reason he was only to engage in advocacy from now on, it should specifically be Friendly AI advocacy. I point again to the huge gaping absence of other people who specialize in this problem and who have worthwhile ideas. The other “existential risks” have their specialized advocates. No-one else remotely comes close to filling that role for the risks associated with superintelligence.
In other words, the important question is not, what are Eliezer’s personal chances of success; the important question is, who else is offering competent leadership on this issue? Like wedrifid, I don’t even recall hearing a guess from Eliezer about what he thinks the odds of success are. But such guesses are of secondary importance compared to the choice of doing something or doing nothing, in a domain where no-one else is acting. Until other people show up, you have to just go out there and do your best.
I’m pretty sure Eric Drexler went through this already, with nanotechnology. There was a time when Drexler was in a quite unique position, of appreciating the world-shaking significance of molecular machines, having an overall picture of what they imply and how to respond, and possessing a platform (his Foresight Institute) which gave him a little visibility. The situation is very different now. We may still be headed for disaster on that front as well, but at least the ability of society to think about the issues is greatly improved, mostly because broad technical progress in chemistry and nanoscale technology has made it easier for people to see the possibilities and has also clarified what can and can’t be done.
As computer science, cognitive science, and neuroscience keep advancing, the same thing will happen in artificial intelligence, and a lot of Eliezer’s ideas will seem more natural and constructive than they may now appear. Some of them will be reinvented independently. All of them (that survive) should take on much greater depth and richness (compare the word pictures in Drexler’s 1986 book with the calculations in his 1992 book).
Despite all the excesses and distractions, work is being done and foundations for the future are being laid. Also, Eliezer and his colleagues do have many lines into academia, despite the extent to which they exist outside it. So in terms of process, I do consider them to be on track, even if the train shakes violently at times.
Eliezer took exception to my estimate linked in my comment here.
Quite possibly you’re right about this.
On this point I agree with SarahC’s second comment here.
I would again recur to my point about Eliezer having an accurate view of his abilities and likelihood of success being important for public relations purposes.
Less than 1 in 1 billion! :-) May I ask exactly what the proposition was? At the link you say “probability of … you succeeding in playing a critical role on the Friendly AI project that you’re working on”. Now by one reading that probability is 1, since he’s already the main researcher at SIAI.
Suppose we analyse your estimate in terms of three factors:
(probability that anyone ever creates Friendly AI) x (conditional probability SIAI contributed) x (conditional probability that Eliezer contributed)
Can you tell us where the bulk of the 10^-9 is located?
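To make the question concrete: the same 10^-9 can be reached by very different allocations of improbability across the three factors. The numbers below are purely hypothetical placeholders, not anyone’s actual estimates:

```python
# Three hypothetical ways to spread nine orders of magnitude across
# (anyone ever creates FAI) x (SIAI contributed) x (Eliezer contributed).
allocations = [
    (1e-2, 1e-3, 1e-4),  # most of the improbability in the last factor
    (1e-5, 1e-2, 1e-2),  # most of it in "anyone ever creates FAI"
    (1e-3, 1e-3, 1e-3),  # spread evenly
]
for p_ever, p_siai, p_eliezer in allocations:
    print(p_ever * p_siai * p_eliezer)  # each product is ~1e-9
```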
And he was right to do so, because that estimate was obviously on the wrong order of magnitude. To make an analogy, if someone says that you weigh 10^5kg, you don’t have to reveal your actual weight (or even measure it) to know that 10^5 was wrong.
I agree with
But why is the estimate that I gave obviously on the wrong order of magnitude?
From my point of view, his reaction is an indication that his estimate is obviously on the wrong order of magnitude. But I’m still willing to engage with him and hear what he has to say, whereas he doesn’t seem willing to engage with me and hear what I have to say.
The original statement was
The way to estimate probabilities like that is to break them into pieces. This one divides naturally into two pieces: the probability that an AGI will be created in the not-too-distant future, and the probability that Eliezer will play a critical role if it is. For the former, I estimate a probability of 0.8; but it’s a complex and controversial enough topic that I would accept any probability as low as 10^-2 as, if not actually correct, at least not a grievous error. Any probability smaller than 10^-2 would be evidence of severe overconfidence.
Assume we accept this lower probability of 10^-2 for the first piece. For the second piece, as simplifying assumptions, assume there are only 10^1 “critical role” slots, and that they’re assigned randomly out of all the people who might plausibly work on friendly AI. (Since we’re only going for an order of magnitude, we’re allowed to make simplifying assumptions like this; and we have to do so, because otherwise the problem is intractable.) In order to get a probability of 10^-9, you would need to come up with 10^8 candidate AGI researchers, each qualified to a degree similar to Eliezer. By comparison, there are 3.3x10^6 people working in all computer and mathematical science occupations put together, of whom maybe 1 in 10^2 has even heard of FAI and none have dedicated their life to it.
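The arithmetic of this Fermi estimate, reproduced directly from the figures in the paragraph above (the 10^-2, the 10 slots, the 10^8 candidates, and the 3.3x10^6 workers are that comment’s numbers, not mine):

```python
p_agi_soon = 1e-2        # the accepted lower bound for the first piece
critical_slots = 10      # assumed number of "critical role" slots
# Candidates needed to drive the product down to 1e-9:
print(p_agi_soon * critical_slots / 1e8)     # -> 1e-09
# For comparison, using all computer and mathematical science workers:
print(p_agi_soon * critical_slots / 3.3e6)   # -> ~3e-08
```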
It is possible to disagree on probabilities, but it is not possible to disagree by more than a couple orders of magnitude unless someone is either missing crucial information, has made a math error, or doesn’t know how to compute probabilities. And not knowing how to compute probabilities is the norm; it’s a rare skill that has to be specifically cultivated, and there’s no shame in not having it yet. But it is a prerequisite for some (though certainly not all) of the discussions that take place here. And the impression that I got was that you jumped into a discussion you weren’t ready for, and then let the need to be self-consistent guide your arguments in untruthful directions. I think others got the same impression as well. We call this motivated cognition—it’s a standard bias that everyone suffers from to some degree—and avoiding it is also a rare skill that must be specifically cultivated, and there is no shame in not having that skill yet, either. But until you develop your rationality skills further, Eliezer isn’t going to engage with you, and it would be a mistake for him to do so.
I can’t engage with your statement here unless you quantify the phrase “Not-too-distant future.”
Two points here:
•Quoting a comment that I wrote in July:
I’m very familiar with pure mathematics. My belief is that in pure mathematics the variability in productivity of researchers stretches over many orders of magnitude. By analogy, I would guess that the productivity of Friendly AI researchers will also differ by many orders of magnitude. I suspect that the current SIAI researchers are not at the high end of this range (by virtue of the fact that the most talented researchers are very rare, that very few people are currently thinking about these things, and my belief that the correlation between currently thinking about these things and having talent is weak).
•You seem to be implicitly assuming that Friendly AI will be developed before unFriendly AI. This implicit assumption is completely ungrounded.
I agree with all of this.
I can understand how you might have gotten this impression. But I think that it’s important to give people the benefit of the doubt up to a certain point. Too much willingness to dismiss what people say on account of doubting their rationality is conducive to groupthink and confirmation bias.
In line with my comment above, I’m troubled by the fact that you’ve so readily assumed that my rationality skills are insufficiently developed for it to be worth Eliezer’s time to engage with me.
Not only that, but sophisticated pure mathematics will surely supply the substance of FAI theory. I’m thinking especially of Ketan Mulmuley’s research program, applying algebraic geometry to computational complexity theory. Many people think it’s the most promising approach to P vs NP.
It has been suggested that the task of Friendly AI boils down to extracting the “human utility function” from the physical facts, and then “renormalizing” this using “reflective decision theory” to produce a human-relative friendly utility function, and then implementing this using a cognitive architecture which is provably stable under open-ended self-directed enhancement. The specification of the problem is still a little handwavy and intuitive, but it’s not hard to see solid, well-defined problems lurking underneath the suggestive words, and it should be expected that the exact answers to those problems will come from a body of “theory” as deep and as lucid as anything presently existing in pure math.
We have to assign probabilities to artificial intelligence being first created on earth over the earth’s entire lifetime.
So what probability should we give to the first non-biological intelligence being created in the time period between 3 million years and 3 million and 50 years from now (not necessarily by humans)? Would it be greater than or less than 10^-2? If less than that, what justifies your confidence in that statement rather than your confidence that it will be created soon?
We have to get all these probabilities to sum to the chance we assign to AI ever being created, over the lifetime of the earth. So I don’t see how we can avoid very small probabilities in AI being created at certain times.
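A minimal illustration of this summing constraint, with assumed numbers (an overall 0.8 probability that AI is ever created, and roughly a billion years of remaining habitable lifetime, both rough placeholders of mine):

```python
# If the total probability of AI ever being created is spread uniformly over
# the earth's remaining habitable lifetime, each 50-year window gets very
# little mass; assigning a window much more than this requires a reason.
p_ai_ever = 0.8
remaining_years = 1e9
window_years = 50
print(p_ai_ever * window_years / remaining_years)  # 4e-08 per 50-year window
```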
We haven’t heard Eliezer say how likely he believes it is that he creates a Friendly AI. He has been careful not to discuss that subject. If he thought his chances of success were 0.5% then I would expect him to take exactly the same actions.
(ETA: With the insertion of ‘relative’ I suspect I would more accurately be considering the position you are presenting.)
Right, so in my present epistemological state I find it extremely unlikely that Eliezer will succeed in building a Friendly AI. I gave an estimate here which proved to be surprisingly controversial.
The main points that inform my thinking here are:
1. The precedent for people outside of the academic mainstream having mathematical/scientific breakthroughs in recent times is extremely weak. In my own field of pure math I know of only two people without PhDs in math or related fields who have produced something memorable in the last 70 years or so, namely Kurt Heegner and Martin Demaine. And even Heegner and Demaine are (relatively speaking) quite minor figures. It’s very common for self-taught amateur mathematicians to greatly underestimate the difficulty of substantive original mathematical research. I find it very likely that the same is true in virtually all scientific fields and thus have an extremely skeptical Bayesian prior against any proposition of the type “amateur intellectual X will solve major scientific problem Y.”
2. From having talked with computer scientists and AI researchers, I have a very strong impression that the consensus is that AGI is way out of reach at present. See for example points #1 and #5 of Scott Aaronson’s The Singularity is Far.
The fact that Eliezer does not appear to have seriously contemplated or addressed the two points above and their implications diminishes my confidence in his odds of success still further.
That you have this impression greatly diminishes my confidence in your intuitions on the matter. Are you seriously suggesting that Eliezer has not contemplated AI researchers’ opinions about AGI? Or that he hasn’t thought about just how much effort should go into a scientific breakthrough?
Someone please throw a few hundred relevant hyperlinks at this person.
I’m not saying that Eliezer has given my two points no consideration. I’m saying that Eliezer has not given my two points sufficient consideration. By all means, send hyperlinks that you find relevant my way—I would be happy to be proven wrong.
Regarding your first point, I’m pretty sure Eliezer does not expect to solve FAI by himself. Part of the reason for creating LW was to train/recruit potential FAI researchers, and there are also plenty of Ph.D. students among SIAI visiting fellows.
Regarding the second point, do you want nobody to start researching FAI until AGI is within reach?
Right, but the historical precedent for an amateur scientist even being at all involved in a substantial scientific breakthrough over the past 50 years is very weak.
Also, my confidence in Eliezer’s ability to train/recruit potential FAI researchers has been substantially diminished for the reasons that I give in Existential Risk and Public Relations. I personally would be interested in working with Eliezer if he appeared to me to be well grounded. My private correspondence with Eliezer and his comments have given me a very strong impression that I would find him too difficult to work with to be able to do productive FAI research with him.
No. I think that it would be worthwhile for somebody to do FAI research in line with Vladimir Nesov’s remarks here and here.
But I maintain that the probability of success is very small and that the only justification for doing it is the possibility of enormous returns. If people had established an institute for the solution of Fermat’s Last Theorem in the 1800s, the chances of anybody there playing a decisive role in the solution of Fermat’s Last Theorem would be very small. I view the situation with FAI as analogous.
Hold on—there are two different definitions of the word “amateur” that could apply here, and they lead to very different conclusions. The definition I think of first is that an amateur at something is someone who doesn’t get paid for doing it, as opposed to a professional who makes a living at it. By this definition, amateurs rarely achieve anything, and if they do, they usually stop being amateurs. But Eliezer’s full-time occupation is writing, thinking, and talking about FAI and related topics, so by this definition, he isn’t an amateur (regardless of whether or not you think he’s qualified for that occupation).
The other definition of “amateur scientist” would be “someone without a PhD”. This definition Eliezer does fit, but by this definition, the amateurs have a pretty solid record. And if you narrow it down to computer software, the amateurs have achieved more than the PhDs have!
I feel like you’ve taken the connotations of the first definition and unknowingly and wrongly transferred them to the second definition.
Okay, so, I agree with some of what you say above. I think I should have been more precise.
A claim of the type “Eliezer is likely to build a Friendly AI” requires (at least in part) a supporting claim of the type “Eliezer is in group X where people in group X are likely to build a Friendly AI.” Even if one finds such a group X, this may not be sufficient because Eliezer may belong to some subgroup of X which is disproportionately unlikely to build a Friendly AI. But one at least has to be able to generate such a group X.
At present I see no group X that qualifies.
1. Taking X to be “humans in the developed world” doesn’t work because the average member of X is extremely unlikely to build a Friendly AI.
2. Taking X to be “people with PhDs in a field related to artificial intelligence” doesn’t work because Eliezer doesn’t have a PhD in artificial intelligence.
3. Taking X to be “programmers” doesn’t work because Eliezer is not a programmer.
4. Taking X to be “people with very high IQ” is a better candidate, but still doesn’t yield a very high probability estimate because very high IQ is not very strongly correlated with technological achievement.
5. Taking X to be “bloggers about rationality” doesn’t work because there’s very little evidence that being a blogger about rationality is correlated with skills conducive to building a Friendly AI.
Which suitable group X do you think that Eliezer falls into?
How about “people who have publically declared an intention to try to build an FAI”? That seems like a much more relevant reference class, and it’s tiny. (I’m not sure how tiny, exactly, but it’s certainly smaller than 10^3 people right now) And if someone else makes a breakthrough that suddenly brings AGI within reach, they’ll almost certainly choose to recruit help from that class.
I agree that the class that you mention is a better candidate than the ones that I listed. However:
I find it fairly likely that the class will expand dramatically if there’s a breakthrough that brings AGI within reach.
Announcing interest in FAI does not entail having the skills necessary to collaborate with the people working on an AGI to make it Friendly.
In addition to these points, there’s a factor which makes Eliezer less qualified than the usual member of the class, namely his public relations difficulties. As he says here “I feel like I’m being held to an absurdly high standard … like I’m being asked to solve PR problems that I never signed up for.” As a matter of reality, PR matters in this world. If there was a breakthrough that prompted a company like IBM to decide to build an AGI, I have difficulty imagining them recruiting Eliezer, the reason being that Eliezer says things that sound strange and is far out of the mainstream. However, of course:
(i) I could imagine SIAI’s public relations improving substantially in the future—this would be good and would raise the chances of Eliezer being able to work with the researchers who build an AGI.
(ii) There may of course be other factors which make Eliezer more likely than other members of the class to be instrumental to building a Friendly AI.
Despite factors (i) and (ii), putting all of the information that I have together, my estimate of 10^(-9) still feels about right to me. I’d be happy to continue trading information with you with a view toward syncing up our probabilities if you’re so inclined.
I should hope not! If that happens, it means the person who made the breakthrough released it to the public. That would be a huge mistake, because it would greatly increase the chances of an unfriendly AI being built before a friendly one.
That’s only because you said it in public and aren’t willing to appear inconsistent. You still haven’t decomposed this into manageable pieces with numbers. And since we’ve already seen that you wrote the bottom line first, we would have strong reason to not trust those numbers if you did.
Two points:
It seems very likely to me that there will be a string of breakthroughs leading to AGI and that it will gradually become clear to people that they should be thinking about friendliness issues.
Even if there’s a single crucial breakthrough, I find it fairly likely that the person who makes it will not have friendliness concerns in mind.
I believe that the human brain is extremely poorly calibrated for determining probabilities through the explicit process that you describe and that the human brain’s intuition is often more reliable for such purposes. My attitude is in line with Holden’s comments 14 and 16 on the GiveWell Singularity Summit thread.
In line with the last two paragraphs of one of my earlier comments, I find your quickness to assume that my thinking on these matters stems from motivated cognition disturbing. Of course, I may be exhibiting motivated cognition, but the same is true of you, and your ungrounded confidence in your superiority to me is truly unsettling. As such, I will cease to communicate further with you unless you resolve to stop confidently asserting that I’m exhibiting motivated cognition.
P(SIAI will be successful) may be smaller than 10^-(3^^^^3)!
I don’t think that’s the right way to escape from a Pascal’s mugging. In the case of the SIAI, there isn’t really clear evidence that the organisation is having any positive effect—let alone SAVING THE WORLD. When the benefit could plausibly be small, zero—or indeed negative—one does not need to invoke teeny tiny probabilities to offset it.
Upvoted twice for the “Two points”. Downvoted once for the remainder of the comment.
Well, actually, I’m pretty sure the second point has a serious typo. Maybe I should flip that vote.
You are so concerned about the possibility of failure that you want to slow down research, publication and progress in the field—in order to promote research into safety?
Do you think all progress should be slowed down—or just progress in this area?
The costs of stupidity are a million road deaths a year, and goodness knows how many deaths in hospitals. Intelligence would have to be pretty damaging to outweigh that.
There is an obvious good associated with publication—the bigger the concentration of knowledge about intelligent machines in one place, the greater the wealth inequality that is likely to result, and the harder it would be for the rest of society to deal with a dominant organisation. Spreading knowledge helps spread out the power—which reduces the chance of any one group of people becoming badly impoverished. Such altruistic measures may help to prevent a bloody revolution from occurring.
What are we supposed to infer from that? That if you add an amateur scientist to a group of PhDs, that would substantially decrease their chance of making a breakthrough?
SIAI held a 3-day decision theory workshop in March that I attended along with Stuart Armstrong and Gary Drescher as outside guests. I feel pretty safe in saying that none of us found Eliezer particularly difficult to work with. I wonder if perhaps you’re generalizing from one example here.
Do you also think it would be worthwhile for somebody to try to build an organization to do FAI research? If so, who do you think should be doing that, if not Eliezer and his supporters? Or is your position more like cousin_it’s, namely that FAI research should just be done by individuals on their free time for now?
No, certainly not. I just don’t see much evidence that Eliezer is presently adding value to Friendly AI research. I think he could be doing more to reduce existential risk if he were operating under different assumptions.
Of course you could be right here, but the situation is symmetric; the same could be the case for you, Stuart Armstrong and Gary Drescher. Keep in mind that there’s a strong selection effect here—if you’re spending time with Eliezer you’re disproportionately likely to be well suited to working with Eliezer, and people who have difficulty working with Eliezer are disproportionately unlikely to be posting on Less Wrong or meeting with Eliezer.
My intuition is that there are a lot of good potential FAI researchers who would not feel comfortable working with Eliezer given his current disposition, but I may be wrong.
Quite possibly it’s a good thing for Eliezer and his supporters to be building an organization to do FAI research. On the other hand maybe cousin_it’s position is right. I have a fair amount of uncertainty on this point.
The claim that I’m making is quite narrow: that it would be good for the cause of existential risk reduction if Eliezer seriously considered the possibility that he’s greatly overestimated his chances of building a Friendly AI.
I’m not saying that it’s a bad thing to have an organization like SIAI. I’m not saying that Eliezer doesn’t have a valuable role to serve within SIAI. I’m reminded of Robin Hanson’s Against Disclaimers though I don’t feel comfortable with his condescending tone and am not thinking of you in that light :-).
This topic seems important enough that you should try to figure out why your intuition says that. I’d be interested in hearing more details about why you think a lot of good potential FAI researchers would not feel comfortable working with Eliezer. And in what ways do you think he could improve his disposition?
My reading of this is that before you corresponded privately with Eliezer, you were
• interested in personally doing FAI research
• assigned high enough probability to Eliezer’s success to consider collaborating with him
And afterward, you became
• no longer interested in doing FAI research
• massively decreased your estimate of Eliezer’s chance of success
Is this right? If so, I wonder what he could have said that made you change your mind like that. I guess either he privately came off as much less competent than he appeared in the public writings that drew him to your attention in the first place (which seems rather unlikely), or you took his response as some sort of personal affront and responded irrationally.
So, the situation is somewhat different than the one that you describe. Some points of clarification.
•I first came across Overcoming Bias in 2008. Eliezer was recommended to me by a friend who I respect a great deal. My reaction to the first postings that I read by Eliezer was strong discomfort with his apparent grandiosity and self-absorption. This discomfort was sufficiently strong for me to lose interest despite my friend’s endorsement.
•I started reading Less Wrong in earnest in the beginning of 2010. This made it clear to me that Eliezer has a lot to offer and that it was unfortunate that I had been pushed away by my initial reaction.
•I never assigned a very high probability to Eliezer making a crucial contribution to an FAI research project. My thinking was that the enormous positive outcome associated with success might be sufficiently great to justify the project despite the small probability.
•I didn’t get much of a chance to correspond privately with Eliezer at all. He responded to a couple of my messages with one line dismissive responses and then stopped responding to my subsequent messages. Naturally this lowered the probability that I assigned to being able to collaborate with him. This also lowered my confidence in his ability to attract collaborators in general.
•If Eliezer showed strong ability to attract and work well with collaborators (including elite academics who are working on artificial intelligence research) then I would find it several orders of magnitude more likely that he would make a crucial contribution to an FAI research project. For concreteness I’ll throw out the number 10^(-6).
•I feel that the world is very complicated and that randomness plays a very large role. This leads me to assign a very small probability to the proposition that any given individual will play a crucial role in eliminating existential risk.
•I freely acknowledge that I may be influenced by emotional factors. I make an honest effort at being level headed and sober but as I mention elsewhere, my experience posting on Less Wrong has been emotionally draining. I find that I become substantially less rational when people assume that my motives are impure (some sort of self-fulfilling prophecy).
You may notice that of my last four posts, the first pair was considerably more impartial than the second pair. (This is reflected in the fact that the first pair was upvoted more than the second pair.) My subjective perception is that I started out thinking quite carefully and became less rational as I read and responded to hostile commentators.
I’d be really interested to know which posts these were, because it would help me to distinguish between the following interpretations:
(1) First impressions really do matter: even though you and I are probably very similar in many respects, we have different opinions of Eliezer simply because in the first posts of his I read, he sounded more like a yoga instructor than a cult leader; whereas perhaps the first thing you read was some post where his high estimation of his abilities relative to the rest of humanity was made explicit, and you didn’t have the experience of his other writings to allow you to “forgive” him for this social transgression.
(2) We have different personalities, which cause us to interpret people’s words differently: you and I read more or less the same kind of material first, but you just interpreted it as “grandiose” whereas I didn’t.
What’s interesting in any case is that I’m not sure that I actually disagree with you all that much about Eliezer having a small chance of success (though I think you quantify it incorrectly with numbers like 10^(-9) or 10^(-6) -- these are way too small). Where we differ seems to be in the implications we draw from this. You appear to believe that Eliezer and SIAI are doing something importantly wrong, that could be fixed by means of a simple change of mindset, and that they shouldn’t be supported until they make this change. By contrast, my interpretation is that this is an extremely difficult problem, that SIAI is basically the first organization that has begun to make a serious attempt to address it, and that they are therefore worthy of being supported so that they can increase their efforts in the directions they are currently pursuing and potentially have a larger impact than they otherwise would.
I’ve been meaning to ask you: given your interest in reducing existential risk, and your concerns about SIAI’s transparency and their general strategy, have you considered applying to the Visiting Fellows program? That would be an excellent way not only to see what it is they do up close, but also to discuss these very issues in person at length with the people involved in SIAI strategy—which, in my experience, they are very interested in doing, even with short-term visitors.
Right, so the first posts that I came across were Eliezer’s Coming of Age posts which I think are unrepresentatively self absorbed. So I think that the right interpretation is the first that you suggest.
Since I made my top level posts, I’ve been corresponding with Carl Shulman who informed me of some good things that SIAI has been doing that have altered my perception of the institution. I think that SIAI may be worthy of funding.
Regardless as to the merits of SIAI’s research and activities, I think that in general it’s valuable to promote norms of Transparency and Accountability. I would certainly be willing to fund SIAI if it were strongly recommended by a highly credible external charity evaluator like GiveWell. Note also a comment which I wrote in response to Jasen.
I would like to talk more about these things—would you like to share email addresses? PM me if so.
At this point I worry that I’ve alienated the SIAI people to such an extent that they might not be happy to have me. But I’d certainly be willing if they’re favorably disposed toward me.
I’ll remark that back in December after reading Anna Salamon’s posting on the SIAI Visiting Fellows program I did send Anna Salamon a long email expressing some degree of interest and describing some of my concerns without receiving a response. I now find it most plausible that she just forgot about it and that I should have tried again, but maybe you can understand from this how I got the impression that becoming an SIAI Visiting Fellow was not a strong option for me.
Done.
As it happens, the same thing happened to me; it turned out that my initial message had been caught in a spam filter. I eventually ended up visiting for two weeks, and highly recommend the experience.
This, along with your other estimate of 10^(-9), implies that your probability for Eliezer being able to eventually attract and work well with collaborators is currently 1/1000. Does that really seem reasonable to you (would you be willing to bet at those odds?), given other evidence besides your private exchange with Eliezer? Such as:
• Eliezer already had a close collaborator, namely Marcello
• SIAI has successfully attracted many visiting fellows
• SIAI has successfully attracted top academics to speak at their Singularity Summit
• Eliezer is currently writing a book on rationality, so presumably he isn’t actively trying to recruit collaborators at the moment
• Other people’s reports of not finding Eliezer particularly difficult to work with
It seems to me that rationally updating on Eliezer’s private comments couldn’t have resulted in such a low probability. So I think a more likely explanation is that you were offended by the implications of Eliezer’s dismissive attitude towards your comments.
(Although, given Eliezer’s situation, it would probably be a good idea for him to make a greater effort to avoid offending potential supporters, even if he doesn’t consider them to be viable future collaborators.)
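The arithmetic behind the 1/1000 reading above, assuming (as the comment implicitly does) that success without attracting collaborators contributes negligibly to the overall estimate:

```python
p_success_overall = 1e-9              # the overall estimate given earlier
p_success_given_collaborators = 1e-6  # the estimate conditional on attracting collaborators
# Assuming success without collaborators is negligible,
# P(overall) ~= P(success | collaborators) * P(collaborators), so:
print(p_success_overall / p_success_given_collaborators)  # ~0.001, i.e. about 1/1000
```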
Your responses to me seem pretty level headed and sober. I hope that means you don’t find my comments too hostile.
Thinking it over, my estimate of 10^(-6) was way too high. This isn’t because of a lack of faith in Eliezer’s abilities in particular. I would recur to my above remark that I think that everybody has very small probability of succeeding in efforts to eliminate existential risk. We’re part of a complicated chaotic dynamical system and to a large degree our cumulative impact on the world is unintelligible and unexpected (because of a complicated network of unintended consequences, side effects, side effects of the side effects, etc.).
Glad to hear it :-)
I don’t think there’s any such consensus. Most of those involved know that they don’t know with very much confidence. For a range of estimates, see the bottom of:
http://alife.co.uk/essays/how_long_before_superintelligence/
For what it’s worth, in saying “way out of reach” I didn’t mean “chronologically far away,” I meant “far beyond the capacity of all present researchers.” I think it’s quite possible that AGI is just 50 years away.
I think that in the absence of plausibly relevant and concrete directions for AGI/FAI research, the chance of having any impact on the creation of an FAI through research is diminished by many orders of magnitude.
If there are plausibly relevant and concrete directions for AGI/FAI research then the situation is different, but I haven’t heard examples that I find compelling.
“Just 50 years?” Shane Legg’s explanation of why his mode is at 2025:
http://www.vetta.org/2009/12/tick-tock-tick-tock-bing/
If 15 years is more accurate—then things are a bit different.
Thanks for pointing this out. I don’t have the subject matter knowledge to make an independent assessment of the validity of the remarks in the linked article, but it makes points that I had not seen before.
I’d recur to CarlShulman’s remark about selection bias here. I look forward to seeing the results of the hypothetical Bostrom survey and the SIAI collection of all public predictions.
I agree. There’s still an issue of a lack of concrete directions of research at present but if 15 years is accurate then I agree with Eliezer that we should be in “crunch” mode (amassing resources specifically directed at future FAI research).
At any rate, most rationalists who have seriously considered the topic will agree that there is a large amount of probability mass 15 years into the future: large enough that even if the median estimate till AGI is 2050, we’re still in serious crunch time. The tails are fat in both directions. (This is important because it takes away a lot of the Pascalian flavoring that makes people (justifiably) nervous when reasoning about whether or not to donate to FAI projects: 15% chance of FOOM before 2020 just feels very different to a bounded rationalist than a .5% chance of FOOM before 2020.)
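One toy way to see how a distant median can still leave real mass 15 years out is a constant-hazard (exponential) arrival model. The model choice and the 2010 starting point are my assumptions for illustration only; heavier-tailed distributions would give different numbers.

```python
import math

# Constant-hazard model: median arrival 40 years out (i.e. around 2050,
# counting from roughly 2010). Both numbers are illustrative assumptions.
median_years_out = 40
hazard = math.log(2) / median_years_out
p_within_15_years = 1 - math.exp(-hazard * 15)
print(round(p_within_15_years, 2))  # ~0.23
```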
For what it’s worth, Shane Legg is a pretty reasonable fellow who understands that AGI isn’t automatically good, so we can at least rule out that his predictions are tainted by the thoughts of “Yay, technology is good, AGI is close!” that tend to cast doubt on the lack of bias in most AGI researchers’ and futurists’ predictions. He’s familiar with the field and indeed wrote the book on Machine Super Intelligence. I’m more persuaded by Legg’s arguments than most at SIAI, though, and although this isn’t a claim that is easily backed by evidence, the people at SIAI are really freakin’ good thinkers and are not to be disagreed with lightly.
I recur to my concern about selection effects. If it really is reasonable to place a large amount of probability mass 15 years into the future, why are virtually all mainstream scientists (including the best ones) apparently oblivious to this?
I do think that it’s sufficiently likely that the people in academia have erred that it’s worth my learning more about this topic and spending some time pressing people within academia on this point. But at present I assign a low probability (~5%) to the notion that the mainstream has missed something so striking as a large probability of a superhuman AI within 15 years.
I look forward to the hypothetical post.
As for your “not-so-fair response”—I seriously doubt that you know enough about academia to have any confidence in this view. I think that first hand experience is crucial to developing a good understanding of the strengths and weaknesses of academia.
(I say this with all due respect—I’ve read and admired some of your top level posts.)
I definitely don't have the necessary first-hand experience: I was reporting second-hand the impressions of a few people whom I respect but whose insights I've yet to verify. Sorry, I should have said that. I deserve some amount of shame for my lack of epistemic hygiene there.
Thanks! I really appreciate it. A big reason for the large number of comments I've been barfing up lately is a desire to improve my writing ability so that I'll be able to make more and better posts in the future.
How do you support this? Have you done a poll of mainstream scientists (or better yet, the 'best' ones)? I haven't seen a poll exactly, but when IEEE ran a special issue on the Singularity, the opinions were divided almost 50/50. It's also important to note that the IEEE editor was against the Singularity hypothesis, if I remember correctly, so there may be some bias there.
And whose opinions should we count exactly? Do we value the opinions of historians, economists, psychologists, chemists, geologists, astronomers, etc etc as much as we value the opinions of neuroscientists, computer scientists, and engineers?
I'd actually guess that at this point in time, a significant chunk of the intelligence of, say, Silicon Valley believes that the default Kurzweil/Moravec view is correct: AGI will arrive around when Moore's law makes it so.
200 years? There is wisdom in some skepticism, but that seems excessive. If you hold such a view, you should analyze it with respect to its fundamental support based on a predictive technological roadmap—not a general poll of scientists.
The semiconductor industry predicts its own future pretty accurately, but they don't invite biologists, philosophers or mathematicians to those meetings. Their roadmap, and Moore's law in general, is the most relevant guide for predicting AGI.
I base my own internal estimate on my own knowledge of the relevant fields—partly because this is so interesting and important that one should spend time investigating it.
I honestly suspect that most people who reject the possibility of near-term AGI have some deeper philosophical rejection.
If you are a materialist then intelligence is just another algorithm—something the brain does, and something we can build. It is an engineering problem and subject to the same future planning that we use for other engineering challenges.
I have not done a poll of mainstream scientists. Aside from Shane Legg, the one mainstream scientist I know of who has written on this subject is Scott Aaronson, in his article The Singularity Is Far.
I was not claiming that I have strong grounds for confidence in my impressions of expert views. But it is the case that if there's a significant probability that we'll see AGI over the next 15 years, mainstream scientists are apparently oblivious to this. They are not behaving as I would expect them to if they believed that AGI is 15 years off.
Can you give a reference?
This is interesting. I presume, then, that they believe that the software aspect of the problem is easy. Why do they believe this?
I have sufficiently little subject-matter knowledge that it's reasonable for me to take the outside view here and listen to people who seem to know what they're talking about, rather than attempting to do a detailed analysis myself.
Yes, from my reading of Shane Legg I think his prediction is a reasonable inside view and close to my own. But keep in mind it is also something of a popular view. Kurzweil's latest tome was probably not much news for most of its target demographic (Silicon Valley).
I've read Aaronson's post, and his counterview seems to boil down to generalized pessimism, which I don't find to be especially illuminating. However, he does raise a good point about solving subproblems first. Of course, Kurzweil spends a good portion of TSIN summarizing progress on sub-problems of reverse-engineering the brain.
There appears to be a good deal of neuroscience research going on right now, though perhaps not as much serious computational neuroscience and AGI research as we might like; still, it is proceeding. MIT's lab is no joke.
There is some sort of strange academic stigma, though, as Legg discusses on his blog—almost like a silent conspiracy against serious academic AGI research. Nonetheless, there appears to be no stigma against the precursors, which is where one needs to start anyway.
I do not think we can infer their views on this matter from their behavior. Given the general awareness of the meme, I suspect a good portion of academics have heard of it. That doesn't mean that anyone will necessarily change their behavior.
I agree this seems really odd, but then I think—how have I changed my behavior? And it dawns on me that this is a much more complex topic.
For the IEEE Singularity issue, just google something like "IEEE Singularity special issue". My internet is slow at the moment.
Because any software problem can become easy given enough hardware.
For example, we have enough neuroscience data to build reasonably good models of the low-level cortical circuits today. We also know the primary function of perhaps 5% of the higher-level pathways. For much of that missing 95% we have abstract theories, but are still very much in the dark.
With enough computing power we could skip tricky neuroscience or AGI research and just string together brain-ish networks built on our current cortical circuit models, throw them in a massive VR game-world sim that sets up increasingly difficult IQ puzzles as a fitness function, and use massive evolutionary search to get something intelligent.
The real solution may end up looking something like that, but will probably use much more human intelligence and be less wasteful of our computational intelligence.
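To make the shape of that brute-force proposal concrete, here is a minimal sketch of the kind of evolutionary loop described above. Everything specific in it is an illustrative assumption rather than a workable design: the flat parameter vector stands in for a "brain-ish" network built from cortical-circuit models, and evaluate_in_sim stands in for scoring an agent on increasingly difficult puzzles inside a simulated world.

import random

POPULATION_SIZE = 100   # candidate "brain-ish" networks per generation
GENOME_LENGTH = 1000    # stand-in for the parameters of one network
MUTATION_RATE = 0.05
GENERATIONS = 50

def random_genome():
    # A flat parameter vector standing in for one candidate network.
    return [random.gauss(0.0, 1.0) for _ in range(GENOME_LENGTH)]

def mutate(genome):
    # Perturb a small fraction of the parameters.
    return [g + random.gauss(0.0, 0.1) if random.random() < MUTATION_RATE else g
            for g in genome]

def evaluate_in_sim(genome):
    # Placeholder fitness: in the scheme described above this would be the
    # agent's score on IQ-style puzzles in the VR world. Here it is just
    # noise so the sketch runs on its own.
    return random.random()

population = [random_genome() for _ in range(POPULATION_SIZE)]
for generation in range(GENERATIONS):
    ranked = sorted(population, key=evaluate_in_sim, reverse=True)
    survivors = ranked[:POPULATION_SIZE // 2]                 # keep the fitter half
    population = survivors + [mutate(g) for g in survivors]   # refill by mutation

The sketch also shows why the approach is wasteful of computational intelligence: all of the work comes from sheer number of evaluations, not from any insight encoded in the representation or the fitness function.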
That would have been a pretty naive reply—since we know from public-key crypto that it is relatively easy to pose really difficult problems that require stupendous quantities of hardware to solve.
Technically true—I should have said “tractable” or “these types of” rather than “any”. That of course is what computational complexity is all about.
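To make the public-key point concrete, here is a toy sketch: constructing a hard problem (multiplying two eight-digit primes) is effectively instantaneous, while undoing it by naive trial division already takes seconds, and at the hundreds of digits used in real cryptography it becomes hopeless. The helper functions and the particular starting points are just for illustration, not how real crypto is implemented.

import time

def is_prime(k):
    # Naive primality test by trial division; fine for 8-digit numbers.
    if k < 2 or k % 2 == 0:
        return k == 2
    d = 3
    while d * d <= k:
        if k % d == 0:
            return False
        d += 2
    return True

def next_prime(k):
    while not is_prime(k):
        k += 1
    return k

p = next_prime(15_000_000)   # an 8-digit prime
q = next_prime(16_000_000)   # another 8-digit prime

t0 = time.perf_counter()
n = p * q                    # posing the hard problem: effectively instantaneous
t1 = time.perf_counter()

def factor_by_trial_division(n):
    # Recovering p and q the hard way; already takes seconds at 8 digits,
    # and is utterly infeasible at public-key sizes.
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return n, 1

t2 = time.perf_counter()
factors = factor_by_trial_division(n)
t3 = time.perf_counter()

print("multiply:", t1 - t0, "seconds")
print("factor:  ", factors, t3 - t2, "seconds")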
IMO, the biggest reason we have for thinking that the software will be fairly tractable is that we have an existing working model which we could always just copy—if the worst came to the worst.
Agreed, although it will be very difficult to copy it without understanding it in considerably more detail than we do at present. Copying without any understanding (whole brain scanning and emulation) is possible in theory, but the required engineering capability for that level of scanning technology seems pretty far into the future at the moment.
A poll of mainstream scientists sounds like a poor way to get an estimate of the date of arrival of “human-level” machine minds—since machine intelligence is a complex and difficult field—and so most outsiders will probably be pretty clueless.
Also, 15 years is still a long way off: people may think 5 years out, when they are feeling particularly far-sighted. Expecting major behavioral changes in response to something 15 years down the line seems a bit unreasonable.
Of course, neither Kurzweil nor Moravec thinks any such thing—both estimate that a computer with the same processing power as the human brain will arrive a considerable while before they think the required software will be developed.
The biggest optimist I have come across is Peter Voss. His estimate in 2009 was around 8 years (7:00 in). However, he obviously has something to sell, so maybe we should not pay too much attention to his opinion, given the signalling effects associated with confidence.
Optimist or pessimist?
In his own words: Increased Intelligence, Improved Life.
Eliezer addresses point 2 in the comments of the article you linked to in point 2. He’s also previously answered the questions of whether he believes he personally could solve FAI and how far out it is—here, for example.
Thanks for the references, both of which I had seen before.
Concerning Eliezer’s response to Scott Aaronson: I agree that there’s a huge amount of uncertainty about these things and it’s possible that AGI will develop unexpectedly, but don’t see how this points in the direction of AGI being likely to be developed within decades. It seems like one could have said the same thing that Eliezer is saying in 1950 or even 1800. See Holden’s remarks about noncontingency here.
As for A Premature Word on AI, Eliezer seems to be saying that:
(1) Even though the FAI problem is incredibly difficult, it's still worth working on because the returns attached to success would be enormous.
(2) Lots of people who have worked on AGI are mediocre.
(3) The field of AI research is not well organized.
Claim (1) might be true. I suspect that both of claims (2) and (3) are true. But by themselves these claims offer essentially no support for the idea that Eliezer is likely to be able to build a Friendly AI.
Edit: Should I turn my three comments starting here into a top level posting? I hesitate to do so in light of how draining I’ve found the process of making top level postings and especially reading and responding to the ensuing comments, but the topic may be sufficiently important to justify the effort.
What evidence do you have of this? One reason I doubt that it’s true is that Eliezer has been relatively good at admitting flaws in his ideas, even when doing so implied that building FAI is harder than he previously thought. I think you could reasonably argue that he’s still overconfident about his chances of successfully building FAI, but I don’t see how you get “unwillingness to seriously consider the possibility”.
Eliezer was not willing to engage with my estimate here. See his response. For the reasons that I point out here, I think that my estimate is well grounded.
Eliezer’s apparent lack of willingness to engage with me on this point does not immediately imply that he’s unwilling to seriously consider the possibility that I raise. But I do see it as strongly suggestive.
As I said in response to ThomBlake, I would be happy to be pointed to any of Eliezer's writings which support the idea that he has given serious consideration to the two points that I raised to explain my estimate.
Edit: I'll also add that, given the amount of evidence that I see against the proposition that Eliezer will build a Friendly AI, I have difficulty imagining how he could persist in holding his beliefs if he had given serious consideration to the possibility that he might be totally wrong. It seems very likely to me that if he had explored this line of thought, he would have a very different world view than he does at present.
Have you noticed that many (most?) commenters/voters seem to disagree with your estimate? That’s not necessarily strong evidence that your estimate is wrong (in the sense that a Bayesian superintelligence wouldn’t assign a probability as low as yours), but it does show that many reasonable and smart people disagree with your estimate even after seriously considering your arguments. To me that implies that Eliezer could disagree with your estimate even after seriously considering your arguments, so I don’t think his “persisting in holding his beliefs” offers much evidence for your position that Eliezer exhibited “unwillingness to seriously consider the possibility that he’s vastly overestimated his chances of building a Friendly AI”.
Yes. Of course, there’s a selection effect here—the people on LW are more likely to assign a high probability to the proposition that Eliezer will build a Friendly AI (whether or not there’s epistemic reason to do so).
The people outside of LW who I talk to on a regular basis have estimates in line with my own. I trust these people's judgment more than I trust LW posters' judgment, simply because I have much more information about their positive track records for making accurate real-world judgments than I do for the people on LW.
Yes, so I agree that in your epistemological state you should feel this way. I’m explaining why in my epistemological state I feel the way I do.
In your own epistemological state, you may be justified in thinking that Eliezer and other LWers are wrong about his chances of success, but even granting that, I still don’t see why you’re so sure that Eliezer has failed to “seriously consider the possibility that he’s vastly overestimated his chances of building a Friendly AI”. Why couldn’t he have, like the other LWers apparently did, considered the possibility and then (erroneously, according to your epistemological state) rejected it?
My experience reading Eliezer's writings is that he's very smart and perceptive. I find it implausible that somebody so smart and perceptive could miss something for which there is (in my view) so much evidence if he had engaged in such consideration. So I think that what you suggest could be the case, but I find it quite unlikely.