I think that it would be better for Eliezer and for the world at large if Eliezer seriously considered the possibility that he’s vastly overestimated his chances of building a Friendly AI.
We haven’t heard Eliezer say how likely he believes it is that he creates a Friendly AI. He has been careful not to discuss that subject. If he thought his chances of success were 0.5% then I would expect him to take exactly the same actions.
(ETA: With the insertion of ‘relative’ I suspect I would more accurately be considering the position you are presenting.)
Right, so in my present epistemological state I find it extremely unlikely that Eliezer will succeed in building a Friendly AI. I gave an estimate here which proved to be surprisingly controversial.
The main points that inform my thinking here are:
The precedent for people outside of the academic mainstream having mathematical/scientific breakthroughs in recent times is extremely weak. In my own field of pure math I know of only two people without PhDs in math or related fields who have produced something memorable in the last 70 years or so, namely Kurt Heegner and Martin Demaine. And even Heegner and Demaine are (relatively speaking) quite minor figures. It’s very common for self-taught amateur mathematicians to greatly underestimate the difficulty of substantive original mathematical research. I find it very likely that the same is true in virtually all scientific fields and thus have an extremely skeptical Bayesian prior against any proposition of the type “amateur intellectual X will solve major scientific problem Y.”
From having talked with computer scientists and AI researchers, I have a very strong impression that the consensus is that AGI is way out of reach at present. See for example points #1 and #5 of Scott Aaronson’s The Singularity is Far.
The fact that Eliezer does not appear to have seriously contemplated or addressed the two points above and their implications diminishes my confidence in his odds of success still further.
The fact that Eliezer does not appear to have seriously contemplated or addressed the two points above and their implications diminishes my confidence in his odds of success still further.
That you have this impression greatly diminishes my confidence in your intuitions on the matter. Are you seriously suggesting that Eliezer has not contemplated AI researchers’ opinions about AGI? Or that he hasn’t thought about just how much effort should go into a scientific breakthrough?
Someone please throw a few hundred relevant hyperlinks at this person.
I’m not saying that Eliezer has given my two points no consideration. I’m saying that Eliezer has not given my two points sufficient consideration. By all means, send hyperlinks that you find relevant my way—I would be happy to be proven wrong.
Regarding your first point, I’m pretty sure Eliezer does not expect to solve FAI by himself. Part of the reason for creating LW was to train/recruit potential FAI researchers, and there are also plenty of Ph.D. students among SIAI visiting fellows.
Regarding the second point, do you want nobody to start researching FAI until AGI is within reach?
Regarding your first point, I’m pretty sure Eliezer does not expect to solve FAI by himself. Part of the reason for creating LW was to train/recruit potential FAI researchers, and there are also plenty of Ph.D. students among SIAI visiting fellows.
Right, but the historical precedent for an amateur scientist even being at all involved in a substantial scientific breakthrough over the past 50 years is very weak.
Also, my confidence in Eliezer’s ability to train/recruit potential FAI researchers has been substantially diminished for the reasons that I give in Existential Risk and Public Relations. I personally would be interested in working with Eliezer if he appeared to me to be well grounded. The impressions that I’ve gotten from my private correspondence with Eliezer and from his comments have given me a very strong impression that I would find him too difficult to work with for me to be able to do productive FAI research with him.
Regarding the second point, do you want nobody to start researching FAI until AGI is within reach?
No. I think that it would be worthwhile for somebody to do FAI research in line with Vladimir Nesov’s remarks here and here.
But I maintain that the probability of success is very small and that the only justification for doing it is the possibility of enormous returns. If people had established an institute for the solution of Fermat’s Last Theorem in the 1800s, the chances of anybody there playing a decisive role in the solution of Fermat’s Last Theorem would be very small. I view the situation with FAI as analogous.
Right, but the historical precedent for an amateur scientist even being at all involved in a substantial scientific breakthrough over the past 50 years is very weak.
Hold on—there are two different definitions of the word “amateur” that could apply here, and they lead to very different conclusions. The definition I think of first is that an amateur at something is someone who doesn’t get paid for doing it, as opposed to a professional who makes a living at it. By this definition, amateurs rarely achieve anything, and if they do, they usually stop being amateurs. But Eliezer’s full-time occupation is writing, thinking, and talking about FAI and related topics, so by this definition, he isn’t an amateur (regardless of whether or not you think he’s qualified for that occupation).
The other definition of “amateur scientist” would be “someone without a PhD”. This definition Eliezer does fit, but by this definition, the amateurs have a pretty solid record. And if you narrow it down to computer software, the amateurs have achieved more than the PhDs have!
I feel like you’ve taken the connotations of the first definition and unknowingly and wrongly transferred them to the second definition.
Okay, so, I agree with some of what you say above. I think I should have been more precise.
A claim of the type “Eliezer is likely to build a Friendly AI” requires (at least in part) a supporting claim of the type “Eliezer is in group X where people in group X are likely to build a Friendly AI.” Even if one finds such a group X, this may not be sufficient because Eliezer may belong to some subgroup of X which is disproportionately unlikely to build a Friendly AI. But one at least has to be able to generate such a group X.
At present I see no group X that qualifies.
1. Taking X to be “humans in the developed world” doesn’t work because the average member of X is extremely unlikely to build a Friendly AI.
2. Taking X to be “people with PhDs in a field related to artificial intelligence” doesn’t work because Eliezer doesn’t have a PhD in artificial intelligence.
3. Taking X to be “programmers” doesn’t work because Eliezer is not a programmer.
4. Taking X to be “people with very high IQ” is a better candidate, but still doesn’t yield a very high probability estimate because very high IQ is not very strongly correlated with technological achievement.
5. Taking X to be “bloggers about rationality” doesn’t work because there’s very little evidence that being a blogger about rationality is correlated with skills conducive to building a Friendly AI.
Which suitable group X do you think that Eliezer falls into?
How about “people who have publicly declared an intention to try to build an FAI”? That seems like a much more relevant reference class, and it’s tiny. (I’m not sure how tiny, exactly, but it’s certainly smaller than 10^3 people right now.) And if someone else makes a breakthrough that suddenly brings AGI within reach, they’ll almost certainly choose to recruit help from that class.
I agree that the class that you mention is a better candidate than the ones that I listed. However:
I find it fairly likely that the class will expand dramatically if there’s a breakthrough that brings AGI within reach.
Announcing interest in FAI does not entail having the skills necessary to collaborate with the people working on an AGI to make it Friendly.
In addition to these points, there’s a factor which makes Eliezer less qualified than the usual member of the class, namely his public relations difficulties. As he says here, “I feel like I’m being held to an absurdly high standard … like I’m being asked to solve PR problems that I never signed up for.” As a matter of reality, PR matters in this world. If there were a breakthrough that prompted a company like IBM to decide to build an AGI, I have difficulty imagining them recruiting Eliezer, the reason being that Eliezer says things that sound strange and is far out of the mainstream. However, of course:
(i) I could imagine SIAI’s public relations improving substantially in the future—this would be good and would raise the chances of Eliezer being able to work with the researchers who build an AGI.
(ii) There may of course be other factors which make Eliezer more likely than other members of the class to be instrumental to building a Friendly AI.
Despite factors (i) and (ii), putting all of the information that I have together, my estimate of 10^(-9) still feels about right to me. I’d be happy to continue trading information with you with a view toward syncing up our probabilities if you’re so inclined.
I find it fairly likely that the class will expand dramatically if there’s a breakthrough that brings AGI within reach.
I should hope not! If that happens, it means the person who made the breakthrough released it to the public. That would be a huge mistake, because it would greatly increase the chances of an unfriendly AI being built before a friendly one.
Despite factors (i) and (ii), putting all of the information that I have together, my estimate of 10^(-9) still feels about right to me.
That’s only because you said it in public and aren’t willing to appear inconsistent. You still haven’t decomposed this into manageable pieces with numbers. And since we’ve already seen that you wrote the bottom line first, we would have strong reason to not trust those numbers if you did.
I should hope not! If that happens, it means the person who made the breakthrough released it to the public. That would be a huge mistake, because it would greatly increase the chances of an unfriendly AI being built before a friendly one.
Two points:
It seems very likely to me that there’s a string of breakthroughs which will lead to AGI and that it will gradually become clear to people that they should be thinking about friendliness issues.
Even if there’s a single crucial breakthrough, I find it fairly likely that the person who makes it will not have friendliness concerns in mind.
That’s only because you said it in public and aren’t willing to appear inconsistent. You still haven’t decomposed this into manageable pieces with numbers. And since we’ve already seen that you wrote the bottom line first, we would have strong reason to not trust those numbers if you did.
I believe that the human brain is extremely poorly calibrated for determining probabilities through the explicit process that you describe and that the human brain’s intuition is often more reliable for such purposes. My attitude is in line with Holden’s comments 14 and 16 on the GiveWell Singularity Summit thread.
In line with the last two paragraphs of one of my earlier comments, I find your quickness to assume that my thinking on these matters stems from motivated cognition disturbing. Of course, I may be exhibiting motivated cognition, but the same is true of you, and your ungrounded confidence in your superiority to me is truly unsettling. As such, I will cease to communicate further with you unless you resolve to stop confidently asserting that I’m exhibiting motivated cognition.
P(SIAI will be successful) may be smaller than 10^-(3^^^^3)!
I don’t think that’s the right way to escape from a Pascal’s mugging. In the case of the SIAI, there isn’t really clear evidence that the organisation is having any positive effect—let alone SAVING THE WORLD. When the benefit could plausibly be small, zero—or indeed negative—one does not need to invoke teeny tiny probabilities to offset it.
If that happens, it means the person who made the breakthrough released it to the public. That would be a huge mistake, because it would greatly increase the chances of an unfriendly AI being built before a friendly one.
You are so concerned about the possibility of failure that you want to slow down research, publication and progress in the field—in order to promote research into safety?
Do you think all progress should be slowed down—or just progress in this area?
The costs of stupidity are a million road deaths a year, and goodness knows how many deaths in hospitals. Intelligence would have to be pretty damaging to outweigh that.
There is an obvious good associated with publication—the bigger the concentration of knowledge about intelligent machines there is in one place, the greater the wealth inequality that is likely to result, and the harder it would be for the rest of society to deal with a dominant organisation. Spreading knowledge helps spread out the power—which reduces the chance of any one group of people becoming badly impoverished. Such altruistic measures may help to prevent a bloody revolution from occurring.
Right, but the historical precedent for an amateur scientist even being at all involved in a substantial scientific breakthrough over the past 50 years is very weak.
What are we supposed to infer from that? That if you add an amateur scientist to a group of PhDs, that would substantially decrease their chance of making a breakthrough?
The impressions that I’ve gotten from my private correspondence with Eliezer and from his comments have given me a very strong impression that I would find him too difficult to work with for me to be able to do productive FAI research with him.
SIAI held a 3-day decision theory workshop in March that I attended along with Stuart Armstrong and Gary Drescher as outside guests. I feel pretty safe in saying that none of us found Eliezer particularly difficult to work with. I wonder if perhaps you’re generalizing from one example here.
I think that it would be worthwhile for somebody to do FAI research in line with Vladimir Nesov’s remarks here and here.
Do you also think it would be worthwhile for somebody to try to build an organization to do FAI research? If so, who do you think should be doing that, if not Eliezer and his supporters? Or is your position more like cousin_it’s, namely that FAI research should just be done by individuals on their free time for now?
What are we supposed to infer from that? That if you add an amateur scientist to a group of PhDs, that would substantially decrease their chance of making a breakthrough?
No, certainly not. I just don’t see much evidence that Eliezer is presently adding value to Friendly AI research. I think he could be doing more to reduce existential risk if he were operating under different assumptions.
SIAI held a 3-day decision theory workshop in March that I attended along with Stuart Armstrong and Gary Drescher as outside guests. I feel pretty safe in saying that none of us found Eliezer particularly difficult to work with. I wonder if perhaps you’re generalizing from one example here.
Of course you could be right here, but the situation is symmetric: the same could be the case for you, Stuart Armstrong and Gary Drescher. Keep in mind that there’s a strong selection effect here—if you’re spending time with Eliezer you’re disproportionately likely to be well suited to working with Eliezer, and people who have difficulty working with Eliezer are disproportionately unlikely to be posting on Less Wrong or meeting with Eliezer.
My intuition is that there are a lot of good potential FAI researchers who would not feel comfortable working with Eliezer given his current disposition, but I may be wrong.
Do you also think it would be worthwhile for somebody to try to build an organization to do FAI research? If so, who do you think should be doing that, if not Eliezer and his supporters? Or is your position more like cousin_it’s, namely that FAI research should just be done by individuals on their free time for now?
Quite possibly it’s a good thing for Eliezer and his supporters to be building an organization to do FAI research. On the other hand maybe cousin_it’s position is right. I have a fair amount of uncertainty on this point.
The claim that I’m making is quite narrow: that it would be good for the cause of existential risk reduction if Eliezer seriously considered the possibility that he’s greatly overestimated his chances of building a Friendly AI.
I’m not saying that it’s a bad thing to have an organization like SIAI. I’m not saying that Eliezer doesn’t have a valuable role to serve within SIAI. I’m reminded of Robin Hanson’s Against Disclaimers though I don’t feel comfortable with his condescending tone and am not thinking of you in that light :-).
My intuition is that there are a lot of good potential FAI researchers who would not feel comfortable working with Eliezer given his current disposition, but I may be wrong.
This topic seems important enough that you should try to figure out why your intuition says that. I’d be interested in hearing more details about why you think a lot of good potential FAI researchers would not feel comfortable working with Eliezer. And in what ways do you think he could improve his disposition?
I personally would be interested in working with Eliezer if he appeared to me to be well grounded. The impressions that I’ve gotten from my private correspondence with Eliezer and from his comments have given me a very strong impression that I would find him too difficult to work with for me to be able to do productive FAI research with him.
My reading of this is that before you corresponded privately with Eliezer, you
were interested in personally doing FAI research, and
assigned a high enough probability to Eliezer’s success to consider collaborating with him,
but that after the correspondence you
massively decreased your estimate of Eliezer’s chance of success.
Is this right? If so, I wonder what he could have said that made you change your mind like that. I guess either he privately came off as much less competent than he appeared in the public writings that drew him to your attention in the first place (which seems rather unlikely), or you took his response as some sort of personal affront and responded irrationally.
So, the situation is somewhat different than the one that you describe. Some points of clarification.
•I first came across Overcoming Bias in 2008. Eliezer was recommended to me by a friend who I respect a great deal. My reaction to the first postings that I read by Eliezer was strong discomfort with his apparent grandiosity and self absorption. This discomfort was sufficiently strong for me to lose interest despite my friend’s endorsement.
•I started reading Less Wrong in earnest in the beginning of 2010. This made it clear to me that Eliezer has a lot to offer and that it was unfortunate that I had been pushed away by my initial reaction.
•I never assigned a very high probability to Eliezer making a crucial contribution to an FAI research project. My thinking was that the enormous positive outcome associated with success might be sufficiently great to justify the project despite the small probability.
•I didn’t get much of a chance to correspond privately with Eliezer at all. He responded to a couple of my messages with one line dismissive responses and then stopped responding to my subsequent messages. Naturally this lowered the probability that I assigned to being able to collaborate with him. This also lowered my confidence in his ability to attract collaborators in general.
•If Eliezer showed strong ability to attract and work well with collaborators (including elite academics who are working on artificial intelligence research) then I would find it several orders of magnitude more likely that he would make a crucial contribution to an FAI research project. For concreteness I’ll throw out the number 10^(-6).
•I feel that the world is very complicated and that randomness plays a very large role. This leads me to assign a very small probability to the proposition that any given individual will play a crucial role in eliminating existential risk.
•I freely acknowledge that I may be influenced by emotional factors. I make an honest effort at being level headed and sober but as I mention elsewhere, my experience posting on Less Wrong has been emotionally draining. I find that I become substantially less rational when people assume that my motives are impure (some sort of self-fulfilling prophecy).
You may notice that of my last four posts, the first pair was considerably more impartial than the second pair. (This is reflected in the fact that the first pair was upvoted more than the second pair.) My subjective perception is that I started out thinking quite carefully and became less rational as I read and responded to hostile commentators.
I first came across Overcoming Bias in 2008. Eliezer was recommended to me by a friend who I respect a great deal. My reaction to the first postings that I read by Eliezer was strong discomfort with his apparent grandiosity and self absorption. This discomfort was sufficiently strong for me to lose interest despite my friend’s endorsement.
I’d be really interested to know which posts these were, because it would help me to distinguish between the following interpretations:
(1) First impressions really do matter: even though you and I are probably very similar in many respects, we have different opinions of Eliezer simply because in the first posts of his I read, he sounded more like a yoga instructor than a cult leader; whereas perhaps the first thing you read was some post where his high estimation of his abilities relative to the rest of humanity was made explicit, and you didn’t have the experience of his other writings to allow you to “forgive” him for this social transgression.
(2) We have different personalities, which cause us to interpret people’s words differently: you and I read more or less the same kind of material first, but you just interpreted it as “grandiose” whereas I didn’t.
What’s interesting in any case is that I’m not sure that I actually disagree with you all that much about Eliezer having a small chance of success (though I think you quantify it incorrectly with numbers like 10^(-9) or 10^(-6) -- these are way too small). Where we differ seems to be in the implications we draw from this. You appear to believe that Eliezer and SIAI are doing something importantly wrong, that could be fixed by means of a simple change of mindset, and that they shouldn’t be supported until they make this change. By contrast, my interpretation is that this is an extremely difficult problem, that SIAI is basically the first organization that has begun to make a serious attempt to address it, and that they are therefore worthy of being supported so that they can increase their efforts in the directions they are currently pursuing and potentially have a larger impact than they otherwise would.
I’ve been meaning to ask you: given your interest in reducing existential risk, and your concerns about SIAI’s transparency and their general strategy, have you considered applying to the Visiting Fellows program? That would be an excellent way not only to see what it is they do up close, but also to discuss these very issues in person at length with the people involved in SIAI strategy—which, in my experience, they are very interested in doing, even with short-term visitors.
I’d be really interested to know which posts these were, because it would help me to distinguish between the following interpretations:
Right, so the first posts that I came across were Eliezer’s Coming of Age posts which I think are unrepresentatively self absorbed. So I think that the right interpretation is the first that you suggest.
What’s interesting in any case is that I’m not sure that I actually disagree with you all that much about Eliezer having a small chance of success (though I think you quantify it incorrectly with numbers like 10^(-9) or 10^(-6) -- these are way too small). Where we differ seems to be in the implications we draw from this. You appear to believe that Eliezer and SIAI are doing something importantly wrong, that could be fixed by means of a simple change of mindset, and that they shouldn’t be supported until they make this change. By contrast, my interpretation is that this is an extremely difficult problem, that SIAI is basically the first organization that has begun to make a serious attempt to address it, and that they are therefore worthy of being supported so that they can increase their efforts in the directions they are currently pursuing and potentially have a larger impact than they otherwise would.
Since I made my top level posts, I’ve been corresponding with Carl Shulman who informed me of some good things that SIAI has been doing that have altered my perception of the institution. I think that SIAI may be worthy of funding.
Regardless of the merits of SIAI’s research and activities, I think that in general it’s valuable to promote norms of Transparency and Accountability. I would certainly be willing to fund SIAI if it were strongly recommended by a highly credible external charity evaluator like GiveWell. Note also a comment which I wrote in response to Jasen.
I would like to talk more about these things—would you like to share email addresses? PM me if so.
I’ve been meaning to ask you: given your interest in reducing existential risk, and your concerns about SIAI’s transparency and their general strategy, have you considered applying to the Visiting Fellows program? That would be an excellent way not only to see what it is they do up close, but also to discuss these very issues in person at length with the people involved in SIAI strategy—which, in my experience, they are very interested in doing, even with short-term visitors.
At this point I worry that I’ve alienated the SIAI people to such an extent that they might not be happy to have me. But I’d certainly be willing if they’re favorably disposed toward me.
I’ll remark that back in December after reading Anna Salamon’s posting on the SIAI Visiting Fellows program I did send Anna Salamon a long email expressing some degree of interest and describing some of my concerns without receiving a response. I now find it most plausible that she just forgot about it and that I should have tried again, but maybe you can understand from this how I got the impression that becoming an SIAI Visiting Fellow was not a strong option for me.
I would like to talk more about these things—would you like to share email addresses? PM me if so.
Done.
I’ll remark that back in December after reading Anna Salamon’s posting on the SIAI Visiting Fellows program I did send Anna Salamon a long email expressing some degree of interest and describing some of my concerns without receiving a response. I now find it most plausible that she just forgot about it and that I should have tried again, but maybe you can understand from this how I got the impression that becoming an SIAI Visiting Fellow was not a strong option for me.
As it happens, the same thing happened to me; it turned out that my initial message had been caught in a spam filter. I eventually ended up visiting for two weeks, and highly recommend the experience.
If Eliezer showed strong ability to attract and work well with collaborators (including elite academics who are working on artificial intelligence research) then I would find it several orders of magnitude more likely that he would make a crucial contribution to an FAI research project. For concreteness I’ll throw out the number 10^(-6).
This, along with your other estimate of 10^(-9), implies that your probability for Eliezer being able to eventually attract and work well with collaborators is currently 1/1000. Does that really seem reasonable to you (would you be willing to bet at those odds?), given other evidence besides your private exchange with Eliezer? Such as:
Eliezer already had a close collaborator, namely Marcello
SIAI has successfully attracted many visiting fellows
SIAI has successfully attracted top academics to speak at their Singularity Summit
Eliezer is currently writing a book on rationality, so presumably he isn’t actively trying to recruit collaborators at the moment
Other people’s reports of not finding Eliezer particularly difficult to work with
It seems to me that rationally updating on Eliezer’s private comments couldn’t have resulted in such a low probability. So I think a more likely explanation is that you were offended by the implications of Eliezer’s dismissive attitude towards your comments.
(Although, given Eliezer’s situation, it would probably be a good idea for him to make a greater effort to avoid offending potential supporters, even if he doesn’t consider them to be viable future collaborators.)
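For readers tracking the arithmetic, the inference above can be written out explicitly. It treats the 10^(-6) figure as conditional on Eliezer attracting strong collaborators, which is an assumption about how the two estimates are meant to relate:

$$
P(\text{FAI}) \;\ge\; P(\text{FAI}\mid\text{collaborators})\,P(\text{collaborators})
\quad\Longrightarrow\quad
P(\text{collaborators}) \;\lesssim\; \frac{10^{-9}}{10^{-6}} \;=\; 10^{-3}.
$$

(The inequality becomes an approximate equality if the chance of success without such collaborators is negligible, which is how the 1/1000 figure is obtained.)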
My subjective perception is that I started out thinking quite carefully and became less rational as I read and responded to hostile commentators.
Your responses to me seem pretty level headed and sober. I hope that means you don’t find my comments too hostile.
This, along with your other estimate of 10^(-9), implies that your probability for Eliezer being able to eventually attract and work well with collaborators is currently 1/1000. Does that really seem reasonable to you (would you be willing to bet at those odds?)
Thinking it over, my estimate of 10^(-6) was way too high. This isn’t because of a lack of faith in Eliezer’s abilities in particular. I would recur to my above remark that I think that everybody has very small probability of succeeding in efforts to eliminate existential risk. We’re part of a complicated chaotic dynamical system and to a large degree our cumulative impact on the world is unintelligible and unexpected (because of a complicated network of unintended consequences, side effects, side effects of the side effects, etc.).
Your responses to me seem pretty level headed and sober. I hope that means you don’t find my comments too hostile.
From having talked with computer scientists and AI researchers, I have a very strong impression that the consensus is that AGI is way out of reach at present. See for example points #1 and #5 of Scott Aaronson’s The Singularity is Far.
I don’t think there’s any such consensus. Most of those involved know that they don’t know with very much confidence. For a range of estimates, see the bottom of:
For what it’s worth, in saying “way out of reach” I didn’t mean “chronologically far away,” I meant “far beyond the capacity of all present researchers.” I think it’s quite possible that AGI is just 50 years away.
I think that in the absence of plausibly relevant and concrete directions for AGI/FAI research, the chance of having any impact on the creation of an FAI through research is diminished by many orders of magnitude.
If there are plausibly relevant and concrete directions for AGI/FAI research then the situation is different, but I haven’t heard examples that I find compelling.
“Just 50 years?” Shane Legg’s explanation of why his mode is at 2025:
Thanks for pointing this out. I don’t have the subject matter knowledge to make an independent assessment of the validity of the remarks in the linked article, but it makes points that I had not seen before.
I’d recur to CarlShulman’s remark about selection bias here. I look forward to seeing the results of the hypothetical Bostrom survey and the SIAI collection of all public predictions.
If 15 years is more accurate—then things are a bit different.
I agree. There’s still an issue of a lack of concrete directions of research at present but if 15 years is accurate then I agree with Eliezer that we should be in “crunch” mode (amassing resources specifically directed at future FAI research).
I agree. There’s still an issue of a lack of concrete directions of research at present but if 15 years is accurate then I agree with Eliezer that we should be in “crunch” mode (amassing resources specifically directed at future FAI research).
At any rate, most rationalists who have seriously considered the topic will agree that there is a large amount of probability mass 15 years into the future: large enough that even if the median estimate till AGI is 2050, we’re still in serious crunch time. The tails are fat in both directions. (This is important because it takes away a lot of the Pascalian flavoring that makes people (justifiably) nervous when reasoning about whether or not to donate to FAI projects: 15% chance of FOOM before 2020 just feels very different to a bounded rationalist than a .5% chance of FOOM before 2020.)
Thanks for pointing this out. I don’t have the subject matter knowledge to make an independent assessment of the validity of the remarks in the linked article, but it makes points that I had not seen before.
For what it’s worth, Shane Legg is a pretty reasonable fellow who understands that AGI isn’t automatically good, so we can at least rule out that his predictions are tainted by the thoughts of “Yay, technology is good, AGI is close!” that tend to cast doubt on the lack of bias in most AGI researchers’ and futurists’ predictions. He’s familiar with the field and indeed wrote the book on Machine Super Intelligence. I’m more persuaded by Legg’s arguments than most at SIAI, though, and although this isn’t a claim that is easily backed by evidence, the people at SIAI are really freakin’ good thinkers and are not to be disagreed with lightly.
At any rate, most rationalists who have seriously considered the topic will agree that there is a large amount of probability mass 15 years into the future: large enough that even if the median estimate till AGI is 2050, we’re still in serious crunch time. The tails are fat in both directions.
I recur to my concern about selection effects. If it really is reasonable to place a large amount of probability mass 15 years into the future, why are virtually all mainstream scientists (including the best ones) apparently oblivious to this?
I do think that it’s sufficiently likely that the people in academia have erred that it’s worth my learning more about this topic and spending some time pressing people within academia on this point. But at present I assign a low probability (~5%) to the notion that the mainstream has missed something so striking as a large probability of a superhuman AI within 15 years.
Incidentally, I do think that decisive paradigm changing events are very likely to occur over the next 200 years and that this warrants focused effort on making sure that society is running as well as possible (as opposed to doing pure scientific research with the justification that it may pay off in 500 years).
A fair response to this requires a post that Less Wrong desperately needs to read: People Are Crazy, the World Is Mad. Unfortunately this requires that I convince Michael Vassar or Tom McCabe to write it. Thus, I am now on a mission to enlist the great power of Thomas McCabe.
(A not-so-fair response: you underestimate the extent to which academia is batshit insane just like nearly every individual in it, you overestimate the extent to which scientists ever look outside of their tiny fields of specialization, you overestimate the extent to which the most rational scientists are willing to put their reputations on the line by even considering much less accepting an idea as seemingly kooky as ‘human-level AI by 2035’, and you underestimate the extent to which the most rational scientists are starting to look at the possibility of AGI in the next 50 years (which amounts to non-trivial probability mass in the next 15). I guess I don’t know who the very best scientists are. (Dawkins and Tooby/Cosmides impress me a lot; Tooby was at the Summit. He signed a book that’s on my table top. :D ) Basically, I think you’re giving academia too much credit. These are all assertions, though; like I said, this response is not a fair one, but this way at least you can watch for a majoritarian bias in your thinking and a contrarian bias in my arguments.)
As for your “not-so-fair response”—I seriously doubt that you know enough about academia to have any confidence in this view. I think that first hand experience is crucial to developing a good understanding of the strengths and weaknesses of academia.
(I say this with all due respect—I’ve read and admired some of your top level posts.)
As for your “not-so-fair response”—I seriously doubt that you know enough about academia to have any confidence in this view. I think that first hand experience is crucial to developing a good understanding of the strengths and weaknesses of academia.
I definitely don’t have the necessary first-hand-experience: I was reporting second-hand the impressions of a few people who I respect but whose insights I’ve yet to verify. Sorry, I should have said that. I deserve some amount of shame for my lack of epistemic hygiene there.
(I say this with all due respect—I’ve read and admired some of your top level posts.)
Thanks! I really appreciate it. A big reason for the large amounts of comments I’ve been barfing up lately is a desire to improve my writing ability such that I’ll be able to make more and better posts in the future.
If it really is reasonable to place a large amount of probability mass 15 years into the future, why are virtually all mainstream scientists (including the best ones) apparently oblivious to this?
How do you support this? Have you done a poll of mainstream scientists (or better yet—the ‘best’ ones)? I haven’t seen a poll exactly, but when IEEE ran a special on the Singularity, the opinions were divided almost 50/50. It’s also important to note that the IEEE editor was against the Singularity hypothesis (if I remember correctly), so there may be some bias there.
And whose opinions should we count exactly? Do we value the opinions of historians, economists, psychologists, chemists, geologists, astronomers, etc etc as much as we value the opinions of neuroscientists, computer scientists, and engineers?
I’d actually guess that at this point in time, a significant chunk of the intelligence of say Silicon Valley believes that the default Kurzweil/Moravec view is correct—AGI will arrive around when Moore’s law makes it so.
200 years? There is wisdom in some skepticism, but that seems excessive. If you hold such a view, you should analyze it with respect to its fundamental support based on a predictive technological roadmap—not a general poll of scientists.
The semiconductor industry predicts its own future pretty accurately, but they don’t invite biologists, philosophers or mathematicians to those meetings. Their roadmap and Moore’s law in general are the most relevant for predicting AGI.
I base my own internal estimate on my own knowledge of the relevant fields—partly because this is so interesting and important that one should spend time investigating it.
I honestly suspect that most people who reject the possibility of near-term AGI have some deeper philosophical rejection.
If you are a materialist then intelligence is just another algorithm—something the brain does, and something we can build. It is an engineering problem and subject to the same future planning that we use for other engineering challenges.
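As a rough illustration of the “AGI arrives when Moore’s law makes it so” extrapolation discussed above, here is a minimal sketch. Every number in it (the ~10^16 ops/sec brain-equivalent figure, roughly in the Kurzweil/Moravec spirit, the 10^13 ops/sec starting point, and the two-year doubling time) is an assumed, illustrative parameter rather than a figure taken from this thread:

```python
# Back-of-the-envelope sketch of hardware-driven AGI timelines.
# All numbers are illustrative assumptions, not established facts.
BRAIN_OPS_PER_SEC = 1e16       # assumed "brain-equivalent" compute
DOUBLING_TIME_YEARS = 2.0      # assumed Moore's-law doubling time

ops = 1e13                     # assumed compute available to a large project in 2010
year = 2010.0

while ops < BRAIN_OPS_PER_SEC:
    ops *= 2
    year += DOUBLING_TIME_YEARS

print("brain-equivalent hardware around:", int(year))  # ~2030 on these assumptions
```

Changing the assumed brain-equivalent figure by a factor of a hundred shifts the answer by only about thirteen years at this doubling rate, which is part of why estimates of this kind are relatively insensitive to the exact figure chosen.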
How do you support this? Have you done a poll of mainstream scientists (or better yet—the ‘best’ ones)?
I have not done a poll of mainstream scientists. Aside from Shane Legg, the one mainstream scientist who I know of who has written on this subject is Scott Aaronson in his The Singularity Is Far article.
I was not claiming that I have strong grounds for confidence in my impressions of expert views. But it is the case that if there’s a significant probability that we’ll see AGI over the next 15 years, mainstream scientists are apparently oblivious to this. They are not behaving as I would expect them to if they believed that AGI is 15 years off.
I haven’t seen a poll exactly, but when IEEE ran a special on the Singularity, the opinions were divided almost 50/50. It’s also important to note that the IEEE editor was against the Singularity hypothesis (if I remember correctly), so there may be some bias there.
Can you give a reference?
I’d actually guess that at this point in time, a significant chunk of the intelligence of say Silicon Valley believes that the default Kurzweil/Moravec view is correct—AGI will arrive around when Moore’s law makes it so.
This is interesting. I presume then that they believe that the software aspect of the problem is easy. Why do they believe this?
200 years? There is wisdom in some skepticism, but that seems excessive. If you hold such a view, you should analyze it with respect to its fundamental support based on a predictive technological roadmap—not a general poll of scientists.
I have sufficiently little subject matter knowledge so that it’s reasonable for me to take the outside view here and listen to people who seem to know what they’re talking about rather than attempting to do a detailed analysis myself.
Aside from Shane Legg, the one mainstream scientist who I know of who has written on this subject is Scott Aaronson in his The Singularity Is Far article.
Yes, from my reading of Shane Legg I think his prediction is a reasonable inside view and close to my own. But keep in mind it is also something of a popular view. Kurzweil’s latest tome was probably not much new news for most of its target demographic (Silicon Valley).
I’ve read Aaronson’s post and his counterview seems to boil down to generalized pessimism, which I don’t find to be especially illuminating. However, he does raise the good point about solving subproblems first. Of course, Kurzweil spends a good portion of TSIN summarizing progress in sub-problems of reverse engineering the brain.
There appears to be a good deal of neuroscience research going on right now, but perhaps not nearly enough serious computational neuroscience and AGI research as we may like, but it is still proceeding. MIT’s lab is no joke.
There is some sort of strange academic stigma, though, as Legg discusses on his blog—almost like a silent conspiracy against serious academic AGI. Nonetheless, there appears to be no stigma against the precursors, which is where one needs to start anyway.
I was not claiming that I have strong grounds for confidence in my impressions of expert views. But it is the case that if there’s a significant probability that we’ll see AGI over the next 15 years, mainstream scientists are apparently oblivious to this. They are not behaving as I would expect them to if they believed that AGI is 15 years off.
I do not think we can infer their views on this matter based on their behaviour. Given the general awareness of the meme I suspect a good portion of academics in general have heard of it. That doesn’t mean that anyone will necessarily change their behavior.
I agree this seems really odd, but then I think—how have I changed my behavior? And it dawns on me that this is a much more complex topic.
For the IEEE singularity issue—just google it .. something like “IEEE Singularity special issue”. I’m having slow internet atm.
This is interesting. I presume then that they believe that the software aspect of the problem is easy. Why do they believe this?
Because any software problem can become easy given enough hardware.
For example, we have enough neuroscience data to build reasonably good models of the low level cortical circuits today. We also know the primary function of perhaps 5% of the higher level pathways. For much of that missing 95% we have abstract theories but are still very much in the dark.
With enough computing power we could skip tricky neuroscience or AGI research and just string together brain-ish networks built on our current cortical circuit models, throw them in a massive VR game-world sim that sets up increasingly difficult IQ puzzles as a fitness function, and use massive evolutionary search to get something intelligent.
The real solution may end up looking something like that, but will probably use much more human intelligence and be less wasteful of our computational intelligence.
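To make the “evolutionary search against a fitness function” idea above concrete, here is a toy sketch. The “genome” is just a vector of numbers and the “puzzle” is a stand-in fitness function; nothing here models cortical circuits, a VR world, or anything brain-like, only the shape of the search loop:

```python
# Toy evolutionary search: keep the fittest candidates, refill the population
# with mutated copies, and repeat. Purely illustrative.
import random

GENOME_SIZE, POP_SIZE, GENERATIONS = 16, 50, 200

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_SIZE)]

def fitness(genome):
    # Stand-in "IQ puzzle": reward genomes close to an arbitrary target pattern.
    target = [((i % 4) - 1.5) / 2 for i in range(GENOME_SIZE)]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome, rate=0.1, scale=0.2):
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    survivors = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 5]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print("best fitness found:", fitness(max(population, key=fitness)))
```

The proposal in the comment would replace both the genome and the fitness evaluation with vastly more expensive brain-model simulations; the loop structure is the only part this sketch is meant to convey.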
This is interesting. I presume then that they believe that the software aspect of the problem is easy. Why do they believe this?
Because any software problem can become easy given enough hardware.
That would have been a pretty naive reply—since we know from public key crypto that it is relatively easy to make really difficult problems that require stupendous quantities of hardware to solve.
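A small illustration of the public-key point: posing a hard problem is cheap, while solving it by brute force is not. The primes below are arbitrary toy values; real public-key moduli are hundreds of digits long, far beyond any brute-force search:

```python
# Easy direction: multiply two primes. Hard direction (for large inputs):
# recover them from the product by searching for a divisor.
p, q = 104729, 1299709              # two smallish primes, purely illustrative
n = p * q                           # a single multiplication

def factor_by_trial_division(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

print(factor_by_trial_division(n))  # already ~10^5 loop steps for these toy inputs
```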
IMO, the biggest reason we have for thinking that the software will be fairly tractable, is that we have an existing working model which we could always just copy—if the worst came to the worst.
Agreed, although it will be very difficult to copy it without understanding it in considerably more detail than we do at present. Copying without any understanding (whole brain scanning and emulation) is possible in theory, but the required engineering capability for that level of scanning technology seems pretty far into the future at the moment.
A poll of mainstream scientists sounds like a poor way to get an estimate of the date of arrival of “human-level” machine minds—since machine intelligence is a complex and difficult field—and so most outsiders will probably be pretty clueless.
Also, 15 years is still a long way off: people may think 5 years out, when they are feeling particularly far sighted. Expecting major behavioral changes from something 15 years down the line seems a bit unreasonable.
I’d actually guess that at this point in time, a significant chunk of the intelligence of say Silicon Valley believes that the default Kurzweil/Moravec view is correct—AGI will arrive around when Moore’s law makes it so.
Of course, neither Kurzweil nor Moravec think any such thing—both estimate that a computer with the same processing power as the human brain will arrive a considerable while before the required software is developed.
The biggest optimist I have come across is Peter Voss. His estimate in 2009 was around 8 years (7:00 in). However, he obviously has something to sell—so maybe we should not pay too much attention to his opinion—due to the signalling effects associated with confidence.
Eliezer addresses point 2 in the comments of the article you linked to in point 2. He’s also previously answered the questions of whether he believes he personally could solve FAI and how far out it is—here, for example.
Thanks for the references, both of which I had seen before.
Concerning Eliezer’s response to Scott Aaronson: I agree that there’s a huge amount of uncertainty about these things and it’s possible that AGI will develop unexpectedly, but don’t see how this points in the direction of AGI being likely to be developed within decades. It seems like one could have said the same thing that Eliezer is saying in 1950 or even 1800. See Holden’s remarks about noncontingency here.
(1) Even though the FAI problem is incredibly difficult, it’s still worth working on because the returns attached to success would be enormous.
(2) Lots of people who have worked on AGI are mediocre.
(3) The field of AI research is not well organized.
Claim (1) might be true. I suspect that both of claims (2) and (3) are true. But by themselves these claims offer essentially no support for the idea that Eliezer is likely to be able to build a Friendly AI.
Edit: Should I turn my three comments starting here into a top level posting? I hesitate to do so in light of how draining I’ve found the process of making top level postings and especially reading and responding to the ensuing comments, but the topic may be sufficiently important to justify the effort.
We haven’t heard Eliezer say how likely he believes it is that he creates a Friendly AI. He has been careful to not to discuss that subject. If he thought his chances of success were 0.5% then I would expect him to make exactly the same actions.
(ETA: With the insertion of ‘relative’ I suspect I would more accurately be considering the position you are presenting.)
Right, so in my present epistemological state I find it extremely unlikely that Eliezer will succeed in building a Friendly AI. I gave an estimate here which proved to be surprisingly controversial.
The main points that inform my thinking here are:
The precedent for people outside of the academic mainstream having mathematical/scientific breakthroughs in recent times is extremely weak. In my own field of pure math I know of only two people without PhD’s in math or related fields who have produced something memorable in the last 70 years or so, namely Kurt Heegner and Martin Demaine. And even Heegner and Demaine are (relatively speaking) quite minor figures. It’s very common for self-taught amateur mathematicians to greatly underestimate the difficulty of substantive original mathematical research. I find it very likely that the same is true in virtually all scientific fields and thus have an extremely skeptical Bayesian prior against any proposition of the type “amateur intellectual X will solve major scientific problem Y.”
From having talked with computer scientists and AI researchers, I have a very strong impression that the consensus is that AGI is way out of reach at present. See for example points #1 and #5 of Scott Aaronson’s The Singularity is Far.
The fact that Eliezer does not appear to have seriously contemplated or addressed the the two points above and their implications diminishes my confidence in his odds of success still further.
That you have this impression greatly diminishes my confidence in your intuitions on the matter. Are you seriously suggesting that Eliezer has not contemplated AI researchers’ opinions about AGI? Or that he hasn’t thought about just how much effort should go into a scientific breakthrough?
Someone please throw a few hundred relevant hyperlinks at this person.
I’m not saying that Eliezer has given my two points no consideration. I’m saying that Eliezer has not given my two points sufficient consideration. By all means, send hyperlinks that you find relevant my way—I would be happy to be proven wrong.
Regarding your first point, I’m pretty sure Eliezer does not expect to solve FAI by himself. Part of the reason for creating LW was to train/recruit potential FAI researchers, and there are also plenty of Ph.D. students among SIAI visiting fellows.
Regarding the second point, do you want nobody to start researching FAI until AGI is within reach?
Right, but the historical precedent for an amateur scientist even being at all involved in a substantial scientific breakthrough over the past 50 years is very weak.
Also, my confidence in Eliezer’s ability to train/recruit potential FAI researchers has been substantially diminished for the reasons that I give in Existential Risk and Public Relations. I personally would be interested in working with Eliezer if he appeared to me to be well grounded. The impressions that I’ve gotten from my private correspondence with Eliezer and from his comments have given me a very strong impression that I would find him too difficult to work with for me to be able to do productive FAI research with him.
No. I think that it would be worthwhile for somebody to do FAI research in line with Vladimir Nesov’s remarks here and here.
But I maintain that the probability of success is very small and that the only justification for doing it is the possibility of enormous returns. If people had established an institute for the solution of Fermat’s Last Theorem in the 1800′s, the chances of anybody there playing a decisive role in the solution of Fermat’s Last Theorem would be very small. I view the situation with FAI as analogous.
Hold on—there are two different definitions of the word “amateur” that could apply here, and they lead to very different conclusions. The definition I think of first, is that an amateur at something is someone who doesn’t get paid folr doing it, as opposed to a professional who makes a living at it. By this definition, amateurs rarely achieve anything, and if they do, they usually stop being amateurs. But Eliezer’s full-time occupation is writing, thinking, and talking about FAI and related topics, so by this definition, he isn’t an amateur (regardless of whether or not you think he’s qualified for that occupation).
The other definition of “amateur scientist” would be “someone without a PhD”. This definition Eliezer does fit, but by this definition, the amateurs have a pretty solid record. And if you narrow it down to computer software, the amateurs have achieved more than the PhDs have!
I feel like you’ve taken the connotations of the first definition and unknowingly and wrongly transferred them to the second definition.
Okay, so, I agree with some of what you say above. I think I should have been more precise.
A claim of the type “Eliezer is likely to build a Friendly AI” requires (at least in part) a supporting claim of the type “Eliezer is in group X where people in group X are likely to build a Friendly AI.” Even if one finds such a group X, this may not be sufficient because Eliezer may belong to some subgroup of X which is disproportionately unlikely to build a Friendly AI. But one at least has to be able to generate such a group X.
At present I see no group X that qualifies.
1.Taking X to be “humans in the developed world” doesn’t work because the average member of X is extremely unlikely to build a Friendly AI.
Taking X to be “people with PhDs a field related to artificial intelligence” doesn’t work because Eliezer doesn’t have a PhD in artificial intelligence.
Taking X to be “programmers” doesn’t work because Eliezer is not a programmer.
Taking X to be “people with very high IQ” is a better candidate, but still doesn’t yield a very high probability estimate because very high IQ is not very strongly correlated with technological achievement.
Taking X to be “bloggers about rationality” doesn’t work because there’s very little evidence that being a blogger about rationality is correlated with skills conducive to building a Friendly AI.
Which suitable group X do you think that Eliezer falls into?
How about “people who have publically declared an intention to try to build an FAI”? That seems like a much more relevant reference class, and it’s tiny. (I’m not sure how tiny, exactly, but it’s certainly smaller than 10^3 people right now) And if someone else makes a breakthrough that suddenly brings AGI within reach, they’ll almost certainly choose to recruit help from that class.
I agree that the class that you mention is a better candidate than the ones that I listed. However:
I find it fairly likely that the class will expand dramatically if there’s a breakthrough that brings AGI in within reach.
Announcing interest in FAI does not entail having the skills necessary to collaborate with the people working on an AGI to make it Friendly.
In addition to these points, there’s a factor which makes Eliezer less qualified than the usual member of the class, namely his public relations difficulties. As he says here “I feel like I’m being held to an absurdly high standard … like I’m being asking to solve PR problems that I never signed up for.” As a matter of reality, PR matters in this world. If there was a breakthrough that prompted a company like IBM to decide to build an AGI, I have difficulty imagining them recruiting Eliezer, the reason being that Eliezer says things that sound strange and is far out of the mainstream. However of course
(i) I could imagine SIAI’s public relations improving substantially in the future—this would be good and would raise the chances of Eliezer being able to work with the researchers who build an AGI.
(ii) There may of course be other factors which make Eliezer more likely than other members of the class to be instrumental to building a Friendly AI.
Despite factors (i) and (ii), putting all of the information that I have together, my estimate of 10^(-9) still feels about right to me. I’d be happy to continue trading information with you with a view toward syncing up our probabilities if you’re so inclined.
I should hope not! If that happens, it means the person who made the breakthrough released it to the public. That would be a huge mistake, because it would greatly increase the chances of an unfriendly AI being built before a friendly one.
That’s only because you said it in public and aren’t willing to appear inconsistent. You still haven’t decomposed this into manageable pieces with numbers. And since we’ve already seen that you wrote the bottom line first, we would have strong reason to not trust those numbers if you did.
Two points:
It seems very likely to me that there’s a string of breakthroughs which will lead to AGI and that it will gradually become clear that to people that they should be thinking about friendliness issues.
Even if there’s a single crucial breakthrough, I find it fairly likely that the person who makes it will not have friendliness concerns in mind.
I believe that the human brain is extremely poorly calibrated for determining probabilities through the explicit process that you describe, and that the human brain’s intuition is often more reliable for such purposes. My attitude is in line with Holden’s comments 14 and 16 on the GiveWell Singularity Summit thread.
In line with the last two paragraphs of one of my earlier comments, I find your quickness to assume that my thinking on these matters stems from motivated cognition disturbing. Of course, I may be exhibiting motivated cognition, but the same is true of you, and your ungrounded confidence in your superiority to me is truly unsettling. As such, I will cease to communicate further with you unless you resolve to stop confidently asserting that I’m exhibiting motivated cognition.
P(SIAI will be successful) may be smaller than 10^-(3^^^^3)!
I don’t think that’s the right way to escape from a Pascal’s mugging. In the case of the SIAI, there isn’t really clear evidence that the organisation is having any positive effect—let alone SAVING THE WORLD. When the benefit could plausibly be small, zero—or indeed negative—one does not need to invoke teeny tiny probabilities to offset it.
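To spell out the expected-value reasoning implicit in the comment above (the notation below is mine, offered purely as a sketch):

```latex
% Sketch only; p and V are my labels, not figures from the thread.
% p = probability that SIAI has a positive effect, V = size of that effect.
\[ \mathrm{E}[\text{impact}] \;=\; p \cdot V \]
% A Pascal's-mugging rescue shrinks p to offset an enormous V; the point above
% is that if V may itself be small, zero, or negative, the expectation can be
% unimpressive without invoking any astronomically tiny p.
```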
Upvoted twice for the “Two points”. Downvoted once for the remainder of the comment.
Well, actually, I’m pretty sure the second point has a serious typo. Maybe I should flip that vote.
You are so concerned about the possibility of failure that you want to slow down research, publication and progress in the field—in order to promote research into safety?
Do you think all progress should be slowed down—or just progress in this area?
The costs of stupidity are a million road deaths a year, and goodness knows how many deaths in hospitals. Intelligence would have to be pretty damaging to outweigh that.
There is an obvious good associated with publication—the bigger the concentration of knowledge about intelligent machines there is in one place, the greater the wealth inequality that is likely to result, and the harder it would be for the rest of society to deal with a dominant organisation. Spreading knowledge helps spread out the power—which reduces the chance of any one group of people becoming badly impoverished. Such altruistic measures may help to prevent a bloody revolution from occurring.
What are we supposed to infer from that? That if you add an amateur scientist to a group of PhDs, that would substantially decrease their chance of making a breakthrough?
SIAI held a 3-day decision theory workshop in March that I attended along with Stuart Armstrong and Gary Drescher as outside guests. I feel pretty safe in saying that none of us found Eliezer particularly difficult to work with. I wonder if perhaps you’re generalizing from one example here.
Do you also think it would be worthwhile for somebody to try to build an organization to do FAI research? If so, who do you think should be doing that, if not Eliezer and his supporters? Or is your position more like cousin_it’s, namely that FAI research should just be done by individuals on their free time for now?
No, certainly not. I just don’t see much evidence that Eliezer is presently adding value to Friendly AI research. I think he could be doing more to reduce existential risk if he were operating under different assumptions.
Of course you could be right here, but the situation is symmetric: the same could be the case for you, Stuart Armstrong and Gary Drescher. Keep in mind that there’s a strong selection effect here—if you’re spending time with Eliezer you’re disproportionately likely to be well suited to working with Eliezer, and people who have difficulty working with Eliezer are disproportionately unlikely to be posting on Less Wrong or meeting with Eliezer.
My intuition is that there are a lot of good potential FAI researchers who would not feel comfortable working with Eliezer given his current disposition, but I may be wrong.
Quite possibly it’s a good thing for Eliezer and his supporters to be building an organization to do FAI research. On the other hand maybe cousin_it’s position is right. I have a fair amount of uncertainty on this point.
The claim that I’m making is quite narrow: that it would be good for the cause of existential risk reduction if Eliezer seriously considered the possibility that he’s greatly overestimated his chances of building a Friendly AI.
I’m not saying that it’s a bad thing to have an organization like SIAI. I’m not saying that Eliezer doesn’t have a valuable role to serve within SIAI. I’m reminded of Robin Hanson’s Against Disclaimers though I don’t feel comfortable with his condescending tone and am not thinking of you in that light :-).
This topic seems important enough that you should try to figure out why your intuition says that. I’d be interested in hearing more details about why you think a lot of good potential FAI researchers would not feel comfortable working with Eliezer. And in what ways do you think he could improve his disposition?
My reading of this is that before you corresponded privately with Eliezer, you
were interested in personally doing FAI research
assigned a high enough probability to Eliezer’s success to consider collaborating with him
And afterward, you
were no longer interested in doing FAI research
massively decreased your estimate of Eliezer’s chance of success
Is this right? If so, I wonder what he could have said that made you change your mind like that. I guess either he privately came off as much less competent than he appeared in the public writings that drew him to your attention in the first place (which seems rather unlikely), or you took his response as some sort of personal affront and responded irrationally.
So, the situation is somewhat different than the one that you describe. Some points of clarification.
•I first came across Overcoming Bias in 2008. Eliezer was recommended to me by a friend who I respect a great deal. My reaction to the first postings of his that I read was strong discomfort with his apparent grandiosity and self-absorption. This discomfort was sufficiently strong for me to lose interest despite my friend’s endorsement.
•I started reading Less Wrong in earnest in the beginning of 2010. This made it clear to me that Eliezer has a lot to offer and that it was unfortunate that I had been pushed away by my initial reaction.
•I never assigned a very high probability to Eliezer making a crucial contribution to an FAI research project. My thinking was that the enormous positive outcome associated with success might be sufficiently great to justify the project despite the small probability.
•I didn’t get much of a chance to correspond privately with Eliezer at all. He responded to a couple of my messages with one-line dismissive responses and then stopped responding to my subsequent messages. Naturally this lowered the probability that I assigned to being able to collaborate with him. This also lowered my confidence in his ability to attract collaborators in general.
•If Eliezer showed strong ability to attract and work well with collaborators (including elite academics who are working on artificial intelligence research) then I would find it several orders of magnitude more likely that he would make a crucial contribution to an FAI research project. For concreteness I’ll throw out the number 10^(-6).
•I feel that the world is very complicated and that randomness plays a very large role. This leads me to assign a very small probability to the proposition that any given individual will play a crucial role in eliminating existential risk.
•I freely acknowledge that I may be influenced by emotional factors. I make an honest effort at being level-headed and sober but, as I mention elsewhere, my experience posting on Less Wrong has been emotionally draining. I find that I become substantially less rational when people assume that my motives are impure (some sort of self-fulfilling prophecy).
You may notice that of my last four posts, the first pair was considerably more impartial than the second pair. (This is reflected in the fact that the first pair was upvoted more than the second pair.) My subjective perception is that I started out thinking quite carefully and became less rational as I read and responded to hostile commentators.
I’d be really interested to know which posts these were, because it would help me to distinguish between the following interpretations:
(1) First impressions really do matter: even though you and I are probably very similar in many respects, we have different opinions of Eliezer simply because in the first posts of his I read, he sounded more like a yoga instructor than a cult leader; whereas perhaps the first thing you read was some post where his high estimation of his abilities relative to the rest of humanity was made explicit, and you didn’t have the experience of his other writings to allow you to “forgive” him for this social transgression.
(2) We have different personalities, which cause us to interpret people’s words differently: you and I read more or less the same kind of material first, but you just interpreted it as “grandiose” whereas I didn’t.
What’s interesting in any case is that I’m not sure that I actually disagree with you all that much about Eliezer having a small chance of success (though I think you quantify it incorrectly with numbers like 10^(-9) or 10^(-6) -- these are way too small). Where we differ seems to be in the implications we draw from this. You appear to believe that Eliezer and SIAI are doing something importantly wrong, that could be fixed by means of a simple change of mindset, and that they shouldn’t be supported until they make this change. By contrast, my interpretation is that this is an extremely difficult problem, that SIAI is basically the first organization that has begun to make a serious attempt to address it, and that they are therefore worthy of being supported so that they can increase their efforts in the directions they are currently pursuing and potentially have a larger impact than they otherwise would.
I’ve been meaning to ask you: given your interest in reducing existential risk, and your concerns about SIAI’s transparency and their general strategy, have you considered applying to the Visiting Fellows program? That would be an excellent way not only to see what it is they do up close, but also to discuss these very issues in person at length with the people involved in SIAI strategy—which, in my experience, they are very interested in doing, even with short-term visitors.
Right, so the first posts that I came across were Eliezer’s Coming of Age posts which I think are unrepresentatively self absorbed. So I think that the right interpretation is the first that you suggest.
Since I made my top level posts, I’ve been corresponding with Carl Shulman who informed me of some good things that SIAI has been doing that have altered my perception of the institution. I think that SIAI may be worthy of funding.
Regardless as to the merits of SIAI’s research and activities, I think that in general it’s valuable to promote norms of Transparency and Accountability. I would certainly be willing to fund SIAI if it were strongly recommended by a highly credible external charity evaluator like GiveWell. Note also a comment which I wrote in response to Jasen.
I would like to talk more about these things—would you like to share email addresses? PM me if so.
At this point I worry that I’ve alienated the SIAI people to such an extent that they might not be happy to have me. But I’d certainly be willing if they’re favorably disposed toward me.
I’ll remark that back in December, after reading Anna Salamon’s posting on the SIAI Visiting Fellows program, I did send Anna Salamon a long email expressing some degree of interest and describing some of my concerns, without receiving a response. I now find it most plausible that she just forgot about it and that I should have tried again, but maybe you can understand from this how I got the impression that becoming an SIAI Visiting Fellow was not a strong option for me.
Done.
As it happens, the same thing happened to me; it turned out that my initial message had been caught in a spam filter. I eventually ended up visiting for two weeks, and highly recommend the experience.
This, along with your other estimate of 10^(-9), implies that your probability for Eliezer being able to eventually attract and work well with collaborators is currently 1/1000. Does that really seem reasonable to you (would you be willing to bet at those odds?), given other evidence besides your private exchange with Eliezer? Such as:
Eliezer already had a close collaborator, namely Marcello
SIAI has successfully attracted many visiting fellows
SIAI has successfully attracted top academics to speak at their Singularity Summit
Eliezer is currently writing a book on rationality, so presumably he isn’t actively trying to recruit collaborators at the moment
Other people’s reports of not finding Eliezer particularly difficult to work with
It seems to me that rationally updating on Eliezer’s private comments couldn’t have resulted in such a low probability. So I think a more likely explanation is that you were offended by the implications of Eliezer’s dismissive attitude towards your comments.
(Although, given Eliezer’s situation, it would probably be a good idea for him to make a greater effort to avoid offending potential supporters, even if he doesn’t consider them to be viable future collaborators.)
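For concreteness, the arithmetic behind that 1/1000 figure can be spelled out as follows (a minimal sketch; it assumes, as seems implied, that success without attracting collaborators is negligible):

```latex
% Notation mine; "collab" abbreviates "attracts and works well with collaborators".
% The 10^{-9} and 10^{-6} figures are the estimates given earlier in the thread.
\begin{align*}
P(\text{success}) &\approx P(\text{success} \mid \text{collab}) \cdot P(\text{collab}) \\
10^{-9} &\approx 10^{-6} \cdot P(\text{collab})
  \quad\Longrightarrow\quad P(\text{collab}) \approx 10^{-3}
\end{align*}
```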
Your responses to me seem pretty level headed and sober. I hope that means you don’t find my comments too hostile.
Thinking it over, my estimate of 10^(-6) was way too high. This isn’t because of a lack of faith in Eliezer’s abilities in particular. I would recur to my above remark that I think that everybody has very small probability of succeeding in efforts to eliminate existential risk. We’re part of a complicated chaotic dynamical system and to a large degree our cumulative impact on the world is unintelligible and unexpected (because of a complicated network of unintended consequences, side effects, side effects of the side effects, etc.).
Glad to hear it :-)
I don’t think there’s any such consensus. Most of those involved know that they don’t know with very much confidence. For a range of estimates, see the bottom of:
http://alife.co.uk/essays/how_long_before_superintelligence/
For what it’s worth, in saying “way out of reach” I didn’t mean “chronologically far away,” I meant “far beyond the capacity of all present researchers.” I think it’s quite possible that AGI is just 50 years away.
I think that in the absence of plausibly relevant and concrete directions for AGI/FAI research, the chance of having any impact on the creation of an FAI through research is diminished by many orders of magnitude.
If there are plausibly relevant and concrete directions for AGI/FAI research then the situation is different, but I haven’t heard examples that I find compelling.
“Just 50 years?” Shane Legg’s explanation of why his mode is at 2025:
http://www.vetta.org/2009/12/tick-tock-tick-tock-bing/
If 15 years is more accurate—then things are a bit different.
Thanks for pointing this out. I don’t have the subject matter knowledge to make an independent assessment of the validity of the remarks in the linked article, but it makes points that I had not seen before.
I’d recur to CarlShulman’s remark about selection bias here. I look forward to seeing the results of the hypothetical Bostrom survey and the SIAI collection of all public predictions.
I agree. There’s still an issue of a lack of concrete directions of research at present but if 15 years is accurate then I agree with Eliezer that we should be in “crunch” mode (amassing resources specifically directed at future FAI research).
At any rate, most rationalists who have seriously considered the topic will agree that there is a large amount of probability mass 15 years into the future: large enough that even if the median estimate till AGI is 2050, we’re still in serious crunch time. The tails are fat in both directions. (This is important because it takes away a lot of the Pascalian flavoring that makes people (justifiably) nervous when reasoning about whether or not to donate to FAI projects: 15% chance of FOOM before 2020 just feels very different to a bounded rationalist than a .5% chance of FOOM before 2020.)
For what it’s worth, Shane Legg is a pretty reasonable fellow who understands that AGI isn’t automatically good, so we can at least rule out that his predictions are tainted by the thoughts of “Yay, technology is good, AGI is close!” that tend to cast doubt on the lack of bias in most AGI researchers’ and futurists’ predictions. He’s familiar with the field and indeed wrote the book on Machine Super Intelligence. I’m more persuaded by Legg’s arguments than most at SIAI are, though. And although this isn’t a claim that is easily backed by evidence, the people at SIAI are really freakin’ good thinkers and are not to be disagreed with lightly.
I recur to my concern about selection effects. If it really is reasonable to place a large amount of probability mass 15 years into the future, why are virtually all mainstream scientists (including the best ones) apparently oblivious to this?
I do think that it’s sufficiently likely that the people in academia have erred that it’s worth my learning more about this topic and spending some time pressing people within academia on this point. But at present I assign a low probability (~5%) to the notion that the mainstream has missed something so striking as a large probability of a superhuman AI within 15 years.
Incidentally, I do think that decisive paradigm-changing events are very likely to occur over the next 200 years and that this warrants focused effort on making sure that society is running as well as possible (as opposed to doing pure scientific research with the justification that it may pay off in 500 years).
A fair response to this requires a post that Less Wrong desperately needs to read: People Are Crazy, the World Is Mad. Unfortunately this requires that I convince Michael Vassar or Tom McCabe to write it. Thus, I am now on a mission to enlist the great power of Thomas McCabe.
(A not-so-fair response: you underestimate the extent to which academia is batshit insane just like nearly every individual in it, you overestimate the extent to which scientists ever look outside of their tiny fields of specialization, you overestimate the extent to which the most rational scientists are willing to put their reputations on the line by even considering much less accepting an idea as seemingly kooky as ‘human-level AI by 2035’, and you underestimate the extent to which the most rational scientists are starting to look at the possibility of AGI in the next 50 years (which amounts to non-trivial probability mass in the next 15). I guess I don’t know who the very best scientists are. (Dawkins and Tooby/Cosmides impress me a lot; Tooby was at the Summit. He signed a book that’s on my table top. :D ) Basically, I think you’re giving academia too much credit. These are all assertions, though; like I said, this response is not a fair one, but this way at least you can watch for a majoritarian bias in your thinking and a contrarian bias in my arguments.)
I look forward to the hypothetical post.
As for your “not-so-fair response”—I seriously doubt that you know enough about academia to have any confidence in this view. I think that first hand experience is crucial to developing a good understanding of the strengths and weaknesses of academia.
(I say this with all due respect—I’ve read and admired some of your top level posts.)
I definitely don’t have the necessary first-hand-experience: I was reporting second-hand the impressions of a few people who I respect but whose insights I’ve yet to verify. Sorry, I should have said that. I deserve some amount of shame for my lack of epistemic hygiene there.
Thanks! I really appreciate it. A big reason for the large amounts of comments I’ve been barfing up lately is a desire to improve my writing ability such that I’ll be able to make more and better posts in the future.
How do you support this? Have you done a poll of mainstream scientists (or better yet—the ‘best’ ones)? I haven’t seen a poll exactly, but when IEEE ran a special issue on the Singularity, the opinions were divided almost 50/50. It’s also important to note that the IEEE editor was against the Singularity hypothesis, if I remember correctly, so there may be some bias there.
And whose opinions should we count exactly? Do we value the opinions of historians, economists, psychologists, chemists, geologists, astronomers, etc etc as much as we value the opinions of neuroscientists, computer scientists, and engineers?
I’d actually guess that at this point in time, a significant chunk of the intelligence of say Silicon Valley believes that the default Kurzweil/Moravec view is correct—AGI will arrive around when Moore’s law makes it so.
200 years? There is wisdom in some skepticism, but that seems excessive. If you hold such a view, you should analyze it with respect to its fundamental support based on a predictive technological roadmap—not a general poll of scientists.
The semiconductor industry predicts its own future pretty accurately, but they don’t invite biologists, philosophers or mathematicians to those meetings. Their roadmap, and Moore’s law in general, is the most relevant for predicting AGI.
I base my own internal estimate on my own knowledge of the relevant fields—partly because this is so interesting and important that one should spend time investigating it.
I honestly suspect that most people who reject the possibility of near-term AGI have some deeper philosophical rejection.
If you are a materialist then intelligence is just another algorithm—something the brain does, and something we can build. It is an engineering problem and subject to the same future planning that we use for other engineering challenges.
I have not done a poll of mainstream scientists. Aside from Shane Legg, the one mainstream scientist who I know of who has written on this subject is Scott Aaronson in his The Singularity Is Far article.
I was not claiming that I have strong grounds for confidence in my impressions of expert views. But it is the case that if there’s a significant probability that we’ll see AGI over the next 15 years, mainstream scientists are apparently oblivious to it. They are not behaving as I would expect them to if they believed that AGI is 15 years off.
Can you give a reference?
This is interesting. I presume then that they believe that the software aspect of the problem is easy. Why do they believe this?
I have sufficiently little subject-matter knowledge that it’s reasonable for me to take the outside view here and listen to people who seem to know what they’re talking about rather than attempting to do a detailed analysis myself.
Yes, from my reading of Shane Legg I think his prediction is a reasonable inside view and close to my own. But keep in mind it is also something of a popular view. Kurzweil’s latest tome was probably not much new news for most of its target demographic (Silicon Valley).
I’ve read Aaronson’s post and his counterview seems to boil down to generalized pessimism, which I don’t find to be especially illuminating. However, he does raise the good point about solving subproblems first. Of course, Kurzweil spends a good portion of TSIN summarizing progress in sub-problems of reverse engineering the brain.
There appears to be a good deal of neuroscience research going on right now, but perhaps not nearly enough serious computational neuroscience and AGI research as we may like, but it is still proceeding. MIT’s lab is no joke.
There is some sort of strange academic stigma though as Legg discusses on his blog—almost like a silent conspiracy against serious academic AGI. Nonetheless, there appears to be no stigma against the precursors, which is where one needs to start anyway.
I do not think we can infer their views on this matter based on their behaviour. Given the general awareness of the meme I suspect a good portion of academics in general have heard of it. That doesn’t mean that anyone will necessarily change their behavior.
I agree this seems really odd, but then I think—how have I changed my behavior? And it dawns on me that this is a much more complex topic.
For the IEEE singularity issue—just google it .. something like “IEEE Singularity special issue”. I’m having slow internet atm.
Because any software problem can become easy given enough hardware.
For example, we have enough neuroscience data to build reasonably good models of the low-level cortical circuits today. We also know the primary function of perhaps 5% of the higher-level pathways. For much of that missing 95% we have abstract theories but are still very much in the dark.
With enough computing power we could skip tricky neuroscience or AGI research and just string together brain-ish networks built on our current cortical circuit models, throw them in a massive VR game-world sim that sets up increasingly difficult IQ puzzles as a fitness function, and use massive evolutionary search to get something intelligent.
The real solution may end up looking something like that, but will probably use much more human intelligence and be less wasteful of our computational intelligence.
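As a toy illustration of the evolutionary-search idea sketched above (everything here is a stand-in of my own: the “network” is just a parameter vector and the fitness function is a dummy score, nothing like the VR puzzle environment described), a bare-bones loop might look like this:

```python
import random

# Toy stand-ins: a "brain-ish network" is just a parameter vector here,
# and the "IQ puzzle" fitness is a dummy score. Both are illustrative only.
GENOME_SIZE = 64
POPULATION = 50
GENERATIONS = 100
MUTATION_RATE = 0.05

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_SIZE)]

def fitness(genome):
    # Placeholder for "performance on increasingly difficult puzzles in a
    # simulated world"; here it just rewards genomes close to a target value.
    target = 0.5
    return -sum((g - target) ** 2 for g in genome)

def mutate(genome):
    # Perturb a small fraction of the parameters.
    return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
            for g in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    cut = random.randrange(1, GENOME_SIZE)
    return a[:cut] + b[cut:]

def evolve():
    population = [random_genome() for _ in range(POPULATION)]
    for _ in range(GENERATIONS):
        # Rank by fitness and keep the top half as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: POPULATION // 2]
        # Refill the population with mutated offspring of random parent pairs.
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POPULATION - len(parents))]
        population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best fitness:", fitness(best))
```

The caveat in the comment above applies here too: blind search of this kind squanders computational resources, which is why any real path would presumably lean much harder on human insight.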
That would have been a pretty naive reply—since we know from public key crypto that it is relatively easy to make really difficult problems that require stupendous quantities of hardware to solve.
Technically true—I should have said “tractable” or “these types of” rather than “any”. That of course is what computational complexity is all about.
IMO, the biggest reason we have for thinking that the software will be fairly tractable is that we have an existing working model which we could always just copy—if the worst came to the worst.
Agreed, although it will be very difficult to copy it without understanding it in considerably more detail than we do at present. Copying without any understanding (whole brain scanning and emulation) is possible in theory, but the required engineering capability for that level of scanning technology seems pretty far into the future at the moment.
A poll of mainstream scientists sounds like a poor way to get an estimate of the date of arrival of “human-level” machine minds—since machine intelligence is a complex and difficult field—and so most outsiders will probably be pretty clueless.
Also, 15 years is still a long way off: people may think 5 years out, when they are feeling particularly far sighted. Expecting major behavioral changes from something 15 years down the line seems a bit unreasonable.
Of course, neither Kurzweil nor Moravec thinks any such thing—both estimate that a computer with the same processing power as the human brain will arrive a considerable while before they think the required software will be developed.
The biggest optimist I have come across is Peter Voss. His estimate in 2009 was around 8 years − 7:00 in. However, he obviously has something to sell—so maybe we should not pay too much attention to his opinion—due to the signalling effects associated with confidence.
Optimist or pessimist?
In his own words: Increased Intelligence, Improved Life.
Eliezer addresses your second point in the comments of the article you linked to there. He’s also previously answered the questions of whether he believes he personally could solve FAI and how far out it is—here, for example.
Thanks for the references, both of which I had seen before.
Concerning Eliezer’s response to Scott Aaronson: I agree that there’s a huge amount of uncertainty about these things and it’s possible that AGI will develop unexpectedly, but don’t see how this points in the direction of AGI being likely to be developed within decades. It seems like one could have said the same thing that Eliezer is saying in 1950 or even 1800. See Holden’s remarks about noncontingency here.
As for A Premature Word on AI, Eliezer seems to be saying that:
(1) Even though the FAI problem is incredibly difficult, it’s still worth working on because the returns attached to success would be enormous.
(2) Lots of people who have worked on AGI are mediocre.
(3) The field of AI research is not well organized.
Claim (1) might be true. I suspect that both of claims (2) and (3) are true. But by themselves these claims offer essentially no support for the idea that Eliezer is likely to be able to build a Friendly AI.
Edit: Should I turn my three comments starting here into a top level posting? I hesitate to do so in light of how draining I’ve found the process of making top level postings and especially reading and responding to the ensuing comments, but the topic may be sufficiently important to justify the effort.