Dale Carrico on Yudkowsky and Bostrom:
http://amormundi.blogspot.com/2014/10/fluffing-yud.html
This seems to be an example of negative commentary being primarily negative, rather than primarily commentary. One specific concrete claim that stood out to me without needing to unpack:
I seem to recall that Yudkowsky first claimed he didn’t need to get a degree [...] because the singularity was so near it would be a waste of time.
is, I believe, simply false.
I agree with your overall characterization of the post, but on the specific concrete claim: one of the commenters there cites this article as saying this:
Yudkowsky’s reason for shunning formal education is that he believes the danger of unfriendly AI to be so near—as early as tomorrow—that there was no time for a traditional adolescence. “If you take the Singularity seriously, you tend to live out your life on a shorter time scale,” he said.
which I think is close enough to DC’s claim. I have no way of telling how accurately that article represents EY’s position or whether the quotation itself is accurate. Here’s EY characterizing another statement in that article as a lie (though for what it’s worth I think it can be interpreted consistently with what EY says is the truth—but of course that doesn’t mean it wasn’t intended to mislead).
Okay, I’ve updated somewhat in the direction that Eliezer actually said that at one point. (I was previously assuming that it was a mash-up of other things he’s said, but those things were all Sequences-or-later and this article is pre-Sequences.)
With the other statement,
When one researcher booted up a program he hoped would be AI-like, Yudkowsky said he believed there was a 5 percent chance the Singularity was about to happen and human existence would be forever changed.
It seems important to note that Eliezer was talking about a program unlike any program that had ever been turned on, when we knew less than we did at the time of writing. Without that detail, it can be interpreted as not-completely-literally-false, but I wouldn’t call it truthful. (The fact that Eliezer was not able to say it at the time seems less important, but leaving it out obscures the timeline.) When searching for the source of the “if you take the Singularity seriously” line, I found another comment by Eliezer on the subject: http://sl4.org/archive/0104/1163.html .
I say this with the benefit of hindsight, but just remember that not only Eurisko (the 5% risk program) but also its successors like Cyc, which benefit from vastly greater computing power and decades of architectural improvement, fall far, far short of being FOOMable AIs.
So if somebody had estimated a 5% risk for Eurisko, and then we saw what actually happened, I would update toward them being substantially too paranoid.
I don’t think “it didn’t even come close” is sufficient to say that 5% was too paranoid.
I know the principles behind an atomic bomb, but I don’t know how much U-238 you need for critical mass. If someone takes two fist-sized lumps of U-238 and proposes to smash them together, I’d give… probably ~50% chance of it causing a massive explosion. But I’d also give maybe about 10% probability that you need like ten times as much U-238 as that. If that happens to be the case, I still don’t think that 50% is too paranoid, given my current state of knowledge.
There are people who do know how much U-238 you need, and their probability estimate will presumably be close to 0 or close to 1. And today, we can presumably work through the math and point out what the limits of Eurisko are that stop it from FOOMing. But if we hadn’t done the math at the time, 5% isn’t obviously unreasonable.
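(A minimal sketch of the kind of mixture reasoning implied here, with entirely made-up weights and hypothesis names rather than anything stated in the thread: treat the overall estimate as an average over hypotheses about the true critical mass.)

    # Hypotheses about how much material is actually needed, with illustrative prior weights.
    hypotheses = {
        "two fist-sized lumps are enough": 0.50,
        "you need a few times that much": 0.40,
        "you need ten or more times that much": 0.10,
    }
    # Chance of a massive explosion if each hypothesis were true (also illustrative).
    p_explosion_given = {
        "two fist-sized lumps are enough": 0.95,
        "you need a few times that much": 0.05,
        "you need ten or more times that much": 0.0,
    }
    # Overall estimate is the weighted average across hypotheses.
    p_explosion = sum(w * p_explosion_given[h] for h, w in hypotheses.items())
    print(round(p_explosion, 3))  # ~0.495: about 50% beforehand, even if the true answer turns out to be ~0

The point of the sketch is only that a roughly 50% estimate can be reasonable under uncertainty about the threshold, even when the answer, once known, pushes the probability to nearly 0 or nearly 1.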
Tangential, but: U-238 is fissionable but not fissile; no amount of U-238 will give you a massive explosion if you bang it together. It’s U-235 that’s the fissile isotope.
(Even banging that together by hand won’t give you a massive explosion, though it will give you a moderately large explosion and an extremely lethal dose of radiation: the jargon is “predetonation” or “fizzle”. You need to bring a critical mass into existence hard and fast, e.g. by imploding a hollow sphere with explosive lenses, or a partial reaction will blow the pieces apart before criticality really has a chance to get going.)
I don’t think I can prove that I’m not coming at it from a hindsight-biased perspective.
But I think I can say confidently that today’s technology is at least a qualitative leap away from Strong AI, let alone FOOM AI. To make that more clear: I think no currently existing academic, industrial, or personal project will achieve Strong AI or FOOM. Concretely:
In the next 2 years the chance of Strong AI and/or FOOM AI being developed is no more than 0.2%.
So that’s a 2-year period over which I estimate the chance of Strong AI or FOOM as substantially less than what EY is saying we should have estimated Eurisko’s risk of FOOM to be, only in retrospect.
Did you try to calibrate yourself via the credence game or a similar method?
I kept in mind that highly confident predictions (98%+) are often miscalibrated, and I still make that assertion.
I also thought about it in terms of placing a bet.
I am not just throwing around numbers.
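(For concreteness, and as plain arithmetic rather than anything stated in the thread: a 0.2% credence is the same as being willing to lay roughly 499:1 odds against Strong AI or FOOM appearing within that two-year window, since 0.998 / 0.002 = 499.)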
I can’t prove that this isn’t all hindsight bias, but to make a forward-looking prediction (I also said this in another post): I believe that in the next 2 years the chance of Strong AI and/or FOOM AI being developed is no more than 0.2%.
Yes, this is a super-high-confidence prediction. But I have pretty deep knowledge of computer science and AI research, and I can very confidently say that current technology is a qualitative leap away from Strong AI.
I kept in mind that highly confident predictions (98%+) are often miscalibrated and I still make that assertion.
Keeping something like that in mind does relatively little. Humans don’t manage to correct for hindsight bias just by keeping things in mind.
Calibration actually needs feedback. You need to see how you mess up to get a feel for what a 95% prediction feels like.
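(A minimal sketch of what that feedback loop could look like, using a hypothetical log of predictions rather than any real data: record each stated confidence and the outcome, then compare stated confidence with observed frequency per bucket.)

    from collections import defaultdict

    # Hypothetical log of (stated confidence that the event happens, whether it happened).
    predictions = [
        (0.95, True), (0.95, True), (0.95, False), (0.95, True),
        (0.60, True), (0.60, False), (0.60, False),
        (0.05, False), (0.05, False), (0.05, True),
    ]

    # Group by stated confidence and compare it with the frequency actually observed.
    buckets = defaultdict(list)
    for confidence, happened in predictions:
        buckets[confidence].append(happened)
    for confidence, outcomes in sorted(buckets.items()):
        observed = sum(outcomes) / len(outcomes)
        print(f"stated {confidence:.0%}, observed {observed:.0%} over {len(outcomes)} predictions")

The gap between the stated and observed columns, accumulated over many logged predictions, is the feedback that actually trains a sense of what 95% should feel like.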
95% feels like: I’m pretty certain that won’t happen but I’m not fully certain.
But I have pretty deep knowledge of computer science and AI research, and I can very confidently say that current technology is a qualitative leap away from Strong AI.
The whole point for the 5% prediction was that going from a state where no program is self-modifying to a world with self-modifying AI is a qualitative leap.
But that estimate of FOOM risk still disregards the enormous computational power constraints on this software, and the fact that its self-modification heuristics were quite limited.
Basically, we know now that AI researchers in the ’80s and earlier were TREMENDOUSLY overoptimistic. I also think that the facts available at the time warranted less optimism, and that’s not just hindsight.
we know now that AI researchers in the ’80s and earlier were TREMENDOUSLY overoptimistic
In hindsight they were optimistic, but given the knowledge to which they had access at the time, it’s harder to make the same arguments. How would you argue that a researcher at that time should have known how much the computational power constraints of that day mattered?
But I’d argue that their optimism stemmed from irrational assumptions. I’m not even saying that if I were transported back in time I would fall prey to the same irrational assumptions, but I would say that they had naive views of problems like visual object recognition or language comprehension that were completely unmotivated.
A comparable error today would be to assume that Strong AI is right around the corner as soon as we crack some current set of well-defined research problems, and that there could not be any more problems that are not yet understood.
A comparable error today would be to assume that Strong AI is right around the corner as soon as we crack some current set of well-defined research problems
I don’t see at all how the step from non-self-modifying AI to self-modifying AI is in the same reference class as solving most well-defined current research problems.
I think we’re arguing over whether I’m speaking from hindsight bias or whether the researchers in the past were irrationally overoptimistic (and whether EY’s assessment of how optimistic they should have been without hindsight is overoptimistic).
Let’s admit both are possible.
What could I show you that would convince you of the latter?
What could I show you that would convince you of the latter?
A valid heuristic that arrives at the conclusion you want to convince me of. In this case, your claim that moving from non-self-modifying AI to self-modifying AI is no qualitative leap, in the same way that solving most current well-defined AI problems is no qualitative leap, suggests that you aren’t reasoning clearly.
If you get the easy things wrong, then the harder things are also more likely to be wrong.
Furthermore, there’s a strong prior that you are wrong about estimating probabilities if you aren’t calibrated. It’s been shown that naive attempts to correct against hindsight bias just don’t work.
Until you have at least trained calibration a bit, you aren’t in a good position to judge whether other people are off.
The author admits the quote is only anecdotal, but it does seem plausible to me. EY has said stuff more dumbfounding than that.
More generally, it’s just a snarky blog post. Nothing wrong with that; posts are allowed to be snarky. And there’s plenty to criticize about uncritical belief in a singularity or the work MIRI does (which this post isn’t doing; it’s just reminding us of the existence of those criticisms via snark).