A note I want to add, if this fact-check ends up being valid:
It appears that a significant fraction of Eliezer’s argument relies on AlphaGo being surprising. But then his evidence for it being surprising seems to rest substantially on something that was misremembered. That seems important if true.
I would point to, for example, this quote, “I mean the superforecasters did already suck once in my observation, which was AlphaGo, but I did not bet against them there, I bet with them and then updated afterwards.” It seems like the lesson here, if indeed superforecasters got AlphaGo right and Eliezer got it wrong, is that we should update a little bit towards superforecasting, and against Eliezer.
Adding my recollection of that period: some people made the relevant updates when DeepMind’s system beat the European Champion Fan Hui (in October 2015). My hazy recollection is that beating Fan Hui started some people going “Oh huh, I think this is going to happen” and then when AlphaGo beat Lee Sedol (in March 2016) everyone said “Now it is happening”.
It seems from this Metaculus question that people indeed were surprised by the announcement of the match between Fan Hui and AlphaGo (which was disclosed in January, despite the match happening months earlier, according to Wikipedia).
It seems hard to interpret this as AlphaGo being inherently surprising, though, because the question referred only to 2016. It seems reasonable to think that even if a breakthrough is on the horizon, it won’t happen imminently with high probability.
Perhaps a better source of evidence of AlphaGo’s surprisingness comes from Nick Bostrom’s 2014 book Superintelligence in which he says, “Go-playing amateur programs have been improving at a rate of about 1 level dan/year in recent years. If this rate of improvement continues, they might beat the human world champion in about a decade.” (Chapter 1).
This vindicates AlphaGo being an impressive discontinuity from pre-2015 progress. Though one can reasonably dispute whether superforecasters thought that the milestone was still far away after being told that Google and Facebook made big investments into it (as was the case in late 2015).
Wow, thanks for pulling that up. I’ve gotta say, having records of people’s predictions is pretty sweet. Similarly, solid find on the Bostrom quote.
Do you think that might be the 20% number that Eliezer is remembering? Eliezer, interested in whether you have a recollection of this or not. [Added: It seems from a comment upthread that EY was talking about superforecasters in Feb 2016, which is after Fan Hui.]
There was still a big update from ~20%->90%, which is what is relevant for Eliezer’s argument, even if he misremembered the timing. The fact that the update was from the Fan Hui match rather than the Lee Sedol match doesn’t seem that important to the argument [for superforecasters being caught flatfooted by discontinuous AI-Go progress].