“superforecasters were claiming that AlphaGo had a 20% chance of beating Lee Se-dol and I didn’t disagree with that at the time”
Good Judgment Open had the probability at 65% on March 8th 2016, with a generally stable forecast since early February (Wikipedia says that the first match was on March 9th).
Metaculus had the probability at 64% with similar stability over time. Of course, there might be another source that Eliezer is referring to, but for now I think it’s right to flag this statement as false.
A note I want to add, if this fact-check ends up being valid:
It appears that a significant fraction of Eliezer’s argument relies on AlphaGo being surprising. But then his evidence for it being surprising seems to rest substantially on something that was misremembered. That seems important if true.
I would point to, for example, this quote, “I mean the superforecasters did already suck once in my observation, which was AlphaGo, but I did not bet against them there, I bet with them and then updated afterwards.” It seems like the lesson here, if indeed superforecasters got AlphaGo right and Eliezer got it wrong, is that we should update a little bit towards superforecasting, and against Eliezer.
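To put rough numbers on “got it right vs. got it wrong”: here is a quick scoring sketch in Python. The ~65% and 64% figures are the GJO and Metaculus numbers above, the 20% entry is the figure from Eliezer’s recollection (included only for comparison), and the choice of Brier and log scores is mine, not something anyone in this exchange used.

```python
import math

def brier(p, outcome):
    """Brier score for a binary forecast: (p - outcome)**2. Lower is better."""
    return (p - outcome) ** 2

def surprisal_bits(p, outcome):
    """Negative log score in bits: how surprised the forecast was. Lower is better."""
    p_assigned = p if outcome == 1 else 1 - p
    return -math.log2(p_assigned)

outcome = 1  # AlphaGo won the match against Lee Sedol, 4-1

forecasts = {
    "GJO community, Mar 8 2016": 0.65,
    "Metaculus, same period": 0.64,
    "recalled 20% forecast": 0.20,  # the figure from Eliezer's recollection
}

for name, p in forecasts.items():
    print(f"{name}: Brier {brier(p, outcome):.2f}, surprisal {surprisal_bits(p, outcome):.2f} bits")
```

By either rule the 20% forecast takes a much larger penalty than the mid-60s community forecasts, which is roughly the size of the update being argued over.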
Adding my recollection of that period: some people made the relevant updates when DeepMind’s system beat the European Champion Fan Hui (in October 2015). My hazy recollection is that beating Fan Hui started some people going “Oh huh, I think this is going to happen” and then when AlphaGo beat Lee Sedol (in March 2016) everyone said “Now it is happening”.
It seems from this Metaculus question that people indeed were surprised by the announcement of the match between Fan Hui and AlphaGo (which was only disclosed in January 2016, even though the match itself took place months earlier, according to Wikipedia).
It seems hard to interpret this as AlphaGo being inherently surprising, though, because that question asked only about 2016. Even if a breakthrough is on the horizon, it seems reasonable to assign a low probability to its arriving that imminently.
Perhaps a better source of evidence of AlphaGo’s surprisingness comes from Nick Bostrom’s 2014 book Superintelligence in which he says, “Go-playing amateur programs have been improving at a rate of about 1 level dan/year in recent years. If this rate of improvement continues, they might beat the human world champion in about a decade.” (Chapter 1).
This supports the view that AlphaGo was an impressive discontinuity from pre-2015 progress. Though one can reasonably dispute whether superforecasters still thought the milestone was far away after learning that Google and Facebook had made big investments in it (as was the case in late 2015).
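To make the extrapolation in that quote concrete, here is a toy calculation. The 1 dan/year rate and the “about a decade” horizon come from the quote; the ten-level gap is a hypothetical assumption I am using to reproduce that decade, not a number Bostrom gives.

```python
# Toy version of the extrapolation in the Bostrom quote, not his actual calculation.
BASELINE_YEAR = 2014        # publication year of Superintelligence
GAP_IN_DAN_LEVELS = 10      # assumed remaining distance to world-champion strength
RATE_DAN_PER_YEAR = 1.0     # "about 1 dan/year" from the quote

extrapolated_year = BASELINE_YEAR + GAP_IN_DAN_LEVELS / RATE_DAN_PER_YEAR
actual_year = 2016          # AlphaGo defeats Lee Sedol

print(f"trend says ~{extrapolated_year:.0f}; it actually happened in {actual_year}, "
      f"about {extrapolated_year - actual_year:.0f} years ahead of trend")
```

On those assumptions, the steady-progress trend points at roughly 2024, so a 2016 win lands about eight years early, which is the sense in which it looks discontinuous.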
Wow thanks for pulling that up. I’ve gotta say, having records of people’s predictions is pretty sweet. Similarly, solid find on the Bostrom quote.
Do you think that might be the 20% number that Eliezer is remembering? Eliezer, interested in whether you have a recollection of this or not. [Added: It seems from a comment upthread that EY was talking about superforecasters in Feb 2016, which is after Fan Hui.]
There was still a big update from ~20% to ~90%, which is what is relevant for Eliezer’s argument, even if he misremembered the timing. That the update came from the Fan Hui match rather than the Lee Sedol match doesn’t seem very important to the argument [that superforecasters were caught flat-footed by discontinuous AI Go progress].
My memory of the past is not great in general, but considering that I bet sums of my own money and advised others to do so, I am surprised that my memory here would be that bad, if it was.
Neither GJO nor Metaculus is restricted to only past superforecasters, as I understand it; and my recollection is that superforecasters in particular, not all participants at GJO or Metaculus, were saying in the range of 20%. Here’s an example of one such, which I have a potentially false memory of having maybe read at the time: https://www.gjopen.com/comments/118530
Thanks for clarifying. It makes sense that you may have been referring to a specific subset of forecasters. I do think that some forecasters tend to be much more reliable than others (and maybe there was/is a way to restrict to “superforecasters” in the UI).
I will add the following piece of evidence, which I don’t think counts much for or against your memory, but which still seems relevant. Metaculus shows a histogram of predictions. On the relevant question, a relatively high fraction of people put a 20% chance, but it also looks like over 80% of forecasters put higher credences.
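For intuition on how both observations can hold at once, here is a toy aggregation with made-up bucket counts (not the actual Metaculus histogram, and the real Metaculus community prediction is a weighted aggregate rather than a plain median): a noticeable spike at 20% can still leave the median in the mid-60s.

```python
import statistics

# Hypothetical bucket counts -- NOT the actual Metaculus data -- chosen only to
# illustrate that a visible spike at 20% is compatible with a ~65% aggregate.
histogram = {0.10: 4, 0.20: 15, 0.40: 8, 0.60: 23, 0.70: 22, 0.80: 18, 0.90: 10}

predictions = [p for p, count in histogram.items() for _ in range(count)]
low_fraction = sum(c for p, c in histogram.items() if p <= 0.20) / len(predictions)

print(f"median prediction: {statistics.median(predictions):.0%}")  # 65%
print(f"fraction at or below 20%: {low_fraction:.0%}")             # 19%
```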