I had already seen all of those quotes/links, as well as the ones Rob Bensinger posts in the sibling comment and this tweet from Eliezer. I asked my question because those public quotes don’t sound like the private information I referred to in my question, and I wanted insight into the discrepancy.
Okay. I was responding to “Is there any truth to these claims?”, which sounded as if it would be a big shock to discover that MIRI/CFAR staff were weighing short timelines heavily in their actions, when they’d actually stated as much out loud in many places.
While I agree that I’m confused about MIRI/CFAR’s timelines and think that info-cascades around this have likely occurred, I want to mention that the thing you linked to is pretty hyperbolic.
“To the best of my understanding, part of why the MIRI leadership (Nate Soares, Eliezer Yudkowsky, and Anna Salamon) have been delusionally spewing nonsense about the destruction of the world within a decade is because they’ve been misled by Dario Amodei, an untrustworthy, blatant status-seeker recently employed at Google Brain. I am unaware of the existence of even a single concrete, object-level reason to believe these claims; I, and many others, suspect that Dario is intentionally embellishing the facts because he revels in attention.”
I want to say that I think Dario is not obviously untrustworthy. I think well of him for being an early EA who put in the work to write up his reasoning about donations (see his extensive writeup on the GiveWell blog from 2009), which I always take as a good sign about someone’s soul. The quote also says there’s no reason or argument to believe in short timelines, but the analyses above in Eliezer’s posts on AlphaGo Zero and the Fire Alarm provide plenty of reasons for thinking AI could come within a decade. And don’t forget that Shane Legg, one of the cofounders of DeepMind, has been consistently predicting AGI with 50% probability by 2028 (e.g. he said it here in 2011).
Just noting that since then, half the time to 2028 has elapsed. If he’s still giving 50%, that’s kind of surprising.
Why is that surprising? Doesn’t it just mean that the pace of development in the last decade has been approximately equal to the average over Shane_{2011}’s distribution of development speeds?
I don’t think it’s that simple. The uncertainty isn’t just about pace of development but about how much development needs to be done.
But even if it does mean that, would that not be surprising? Perhaps not if he’d originally given a narrow confidence interval, but his estimate for 2018 was only 10%. For us to be hitting the average precisely enough not to move the 50% estimate much… I haven’t done any arithmetic here, but I think that would be surprising, yeah.
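To gesture at that arithmetic: a minimal sketch in Python, assuming the 2011 estimates were 10% by 2018 and 50% by 2028, and treating the mere passage of time (no AGI by ~2019) as the only evidence.

```python
# Shane_2011's stated estimates (as reported above):
p_by_2018 = 0.10  # P(AGI by 2018), given in 2011
p_by_2028 = 0.50  # P(AGI by 2028), given in 2011

# By ~2019 AGI has not arrived. Approximating P(AGI by 2019) with the
# 2018 figure and conditioning on "not yet":
#   P(by 2028 | not by 2019) = (P(by 2028) - P(by 2019)) / (1 - P(by 2019))
p_updated = (p_by_2028 - p_by_2018) / (1 - p_by_2018)
print(f"P(AGI by 2028 | no AGI by 2019) = {p_updated:.1%}")  # 44.4%
```

Under these assumptions, merely reaching 2019 without AGI should pull the 2028 number down to roughly 44%, so still quoting 50% implies progress has looked somewhat faster, or the remaining work somewhat smaller, than Shane_{2011} expected.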
And my sense is that the additional complexity makes it more surprising, not less.
Yes, I agree that the space of things to be uncertain about is multidimensional. We project the uncertainty onto a one-dimensional space parameterized by “probability of <event> by <time>”.
It would be surprising for a sophisticated person to show a market of 49 @ 51 on this event. (Unpacking the jargon: showing this market means being willing to buy at 49, or sell at 51, a contract which is worth 100 if the hypothesis is true and 0 if it is false.)
(It’s somewhat similar to saying that your 2-sigma confidence interval around the “true probability” of the event is 49 to 51. The market language can be interpreted with just decision theory, while the confidence-interval idea also requires some notion of statistics.)
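As a minimal sketch of the decision-theory reading (the function name and the numbers are illustrative, not from the thread): a trader with subjective probability p can show a market of bid @ ask only if both sides of the quote have non-negative expected value for them.

```python
def can_show_market(p: float, bid: float, ask: float) -> bool:
    """True iff a trader with subjective probability p (of the event)
    has non-negative expected value both buying at `bid` and selling
    at `ask` a contract paying 100 if the event occurs, else 0."""
    ev_buy = p * 100 - bid   # pay `bid` now, receive 100 with prob. p
    ev_sell = ask - p * 100  # receive `ask` now, owe 100 with prob. p
    return ev_buy >= 0 and ev_sell >= 0

print(can_show_market(0.50, 49, 51))  # True: 49 @ 51 pins p within a point of 0.50
print(can_show_market(0.45, 49, 51))  # False: buying at 49 loses in expectation
print(can_show_market(0.50, 40, 60))  # True: a wide market tolerates
                                      # much more uncertainty about p
```

The width of the quote is doing the work here: a tight 49 @ 51 commits you to p being within a point of 0.50, while 40 @ 60 leaves twenty points of room.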
My interpretation of the second-hand evidence about Shane Legg’s opinion suggests that Shane would quote a market more like 40 @ 60. (The only thing I know about Shane is that he apparently summarized his belief as 50% a number of years ago and hasn’t publicly changed his opinion since.)
Perhaps I’m misinterpreting you, but I feel like this was intended as disagreement? If so, I’d appreciate clarification. It seems basically correct to me, and consistent with what I said previously. I still think that: if, in 2011, you gave 10% probability by 2018 and 50% by 2028; and if, in 2019, you still give 50% by 2028 (as an explicit estimate, i.e. not merely by having failed to update an old one); then this is surprising, even acknowledging that 50% is probably not very precise in either case.
I realised after writing that I didn’t give a quote to show that he still believed it. I have the recollection that he still says 2028 (I think someone more connected to AI/ML probably told me), but I can’t think of anywhere to quote him saying it.