I think Ajeya is inferring this from Eliezer’s 2017 bet with Bryan Caplan. The bet was jokey and therefore (IMO) doesn’t deserve much weight, though Eliezer comments that it’s maybe not totally unrelated to timelines he’d reflectively endorse:
[T]he generator of this bet does not necessarily represent a strong epistemic stance on my part, which seems important to emphasize. But I suppose one might draw conclusions from the fact that, when I was humorously imagining what sort of benefit I could get from exploiting this amazing phenomenon, my System 1 thought that having the world not end before 2030 seemed like the most I could reasonably ask.
In general, my (maybe-partly-mistaken) Eliezer-model...
thinks he knows very little about timelines (per the qualitative reasoning in There’s No Fire Alarm For AGI and in Nate’s recent post—though not necessarily endorsing Nate’s quantitative probabilities);
and is wary of trying to turn ‘I don’t know’ into a solid, stable number for this kind of question (cf. When (Not) To Use Probabilities);
but recognizes that his behavior at any given time, insofar as it is coherent, must reflect some implicit probabilities. Quoting Eliezer back in 2016:
[… T]imelines are the hardest part of AGI issues to forecast, by which I mean that if you ask me for a specific year, I throw up my hands and say “Not only do I not know, I make the much stronger statement that nobody else has good knowledge either.” Fermi said that positive-net-energy from nuclear power wouldn’t be possible for 50 years, two years before he oversaw the construction of the first pile of uranium bricks to go critical. The way these things work is that they look fifty years off to the slightly skeptical, and ten years later, they still look fifty years off, and then suddenly there’s a breakthrough and they look five years off, at which point they’re actually 2 to 20 years off.
If you hold a gun to my head and say “Infer your probability distribution from your own actions, you self-proclaimed Bayesian” then I think I seem to be planning for a time horizon between 8 and 40 years, but some of that because there’s very little I think I can do in less than 8 years, and, you know, if it takes longer than 40 years there’ll probably be some replanning to do anyway over that time period.
And then how *long* takeoff takes past that point is a separate issue, one that doesn’t correlate all that much to how long it took to start takeoff. [...]
Furthermore, 2⁄3 doom is straightforwardly the wrong thing to infer from the 1:1 betting odds, even taking those at face value and even before taking interest rates into account; Bryan gave me $100, which gets returned as $200 later.
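A quick check of the arithmetic, using only the bet terms as stated above (Bryan pays $100 up front and is owed $200 back in 2030 if the world still exists, so Eliezer nets +$100 if the world ends and −$100 if it doesn't): indifference at those stakes requires

$$100 \cdot P(\text{doom}) = 100 \cdot \big(1 - P(\text{doom})\big) \;\Longrightarrow\; P(\text{doom}) = \tfrac{1}{2},$$

whereas misreading the payout structure as Eliezer risking $200 against Bryan's $100 (2:1 odds) would instead give $100 \cdot P = 200 \cdot (1 - P) \Rightarrow P = \tfrac{2}{3}$.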
(I do consider this a noteworthy example of ‘People seem systematically to make the mistake in the direction that interprets Eliezer’s stuff as more weird and extreme’, because it’s a clear arithmetical error and because I saw a recorded transcript of it apparently escaping the notice of several people I considered usually epistemically strong.)
(Though it’s also easier than people expect to just not notice things; I didn’t realize at the time that Ajeya was talking about a misinterpretation of the implied odds from the Caplan bet, and thought she was just guessing my own odds at 2⁄3, and I didn’t want to argue about that because I don’t think it’s valuable to the world, or maybe even to myself, to go about arguing those exact numbers.)
Yes, Rob is right that the inference came from the bet, and Eliezer is right that the bet was actually at 1:1 odds, but due to the somewhat unusual bet format I misread it as 2:1 odds.
Maybe I’m wrong about her deriving this from the Caplan bet? Ajeya hasn’t actually confirmed that; it was just an inference I drew. I’ll poke her to double-check.
I think the bet is a bad idea if you think in terms of Many Worlds. Say 55% of all worlds end by 2030. Then, even assuming that value-of-$-in-2017 = value-of-$-in-2030, Eliezer benefits from the bet in expectation. But the epistemic result is that Bryan gets prestige points in the 45% of worlds that survive, while Eliezer gets prestige points in 0% of worlds, since in the worlds where he would have won, no one is left to award them.
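Concretely, under that assumption (55% of worlds end, and no time-discounting of money), Eliezer's expected financial payoff is

$$\$100 - 0.45 \cdot \$200 = +\$10,$$

so the bet is only barely favorable to him financially, while the prestige payoff is collected by Bryan in 45% of worlds and by Eliezer in none.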
The other problem with the bet is that, if we adjust for inflation and investment returns, the bet is positive EV for Eliezer even given P(world-ends-by-2030) << 1⁄2.
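A rough sketch of that adjustment, assuming (hypothetically) that Eliezer invests the $100 from 2017 to 2030 at an annual return $r$: he then holds $100(1+r)^{13}$ in 2030 and owes $200 only in surviving worlds, so the bet is non-negative EV whenever

$$100(1+r)^{13} \ge (1 - P)\cdot 200 \quad\Longleftrightarrow\quad P \ge 1 - \tfrac{(1+r)^{13}}{2}.$$

At $r \approx 5.5\%$ the right-hand side reaches zero (the bet is favorable at any doom probability), and even at a modest $r = 3\%$ the breakeven is only $P \approx 0.27$, well below 1⁄2.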