Thanks for these detailed comments! I’ll aim to respond to some of the meat of your post within a few days at the latest, but real quick regarding the top portion:
I find the decision to brand the forecast as “AI 2027” very odd. The authors do not in fact believe this; they explicitly give 2028, 2030, or 2033 for their median dates for a superhuman coder.
The point of this project was presumably to warn about a possible outcome; by the authors’ own beliefs, their warning will be falsified immediately before it is needed.
Adding some more context: the modal superhuman coder year for each of the timelines forecast authors is roughly 2027. The FutureSearch forecasters who have a 2033 median aren’t authors on the scenario itself (but neither is Nikola, who has the 2028 median). Of the AI 2027 authors, all have a modal year of roughly 2027 and all give at least ~20% probability to getting it by 2027. Daniel, the lead author, has a median of early 2028.
IMO it seems reasonable to portray 2027 as the arrival year of superhuman coders, given the above. It’s not clear whether the median or the modal year is the better choice here, conditional on there being substantial probability mass by the modal year (i.e. each of us has >=20% by 2027, and Daniel has nearly 50%).
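For concreteness, here’s a toy sketch (with made-up numbers, not anyone’s actual forecast) of how a right-skewed timelines distribution can have a mode of 2027 but a median of 2028, while still putting substantial probability mass by the modal year:

```python
# Toy right-skewed timeline distribution (illustrative numbers only,
# not anyone's actual forecast): P(superhuman coder arrives in year y).
probs = {2027: 0.30, 2028: 0.20, 2029: 0.12, 2030: 0.10,
         2031: 0.08, 2032: 0.07, 2033: 0.13}  # sums to 1.0

mode = max(probs, key=probs.get)  # most likely single year: 2027

# Median: first year at which cumulative probability reaches 50%.
cum = 0.0
median = None
for year in sorted(probs):
    cum += probs[year]
    if cum >= 0.5:
        median = year
        break

print(f"mode={mode}, median={median}")  # mode=2027, median=2028
```

With numbers like these, naming the scenario after the modal year (2027) and naming it after the median year (2028) are both defensible; the long right tail is what pulls the median later than the mode.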
To be transparent though, we originally had it at 2027 because that was Daniel’s median year when we started the project. We decided against changing it when he lengthened his median because (a) it would have been a bunch of work and we’d already spent over a year on the project and (b) as I said above, it seemed roughly as justified as 2028 anyway from an epistemic perspective.
Overall though I sympathize with the concern that we will lose a bunch of credibility if we don’t get superhuman coders by 2027. Seems plausible that we should have lengthened the story despite the reasoning above.
When presenting predictions, forecasters always face tradeoffs regarding how much confidence to convey. A confident, precise forecast attracts attention and motivates action; adding many concrete details produces a compelling story that stimulates discussion, and it also yields falsifiable predictions. Emphasizing uncertainty avoids losing credibility when some parts of the story inevitably fail, prevents overconfidence, and encourages more robust strategies that can work across a range of outcomes. But I can’t think of any reason to give a confident, high-precision story that you don’t even believe in!
I’d be curious to hear more about what made you perceive our scenario as confident. We included caveats signaling uncertainty in a bunch of places, for example in “Why is it valuable?” and several expandables and footnotes. Interestingly, this popular YouTuber made a quip about how it seemed like we were adding tons of caveats everywhere.
I was imprecise (ha ha) with my terminology here: I should have talked only about a precise forecast rather than a confident one; I meant solely the attempt to highlight a single story about a single year. My bad. Edited the post.