CEO at Redwood Research.
AI safety is a highly collaborative field: almost all the points I make were either explained to me by someone else or developed in conversation with other people. Rather than repeating "these ideas were developed in collaboration with various people" in every comment, I'm saying it once here, for the record: the ideas I present were almost entirely not developed by me in isolation.
(Obviously I’m biased here by being friends with Ajeya.) This is only tangentially related to the main point of the post, but I think you’re really overstating how many Bayes points you get against Ajeya’s timelines report. Ajeya gave 15% to AGI before 2036, with little of that probability in the first few years after her report; maybe she’d have said 10% for the window between 2025 and 2036.
I don’t think you’ve ever made concrete predictions publicly (which makes it worse behavior, in my view, for you to criticize people for their predictions), but I also don’t think there are many groups who would have put wildly higher probability on AGI arriving in this particular window. (I think some of the short-timelines people at the time put substantial mass on AGI arriving by now, which reduces their score.) Maybe some of them would have said 40%? If we condition on AGI arriving by 2036, that’s a couple of bits of better performance, but I don’t think it’s massive outperformance. (And I still think it’s plausible that AGI isn’t developed by 2036!)
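For concreteness, here’s a rough sketch of the arithmetic behind “a couple of bits,” using the hypothetical 10% (Ajeya) and 40% (short-timelines forecaster) figures above and conditioning on AGI actually arriving in the 2025–2036 window:

$$\log_2 \frac{0.40}{0.10} = \log_2 4 = 2 \text{ bits}$$

And if AGI doesn’t arrive by 2036, the comparison runs the other way: $\log_2 \frac{0.90}{0.60} \approx 0.58$ bits in favor of the lower forecast.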
In general, I think disagreements about AI timelines often look more extreme when you summarize each person’s view by their median arrival date rather than by their probability of AGI by a particular date: two forecasters whose medians differ by a decade can still assign fairly similar probabilities to AGI arriving within, say, the next ten years.