it’s quite plausible (40% if I had to make up a number, but I stress this is completely made up) that someday there will be an AI winter or other slowdown, and the general vibe will snap from “AGI in 3 years” to “AGI in 50 years”. when this happens it will become deeply unfashionable to continue believing that AGI is probably happening soonish (10-15 years), in the same way that suggesting that there might be a winter/slowdown is unfashionable today. however, I believe in these timelines roughly because I expect the road to AGI to involve both fast periods and slow bumpy periods. so unless there is some super surprising new evidence, I will probably only update moderately on timelines if/when this winter happens
also a lot of people will suggest that alignment people are discredited because they all believed AGI was 3 years away, because surely that’s the only possible thing an alignment person could have believed. I plan on pointing to this and other statements similar in vibe that I’ve made over the past year or two as direct counterevidence against that
(I do think a lot of people will rightly lose credibility for having very short timelines, but I think this includes a big mix of capabilities and alignment people, and I think they will probably lose more credibility than is justified because the rest of the world will overupdate on the winter)
My timelines are roughly 50% probability on something like transformative AI by 2030, 90% by 2045, and a long tail afterward. I don’t hold this strongly either, and my views on alignment are mostly decoupled from these beliefs. But if we do get an AI winter longer than that (through means other than government intervention, which I haven’t accounted for), I should lose some Bayes points, and it seems worth saying so publicly.
to be clear, a “winter/slowdown” in my typology is more about the vibes and could be only a few years of counterfactual slowdown. like the dot-com crash didn’t take that long for companies like Amazon or Google to recover from, but it was still a huge vibe shift
also to further clarify, this is not an update I’ve made recently, I’m just making this post now as a regular reminder of my beliefs because it seems good to have a record of this kind of thing (though everyone who has heard me ramble about this irl can confirm I’ve believed something like this for a while now)
I was someone who had shorter timelines. At this point, most of the concrete part of what I expected has happened, but the “actually AGI” thing hasn’t. I’m not sure how long the tail will turn out to be. I only say this to get it on record.
If you keep updating such that you always “think AGI is <10 years away” then you will never work on things that take longer than 15 years to help. This is absolutely a mistake, and it should at least be corrected after the first round of “let’s not work on things that take too long because AGI is coming in the next 10 years”. I will definitely be collecting my Bayes points https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce
Does it seem likely to you that, conditional on ‘slow bumpy period soon’, a lot of the funding we see at frontier labs dries up (so there’s kind of a double slowdown effect of ‘the science got hard, and also now we don’t have nearly the money we had to push global infrastructure and attract top talent’), or do you expect that frontier labs will stay well funded (either by leveraging low hanging fruit in mundane utility, or because some subset of their funders are true believers, or a secret third thing)?
My guess is that for now, I’d give around a 10-30% chance to “AI winter happens for a short period/AI progress slows down” by 2027.
Also, what would you consider super surprising new evidence?
What do you think would be the cause(s) of the slowdown?