I’ve been arguing for 2027-ish AGI for several years now. I do somewhat fall into the annoying category of refusing to give my full reasons for believing this (publicly). I’ve had some more in-depth discussions about it privately.
One argument I have been making publicly is that I think Ajeya’s Bioanchors report greatly overestimated human brain compute. I think a more careful reading of Joe Carlsmith’s report, which hers was based on, supports my own estimate of around 1e15 FLOP/s.
Connor Leahy makes some points I agree with in his recent Future of Life interview. https://futureoflife.org/podcast/connor-leahy-on-why-humanity-risks-extinction-from-agi/
Another very relevant point is that recent research on the human connectome shows that long-range connections (particularly between regions of the cortex) are lower bandwidth than was previously thought. Examining this bandwidth in detail leads me to believe that efficient decentralized training should be possible. Even accounting for training a human-brain-equivalent model requiring 10,000x parallel brain-equivalents to finish in a reasonable time, the current levels of internet bandwidth between datacenters worldwide should be more than sufficient.
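To show the shape of the claim, here is a minimal back-of-envelope sketch. The brain-compute figure and the parallelism factor match the numbers above; the long-range-traffic fraction is purely an illustrative assumption (the actual connectome bandwidth figures aren’t reproduced here), so treat the output as a template for the calculation rather than a result.

```python
# Back-of-envelope sketch: cross-datacenter bandwidth needed for decentralized
# training of a brain-like model. The traffic fraction below is an illustrative
# assumption, not a figure from the connectome research mentioned above.
BRAIN_FLOPS = 1e15          # assumed human-brain compute, FLOP/s (the estimate above)
PARALLEL_COPIES = 10_000    # parallel brain-equivalents assumed for a reasonable training time

# Hypothetical analogue of low-bandwidth long-range cortical connections:
# bytes that must cross datacenter boundaries per FLOP of local compute.
LONG_RANGE_BYTES_PER_FLOP = 1e-6  # pure assumption, for illustration only

bytes_per_sec = BRAIN_FLOPS * PARALLEL_COPIES * LONG_RANGE_BYTES_PER_FLOP
print(f"Cross-datacenter traffic: {bytes_per_sec / 1e9:,.0f} GB/s "
      f"(~{bytes_per_sec * 8 / 1e12:,.0f} Tbit/s)")
```

Whether the resulting number fits within worldwide inter-datacenter bandwidth depends entirely on what the real long-range fraction turns out to be; the point is only that a low fraction makes the requirement tractable.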
Thus, my beliefs are strongly pointing towards: “with the right algorithms we will have more than good enough hardware and more than sufficient data. Also, those algorithms are available to be found, and are hinted at by existing neuroscience data.” So, with AI-accelerated R&D on algorithms, we should expect rapid progress on peak capabilities and efficiency which doesn’t plateau at human peak capability or human operation speed: super-fast and super-smart AGI within a few months of full AGI, and rapidly increasing speeds of progress leading up to AGI.
If I’m correct, then the period of time from 2026 to 2027 will contain as much progress on generally intelligent systems as all of history leading up to 2026. ASI will thus be possible before 2028.
Only social factors (e.g. massively destructive war or unprecedented international collaboration on enforcing an AI pause) will change these timelines.
Further thoughts here: A path to human autonomy
Lots of disagree votes, but no discussion. So annoying when that happens.
Propose a bet! Ask for my sources! Point out a flaw in my reasoning! Don’t just disagree and walk away!
Feeding this norm creates friction and filters the evidence elicited by agreement-voting. If there is a sense that a vote needs to be explained, it often won’t be cast.
Agree. I do think it is annoying, but allowing people to do that is quite crucial for the integrity of the voting system.
I’m not interested in modifying the site to prevent this, or even really modifying the underlying norm (much). I guess I’m just complaining on my own behalf, making an ask of the community. And the result was that the squeaky wheel got the grease: I got the thoughtful engagement I was hoping for.
So, in conclusion, I’ll try to frame my future pleas for engagement in a way which doesn’t imply I think the norm of voting-without-commenting is bad.
I feel like a bet is fundamentally unfair here because in the cases where I’m wrong, there’s a high chance that I’ll be dead anyway and don’t have to pay. The combination of long timelines but high P(doom|AGI soon) means I’m not really risking my reputation/money in the way I’m supposed to with a bet. Are you optimistic about alignment, or does this asymmetry not bother you for other reasons? (And I don’t have the money to make a big bet regardless.)
Great question! Short answer: I’m optimistic about muddling through with partial alignment combined with AI control and AI governance (limiting peak AI capabilities, global enforcement of anti-rogue-AI, anti-self-improving-AI, and anti-self-replicating-weapons laws). See my post “A Path to Human Autonomy” for more details.
I also don’t have money for big bets. I’m more interested in mostly-reputation-wagers about the very near future, so that I might get my reputational returns in time for them to pay off in respectful-attention-from-powerful-decisionmakers, which in turn I would hope might pay off in better outcomes for me, my loved ones, and humanity.
If I am incorrect, then I want to not be given the ear of decision makers, and I want them to instead pay more attention to someone with better models than me. Thus, it seems to me like a fairly win-win situation to be making short-term reputational bets.
Gotcha. I’m happy to offer 600 of my reputation points vs. 200 of yours on your description of 2026-2028 not panning out. (In general if it becomes obvious[1] that we’re racing toward ASI in the next few years, then people should probably not take me seriously anymore.)
well, so obvious that I agree, anyway; apparently it’s already obvious to some people.
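As a quick aside on what stakes like these imply, here is a minimal sketch of the break-even arithmetic (my own framing, treating reputation points as a simple wager, which the thread itself doesn’t do).

```python
# Break-even probability for an asymmetric wager (illustrative arithmetic only).
stake_offered = 600   # points the offerer puts up
stake_accepted = 200  # points the acceptor puts up

# The acceptor comes out ahead in expectation iff their probability of winning
# exceeds stake_accepted / (stake_offered + stake_accepted).
break_even = stake_accepted / (stake_offered + stake_accepted)
print(f"Acceptor's break-even probability: {break_even:.0%}")  # 25%
```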
I’ll happily accept that bet, but maybe we could also come up with something more specific about the next 12 months?
Example: https://manifold.markets/MaxHarms/will-ai-be-recursively-self-improvi
Not that one; I would not be shocked if this market resolves Yes. I don’t have an alternative operationalization on hand; it would have to be about AI doing serious intellectual work on real problems without any human input. (My model permits AI to be very useful in assisting humans.)
Hmm, yes. I agree that there’s something about self-guiding/self-correcting on complex, lengthy, open-ended tasks where current AIs seem at near-zero performance.
I do expect this to improve dramatically in the next 12 months. I think this current lack is more about limitations in the training regimes so far, rather than limitations in algorithms/architectures.
Contrast this with the difficulty of ARC-AGI, which seems like maybe an architecture weakness?
Can we bet karma?
Edit: sarcasm
I think the algorithmic progress is doing some heavy lifting in this model. I think if we had a future textbook on AGI we could probably build one, but AI is kinda famous for minor and simple things just not being implemented despite all the parts being there.
See ReLU activations and sigmoid activations.
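To spell out the example: a minimal sketch (my own, using NumPy) of how trivially simple ReLU is next to sigmoid, and why its non-saturating gradient matters, even though it took years to displace sigmoid as the default activation.

```python
# Minimal illustration: ReLU is a one-liner, yet adoption lagged for years.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(0.0, x)

x = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
# Sigmoid's gradient sigmoid(x) * (1 - sigmoid(x)) shrinks toward 0 for large |x|
# (the vanishing-gradient problem); ReLU's gradient is simply 1 for positive inputs.
print(sigmoid(x) * (1 - sigmoid(x)))  # tiny at x = +/-5
print((x > 0).astype(float))          # 0 or 1
```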
If we’re bottlenecked on algorithms alone, is there a reason that isn’t a really bad bottleneck?
See my other response to Raphael elsewhere in this comment thread.
My model is that the big AI labs are currently throttling algorithmic progress by choosing to devote their resources to scaling.
If scaling leads to AGI, we get AGI soon that way. (I give this about 20% chance.)
If scaling doesn’t lead to AGI, then refocusing resources on experimentation seems like a natural next move. (I think this is about 80% likely to work in under two years if made a major focus of resources, including both giving human researchers the time, encouragement, and compute resources they need, and developing increasingly helpful AI researchers.)
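One naive way to combine these two estimates, assuming the branches are exhaustive and the conditional probability is as stated (my arithmetic, not a claim made explicitly above):

```python
# Naive combination of the two stated estimates (illustrative arithmetic only).
p_scaling_works = 0.20                # P(scaling alone reaches AGI)
p_algorithms_work_given_not = 0.80    # P(algorithm refocus works in ~2 years | scaling fails)

p_agi_soon = p_scaling_works + (1 - p_scaling_works) * p_algorithms_work_given_not
print(f"Implied P(AGI within a few years under this model): {p_agi_soon:.0%}")  # 84%
```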
Hmm, mixed agree/disagree. Scale probably won’t work, algorithms probably would, but I don’t think it’s going to be that quick.
Namely, I think that if a company struggling with fixed capital costs could accomplish much more, much quicker, using just the salary expenses of the top researchers they already have, they’d have done it, or at least given it a good try.
I’m at 5 percent that a serious switch to algorithms would result in AGI in 2 years. You might be better read than me on this, so I’m not quite taking side bets right now!
Am I getting things mixed up, or isn’t that just exactly Ajeya’s median estimate? Quote from the report: “Under this definition, my median estimate for human brain computation is ~1e15 FLOP/s.”
https://docs.google.com/document/d/1IJ6Sr-gPeXdSJugFulwIpvavc0atjHGM82QjIfUSBGQ/edit