Thank you Jacob for taking the time for a detailed reply. I will do my best to respond to your comments.

> > In 5 years compute will scale 2^(5÷0.5)=1024 times
>
> This is a nitpick, but I think you meant 2^(5*2)=1024
I actually wrote it the (5*2) way in my first draft of this post, then edited it to (5÷0.5), as this is [time frame in years]÷[length of cycle in years], which is technically less wrong. Source for the 6-month cycle: https://www.lesswrong.com/posts/sDiGGhpw7Evw7zdR4/compute-trends-comparison-to-openai-s-ai-and-compute. They conclude a doubling time of 5.7 months for the years 2012 to 2022, which I rounded to 6 months to make the calculation more clear. They also note that “OpenAI’s analysis shows a 3.4 month doubling from 2012 to 2018”.
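To make the arithmetic concrete, here is a minimal sketch (the 5.7-month figure is the unrounded estimate from the source above; everything else follows directly from the formula):

```python
# Projected compute growth: 2^(time frame / doubling time).
def compute_multiplier(years: float, doubling_time_years: float) -> float:
    """Scale factor after `years` if compute doubles every `doubling_time_years`."""
    return 2 ** (years / doubling_time_years)

print(compute_multiplier(5, 0.5))       # 1024.0 -- the rounded 6-month cycle used in the post
print(compute_multiplier(5, 5.7 / 12))  # ~1475  -- using the unrounded 5.7-month estimate
```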
> > In 5 years AI will be superhuman at most tasks including designing AI
>
> This kind of clashes with the idea that AI capability gains are driven mostly by compute. If “moar layers!” is the only way forward, then someone might say this is unlikely. I don’t think this is a hard problem, but I think it’s a bit of a snag in the argument.
I think this is one of the weakest parts of my argument, so I agree it is definitely a snag. The move from “superhuman at some tasks” to “superhuman at most tasks” is a bit of a leap. I also don’t think I clarified what I meant very well. I will update the claim to read “In 5 years AI, with ~1024 times the compute, will be superhuman at most tasks including designing AI”.
> > An AI will design a better version of itself and recursively loop this process until it reaches some limit
>
> I think you’ll lose some people on this one. The missing step here is something like “the AI will be able to recognize and take actions that increase its reward function”. There is enough of a disconnect between current systems and systems that would actually take coherent, goal-oriented actions that the point needs to be justified. Otherwise, it leaves room for something like a GPT-X that just outputs good AI designs when asked, but which doesn’t really know how to actively maximize its reward function beyond doing the normal sorts of things it was trained to do.
Would adding that suggested text to the previous argument step help? Perhaps: “The AI will be able to recognize and take actions that increase its reward function. Designing a better version of itself will increase that reward function.” But yeah, I tend to agree that there needs to be some sort of agentic clause in this argument somewhere.
> > Any such AI will be superhuman at almost all tasks, including computer security, R&D, planning, and persuasion
>
> I think this is a stronger claim than you need to make and might not actually be that well-justified. It might be worse than humans at loading the dishwasher because that’s not important to it, but if it were important, then it could do a brief R&D program in which it quickly becomes superhuman at dishwasher-loading. Maybe the distinction I’m making is pointless, but I guess I’m also saying that there are a lot of tasks it might not need to be good at if it’s good at things like engineering and strategy.
Would this be an improvement? “Any such AI will be superhuman, or able to become superhuman, at almost all tasks, including computer security, R&D, planning, and persuasion”
> Overall, I tend to agree with you. Most of my hope for a good outcome lies in something like “bots get stuck in a local maximum and produce useful superhuman alignment work before one of them bootstraps itself and starts ‘disempowering’ humanity”. I guess that relates to the thing I said a couple paragraphs ago about coherent, goal-oriented actions potentially not arising even as other capabilities improve.
I would speculate that most of our implemented alignment strategies would be meta-stable: they stay aligned only for a random amount of time. This would mean we mostly rely on strategies that hope we get x (useful alignment work) before we get y (a misaligned AI that bootstraps itself). Obviously this is a gamble.
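A toy way to see why it is a gamble (all rates below are invented purely for illustration, not estimates): if x and y arrive at exponentially distributed random times, the chance of getting x first depends only on the ratio of the two rates.

```python
import random

# Toy race between "useful alignment work" (x) and "misaligned
# bootstrap" (y), each arriving at an exponentially distributed
# random time. The rates are made up for illustration only.
def p_x_before_y(rate_x: float, rate_y: float, trials: int = 100_000) -> float:
    wins = 0
    for _ in range(trials):
        if random.expovariate(rate_x) < random.expovariate(rate_y):
            wins += 1
    return wins / trials

# Analytically, P(x first) = rate_x / (rate_x + rate_y).
print(p_x_before_y(1.0, 1.0))  # ~0.5: a coin flip even with equal rates
```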
I am less and less optimistic about this as research specifically designed to make bots more “agentic” continues. In my eyes, this is among the worst research there is.
I speculate that a lot of the x-risk probability comes from agentic models. I am particularly concerned with better versions of models like AutoGPT, which don’t have to be very intelligent themselves (so long as they are able to continuously ask GPT5+ how to act intelligently) to pose a serious risk.
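To illustrate the shape of that concern, here is a minimal sketch of an AutoGPT-style outer loop (`ask_model` and `execute` are hypothetical stubs, not real APIs): the loop itself contains essentially no intelligence, because all of the planning is delegated to the stronger model it queries.

```python
# Minimal sketch of an AutoGPT-style agent loop. `ask_model` and
# `execute` are hypothetical stand-ins, not real APIs: `ask_model`
# would call a strong language model, and `execute` would run the
# suggested action (shell command, web request, file write, etc.).
def ask_model(prompt: str) -> str:
    raise NotImplementedError("stand-in for a call to a strong LLM")

def execute(action: str) -> str:
    raise NotImplementedError("stand-in for actually taking the action")

def agent_loop(goal: str, max_steps: int = 100) -> None:
    history: list[tuple[str, str]] = []
    for _ in range(max_steps):
        # The outer loop only relays context and asks "what next?" --
        # the intelligence lives entirely in the model being queried.
        action = ask_model(
            f"Goal: {goal}\nHistory: {history}\n"
            "What single action should be taken next?"
        )
        result = execute(action)
        history.append((action, result))
```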
Meta question: how do I dig my way out of a karma grave when I can only comment once per hour and post once per 5 days?
Meta comment: I will reply to the other comments when the karma system allows me to.