How would something like OmegaStar be transformative?
How would something like GPT-7 be transformative?
(I think your kind of thinking is currently the best way to do timelines, and I buy that we could very likely do fun stuff with +12 OOMs. But I don’t understand through what mechanism these systems would be transformative, and that seems really important to think about carefully so we can pin down related probabilities, like the probability that a GPT-style system/bureaucracy trained with 10^30 FLOP would be transformative.)
I think OmegaStar and Amp(GPT-7) would be APS-AI: advanced, planning, strategically aware (as defined in the Carlsmith report). I think they’d be capable of playing the training game and deceiving humans into thinking they are aligned/trustworthy and/or incapable of doing truly dangerous things. I think they’d probably have powerbase ability.
As for economic impact, I think they’d be able to automate a significant fraction of the economy and, in particular, accelerate AI R&D significantly. (Once they, variations of them, and things they built were deployed suitably widely, that is.)
This is just a brief answer, of course; I imagine you want more. But could you perhaps explain which part of the above you disagree with, and why?
Thanks very much!
I don’t really disagree—I just feel uncertain. I’d struggle to tell a story about how (e.g.) OmegaStar could “automate a significant fraction of the economy” or “accelerate AI R&D significantly” or take control of a leading AI lab (powerbase ability). To be more precise, I feel deeply uncertain about how agent-y OmegaStar would be, how capable it would be, and the kinds of actions that it could use to take over (if it wanted to and was sufficiently capable).
I can’t think of much written on this. I think it could be a great use of your time to write about how AI systems based on game-playing or token-prediction could be economically important, matter for AI R&D, and have powerbase ability, as well as what signs we might see along the way. (This might easily fit into a continuation of What 2026 looks like, or maybe not.)
I agree this is something valuable I could do; I just haven’t prioritized it, alas. I did say I wanted to write the sequel to that story at some point...
I encourage other people to think about it too. I suggest reasoning as follows:

1. What skills/properties would combine to create APS-AI? Agency, reasoning, situational awareness… maybe persuasion, deception, hacking… etc.
2. Does the training process of OmegaStar (or whatever) incentivize the growth of those skills? That is, does developing those skills help the network perform better in that training process, get reinforced, etc.?
3. Is the model big enough that it could in principle come to develop those skills?
4. Is the model trained long enough that it likely will in practice come to have those skills?
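To make that checklist concrete, here is a toy sketch in Python that encodes questions (1) through (4) as a crude screening function. Everything in it is a made-up illustration: the skill list, the scale thresholds, and the numbers for the OmegaStar-like setup are hypothetical placeholders, not claims about any real system, and the real questions are qualitative and far harder than a threshold check.

```python
# Toy sketch of the four-question APS-AI checklist. All names,
# thresholds, and numbers below are hypothetical illustrations.
from dataclasses import dataclass

# (1) Skills/properties that might combine to create APS-AI.
APS_SKILLS = ["agency", "reasoning", "situational awareness",
              "persuasion", "deception", "hacking"]

@dataclass
class TrainingSetup:
    name: str
    incentivized_skills: set[str]  # (2) skills the training process rewards
    params: float                  # (3) crude proxy for capacity in principle
    train_flop: float              # (4) crude proxy for "trained long enough"

def aps_screen(setup: TrainingSetup,
               min_params: float = 1e12,
               min_flop: float = 1e28) -> list[str]:
    """Return the APS-relevant skills the setup both incentivizes
    and plausibly has the scale to develop."""
    incentivized = [s for s in APS_SKILLS if s in setup.incentivized_skills]
    big_enough = setup.params >= min_params        # question (3)
    trained_enough = setup.train_flop >= min_flop  # question (4)
    return incentivized if (big_enough and trained_enough) else []

# Usage, with invented numbers for an OmegaStar-like setup:
omegastar = TrainingSetup(
    name="OmegaStar (hypothetical)",
    incentivized_skills={"agency", "reasoning", "persuasion"},
    params=1e13,
    train_flop=1e30,
)
print(aps_screen(omegastar))  # ['agency', 'reasoning', 'persuasion']
```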