Another important problem is that while x-risk is speculative and relatively far off, rent-seeking and exploitation are rampant and ever-present. These regulations will make the current ailing politico-economic system much worse, to the detriment of almost everyone. Historically, giving tribute in exchange for safety has usually been a terrible idea.
AI x-risk is not far off at all, it’s something like 4 years away IMO. As for “speculative...” that’s not an argument, that’s an epithet.
I was trained in analytic philosophy, and then I got lots of experience thinking about AI risks of various kinds and trying to predict the future in other ways too (e.g. the war in Ukraine, the future of warfare assuming no AI). I do acknowledge that it’s sometimes valid to add lots of uncertainty to a topic on the grounds that the current discussion of that topic is speculative, as opposed to mathematically rigorous or empirically verified, etc. But I feel like people are playing this card inappropriately if they think that AGI might happen this decade but that AI x-risk is dismissible on grounds of being speculative. If AGI happens this decade the risks are very much real and valid and should not be dismissed, certainly not for such a flimsy reason.
AI x-risk is not far off at all, it’s something like 4 years away IMO
Can I ask where this four-years number is coming from? It was also stated prominently in the new ‘superalignment’ announcement (https://openai.com/blog/introducing-superalignment). Is this an agreed-upon median timeline at OAI? Is there an explicit plan to build AGI in four years? Is there strong evidence behind this view, i.e. do you think you know explicitly how to build AGI and that it will just take four more years of compute/scaling?
Sure. First of all, a disclaimer: this is my opinion, not that of my employer. (I’m not supposed to say what my employer thinks.) Yes, I think I know how to build AGI. Lots of people do. The difficult innovations are already behind us; now it’s mostly a matter of scaling. And there are at least two huge corporate conglomerates in the process of doing so (Microsoft+OpenAI and Alphabet+GoogleDeepMind).
There’s a lot to say on the subject of AGI timelines. For miscellaneous writings of mine, see AI Timelines—LessWrong. But for the sake of brevity I’d recommend (1) the “Master Argument” I wrote in 2021, after reading Ajeya Cotra’s Bio Anchors report, which lays out a way to manage one’s uncertainty about AI timelines (credit to Ajeya) by breaking it down into uncertainty about the compute ramp-up, uncertainty about how much compute would be needed to build AGI using the ideas of today, and uncertainty about the rate at which new ideas will come along that reduce the compute requirements. You can get soft upper bounds and hard lower bounds on your probability mass, argue about how it should be distributed in between those bounds, and then look empirically at the rate of compute ramp-up and of new ideas coming along that reduce compute costs. (A toy numerical version of this decomposition is sketched below, after (2).)
And (2) I’d recommend doing the following exercise: think of what skills a system would need to have in order to constitute AGI. (I’d recommend being even more specific, and asking what skills are necessary to massively accelerate AI R&D, and what skills are necessary to have a good shot at disempowering humanity.) Then think about how you’d design a system to have those skills today, if you were in charge of OpenAI and that was what you wanted to do for some reason. What skills are missing from e.g. AutoGPT-4? Can you think of any ways to fill in those gaps? When I do this exercise, the conclusion I come to is “Yeah, it seems like probably there isn’t any fundamental blocker here; we basically just need more scaling in various dimensions.” I’ve specifically gone around interviewing people who have longer timelines and asking them what blockers they think exist: what skills they think are necessary for AI R&D acceleration AND takeover, but will not be achieved by AI systems in the next ten years. I’ve not been satisfied with any of the answers.
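For concreteness, here is a minimal Monte Carlo sketch of the decomposition in (1). All the numbers and distributions below are placeholder assumptions for illustration only, not Ajeya’s estimates or anyone’s actual forecast; the point is just to show how the three uncertainties combine into a distribution over arrival years.

```python
# Toy Monte Carlo over the three uncertainties in the bio-anchors-style
# decomposition: compute ramp-up, compute needed for AGI with today's ideas,
# and the rate of compute-saving algorithmic progress.
# All distributions are illustrative placeholders, not real forecasts.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                  # Monte Carlo samples
current_year = 2023
current_flop = 1e25          # assumed training compute of today's frontier runs (FLOP)

# Uncertainty 1: compute ramp-up, in orders of magnitude (OOM) of training
# compute added per year, sampled lognormally around ~0.2 OOM/yr.
compute_growth = rng.lognormal(mean=np.log(0.2), sigma=0.5, size=N)

# Uncertainty 2: log10 of the training compute needed for AGI using today's
# ideas; a wide distribution centered a few OOM above the current frontier.
agi_flop_today = rng.normal(loc=29.0, scale=3.0, size=N)

# Uncertainty 3: algorithmic progress, in OOM of effective-compute savings
# per year, sampled lognormally around ~0.1 OOM/yr.
algo_progress = rng.lognormal(mean=np.log(0.1), sigma=0.5, size=N)

# Years until compute growth plus algorithmic savings close the gap.
gap_oom = np.maximum(agi_flop_today - np.log10(current_flop), 0)
years_to_agi = gap_oom / (compute_growth + algo_progress)

for horizon in (4, 7, 17):
    p = (years_to_agi <= horizon).mean()
    print(f"P(AGI by {current_year + horizon}) ~= {p:.0%}")
```

The soft-upper-bound / hard-lower-bound framing then amounts to plugging conservative extremes into these three inputs and seeing how much of the resulting probability mass can land within a given horizon.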
The prior is that dangerous AI will not happen in this decade. I have read a lot of arguments here for years, and I am not convinced that there is a good chance that the null hypothesis is wrong.
GPT-4 can be said to be an AGI already. But it’s weak, it’s slow, it’s expensive, it has little agency, and it has already used up high-quality data and tricks such as ensembling. Four years from now, I expect to see a GPT-5.5 whose gap with GPT-4 will be about the same as the gap between GPT-4 and GPT-3.5. I absolutely do not expect the context-window problem to get solved in this timeframe or even this decade. (https://arxiv.org/abs/2307.03172)
If AGI happens this decade the risks are very much real and valid and should not be dismissed, certainly not for such a flimsy reason.
Especially considering that the risks people regard as near-term, which we can expect to become more and more visible and present, will likely shift the landscape with regard to taking AI x-risk seriously. I posit that x-risk won’t remain speculative for long, on roughly the same timeline you gave.