AI x-risk is not far off at all; it’s something like 4 years away, IMO.
Can I ask where this four-year number is coming from? It was also stated prominently in the new ‘superalignment’ announcement (https://openai.com/blog/introducing-superalignment). Is this an agreed-upon median timeline at OAI? Is there an explicit plan to build AGI in four years? Is there strong evidence behind this view—i.e., do you think you know explicitly how to build AGI, and that it will just take four more years of compute/scaling?
Sure. First of all, disclaimer: this is my opinion, not that of my employer. (I’m not supposed to say what my employer thinks.) Yes, I think I know how to build AGI. Lots of people do. The difficult innovations are already behind us; now it’s mostly a matter of scaling. And there are at least two huge corporate conglomerates in the process of doing so (Microsoft+OpenAI and Alphabet+Google DeepMind).
There’s a lot to say on the subject of AGI timelines. For miscellaneous writings of mine, see AI Timelines—LessWrong. But for the sake of brevity I’d recommend (1) the “Master Argument” I wrote in 2021, after reading Ajeya Cotra’s Bio Anchors Report, which lays out a way (credit to Ajeya) to manage one’s uncertainty about AI timelines by breaking it down into three parts: uncertainty about the compute ramp-up, uncertainty about how much compute would be needed to build AGI using today’s ideas, and uncertainty about the rate at which new ideas will arrive that reduce the compute requirements. This gives you soft upper bounds and hard lower bounds on where your probability mass can sit; you can then argue about how the mass should be distributed between those bounds, and look empirically at how fast compute is ramping up and how fast compute-saving ideas are arriving. (A toy numerical sketch of this decomposition follows at the end of this comment.)
And (2) I’d recommend doing the following exercise: think of what skills a system would need to have in order to constitute AGI. (I’d recommend being even more specific and asking what skills are necessary to massively accelerate AI R&D, and what skills are necessary to have a good shot at disempowering humanity.) Then think about how you’d design a system with those skills today, if you were in charge of OpenAI and that was what you wanted to do for some reason. What skills are missing from, e.g., AutoGPT-4? Can you think of any ways to fill in those gaps? When I do this exercise, the conclusion I come to is “Yeah, it seems like there probably isn’t any fundamental blocker here; we basically just need more scaling in various dimensions.” I’ve specifically gone around interviewing people who have longer timelines and asking them what blockers they think exist—what skills they think are necessary for AI R&D acceleration AND takeover, but will not be achieved by AI systems in the next ten years—and I’ve not been satisfied with any of the answers.
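For concreteness, here is a minimal Monte Carlo sketch of how the three uncertainties from (1) might combine into a distribution over AGI arrival years. The distributions and constants in it are placeholders chosen purely for illustration, not estimates from the Bio Anchors Report or the Master Argument:

```python
# Toy Monte Carlo version of the three-way decomposition described in (1).
# Every distribution and constant below is a made-up placeholder, NOT an
# estimate from the Bio Anchors Report or the "Master Argument"; the point
# is only to show how the three uncertainties combine into a distribution
# over AGI arrival years.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# (a) How much compute would AGI need using today's ideas? log10(FLOP),
#     with a deliberately wide spread.
log_flop_needed = rng.normal(loc=30.0, scale=3.0, size=n)

# (b) Compute ramp-up: growth of the largest training run, in orders of
#     magnitude per year.
compute_growth_oom_per_year = rng.uniform(0.3, 0.7, size=n)

# (c) Algorithmic progress: how fast new ideas reduce the compute needed,
#     also in orders of magnitude per year.
algo_progress_oom_per_year = rng.uniform(0.1, 0.5, size=n)

# Rough placeholder for the largest training run today, log10(FLOP).
log_flop_available_now = 25.5

# Years until effective compute (physical ramp-up plus savings from new
# ideas) crosses the requirement, per sample.
gap_oom = np.maximum(log_flop_needed - log_flop_available_now, 0.0)
years_to_agi = gap_oom / (compute_growth_oom_per_year + algo_progress_oom_per_year)
arrival_year = 2023 + years_to_agi

print("median arrival year:", round(float(np.median(arrival_year)), 1))
print("P(arrival by 2027):", round(float(np.mean(arrival_year <= 2027)), 2))
print("P(arrival by 2033):", round(float(np.mean(arrival_year <= 2033)), 2))
```

The point of the decomposition is that you argue about each of the three inputs separately, then see what they jointly imply; shifting any one of them shifts the whole arrival-year distribution.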