This is like having to deal with nuclear proliferation, but if the laws of nature allowed everyone to make an atomic pipe bomb by using rocks you can pick up from the ground.
This is hiding a lot of work, and if it's interpreted in the most extreme way possible, I think it's at best maybe true, and possibly simply false.
And even if it is true, it’s not going to be exploited immediately, and there will be lag time that matters.
Also importantly, because of pretty big issues with how they reason, LLMs probably aren't going to scale to existential risk quickly unless our world is extremely vulnerable, which adds further lag time.
A basic disagreement I have with this post and with many rationalist worldviews, including your worldview here, dr_s, is that I believe the following statement from the post is either simply false, true but with more limitations than rationalists think, or true but taking a lot longer to materialize than people here think. That matters, because we can probably regulate things pretty well as long as the threat isn't too fast-coming:
My personal bet, however, is that offense will unfortunately trump defense.
Ok, so I may have come off as too pessimistic there. Realistically, I don't think AGI will actually be something you can achieve on your gaming laptop in a few days of training just yet, or any time soon. So maybe my metaphor should have been different, but it's hard to give the right sense of scale. The Manhattan Project required quite literally all of the industrial might of the US. This is definitely smaller, though perhaps not do-it-in-my-basement smaller.

I do generally agree that there are things we can do, and at the very least they're worth trying! That said, I still think that even the things that work are somewhat too restrictive for my tastes, and I'm also worried that, as always happens, they'll lead to politicians overreaching. My ideal world would be one in which big AI labs are stifled on creating AGI specifically, specialised AI is left untouched, open source software for lesser applications is left untouched, and maybe we only monitor large-scale GPU hoarding. But I doubt it'd be that simple. So that's what I find bleak: we're forced into a choice between risk of extinction and risk of oppression, when we wouldn't have to be if people didn't insist on trying to open this specific Pandora's box.
That's definitely progress. I think the best thing AI regulation efforts can do right now is look to the future, and in particular prepare draft plans for AI regulation, so that if or when the next crisis hits, we won't be fumbling for solutions and will instead have good AI regulations back in the running.
Agree that those drafts are very important. I also think technical research will be required to find out which regulation would actually be sufficient (I think at present we have no idea). I disagree, however, that waiting for a crisis (a warning shot) is a good plan. There might not really be one. If there is one, though, I agree that we should at least be ready.
True that we probably shouldn't wait for a crisis, but one thing that does stand out to me is that the biggest issue wasn't political will, but rather that AI governance was pretty unprepared for this moment (though it improvised surprisingly effectively).