I think it’s important to be careful with elaborately modelled reasoning about this kind of thing, because the second-order political effects are very hard to predict but are also likely to be extremely important, possibly even more important than the direct effect on timelines in some scenarios.
For instance, you mention leading labs slowing down as bad (because the leading labs are ‘safety-conscious’ and slowing down dilutes their lead). In my opinion, this is a very simplistic model of the likely effects of this intervention, for a few reasons:
Taking drastic unilateral action creates new political possibilities. A good example is Hinton and Bengio ‘defecting’ to advocating strongly for AI safety in public; I think this has had a huge effect on how seriously ML researchers and governments take these issues, even though the direct effect on AI research is probably negligible. Hinton in particular made me personally take a much more serious look at AI safety arguments, which has influenced me to try to re-orient my career in a more safety-focused direction. I find it implausible that a leading AI lab shutting itself down for safety reasons would have no second-order political effects along these lines, even if the direct impact were small: if there’s one lesson I would draw from COVID and the last year or so of AI discourse, it’s that the Overton window is much more mobile than people often think. A dramatic intervention like this would obviously have uncertain outcomes, but it could open up unforeseen possibilities.

Unilateral action that disadvantages the actor also makes a political message much more powerful. There’s a lot of skepticism when labs like Anthropic talk loudly about AI risk, because of the objection ‘if it’s so bad, why are you making it?’. While there are technical arguments that there are good reasons to work on safety and AI development simultaneously, this objection makes the message much harder to communicate, and people will understandably have doubts about your motives.
‘We can’t slow down because someone else will do it anyway’ - I actually think this is probably wrong: in a counterfactual world where OpenAI didn’t throw lots of resources and effort into language models, I’m not sure anyone else would have bothered to continue scaling them, at least not for many years. Research is not a linear process, and a field being unfashionable can delay progress by a considerable amount; just look at the history of neural network research! I remember many people in academia being extremely skeptical of scaling laws around the time they were published; if OpenAI hadn’t pushed on them, it could have taken years or even decades for another lab to throw enough resources at that hypothesis, had it become unfashionable for whatever reason.
I’m not sure it’s always true that other labs catch up if the leading ones stop: progress also isn’t a simple function of time. Without people trying to scale massive GPU clusters, you don’t get practical experience with the kinds of problems such systems have, production lines don’t re-orient themselves towards the needs of such systems, and so on. There are important feedback loops in this kind of process, such as attracting more talent and enthusiasm into the field, that the big labs shutting down could disrupt. It’s also not true that all ML research is a monolithic line towards ‘more AGI’ - from my experience of academia, many researchers would have quite happily worked on small specialised systems in a variety of domains for the rest of time.
I think many of these points also apply to the arguments against a ‘US moratorium now’ - for instance, it’s much easier to get other countries to listen to you if you take unilateral action, because doing so is a costly signal that you are serious.
This isn’t necessarily to say that I think a US moratorium or a leading lab shutting down would actually be useful, just that I don’t think it’s cut and dried that it wouldn’t be. Consider what would happen if a leading lab actually did shut itself down - would there really be no political consequences with a serious effect on the development of AI? I think your argument makes a lot of sense if we are considering ‘spherical AI labs in a vacuum’, but I’m not sure that’s how it would play out in reality.
This is good, thanks. In brief reply to your bullets:
Yeah, agree; this seems complicated.
I agree that progress isn’t inevitable. To some extent it’s fine if you do the thing but don’t publish your research; but to some extent ideas leak.
I think LLMs are now sufficiently promising that if DeepMind, OpenAI, and Anthropic disappeared, the field would be set back a year or two but other labs would take their place.