The idea here is to “frown upon” groups that massively scale AI systems with little regard for safety.
Frowning upon groups which create new, large-scale models will do little if one does not address the wider economic pressures that cause those models to be created. Simply put, large models are useful. Google, Meta, OpenAI, etc., aren’t investing in tens or hundreds of thousands of GPUs because they think creating new models is fun. They’re doing it because these models serve customer needs. Frowning upon the research community for creating ever larger models will do little unless we can also “frown upon” all the people who use large models and demand ever larger, ever more useful ones.
For one, influencing culture is relatively cheap compared to conducting research or lobbying governments. Additionally, culture in a small research field can change quickly, much faster than policy can change or new technologies can be developed and deployed.
Only when there aren’t outside incentives keeping a particular set of cultural norms in place. In this case, there are, and those outside incentives are exceedingly strong.
Frowning upon groups which create new, large-scale models will do little if one does not address the wider economic pressures that cause those models to be created.
I agree that “frowning” can’t counteract economic pressures entirely, but it can certainly slow things down! If 10% of researchers refused to work on extremely large LMs, companies would have fewer workers to build them. These companies may find a workaround, but it’s still an improvement on the situation where all researchers are unscrupulous.
The part I’m uncertain about is: what percent of researchers need to refuse this kind of work to extend timelines by (say) 5 years? If it requires literally 100% of researchers to coordinate, then it’s probably not practical; if we only have to convince the single most productive AI researcher, then it looks very doable. I think the number could be smallish, maybe 20% of researchers at major AI companies, but that’s a wild guess.
That being said, work on changing the economic pressures is very important. I’m particularly interested in open-source projects that make training and deploying small models more profitable than using massive models.
On outside incentives and culture: I’m more optimistic that a tight-knit coalition can resist external pressures (at least for a short time). This is the essence of a coordination problem; it’s not easy, but Ostrom and others have identified examples of communities that coordinate in the face of internal and external pressures.
I think you’re greatly underestimating Karpathy’s Law. Neural networks want to work. Even pretty egregious programming errors (such as off-by-one bugs) will just cause them to converge more slowly, rather than failing entirely. We’re seeing rapid growth from multiple approaches, and when one architecture seems to have run out of steam, we find a half dozen others, initially abandoned as insufficiently promising, to be highly effective, if they’re tweaked just a little bit.
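To make that claim concrete, here is a minimal, hypothetical sketch (plain NumPy, a toy XOR-style dataset, and a deliberately planted bug, none of which come from the comment itself): a tiny one-hidden-layer network whose training loop has an off-by-one error that silently drops the last sample every epoch, yet it still reaches high accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)  # XOR-style labels: needs a hidden layer

# One hidden layer (tanh), sigmoid output, plain SGD.
W1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(x):
    h = np.tanh(x @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid probability
    return h, p

for epoch in range(200):
    # BUG (deliberate): off-by-one, so the last sample never contributes a gradient.
    for i in range(len(X) - 1):
        x, t = X[i:i + 1], y[i]
        h, p = forward(x)
        d_logit = p - t                        # gradient of BCE loss w.r.t. the logit
        dW2 = h.T @ d_logit
        db2 = d_logit.ravel()
        d_h = (d_logit @ W2.T) * (1 - h ** 2)  # backprop through tanh
        dW1 = x.T @ d_h
        db1 = d_h.ravel()
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1

_, p_all = forward(X)
print("train accuracy:", ((p_all.ravel() > 0.5) == y).mean())  # high despite the bug
```

The bug only makes the gradient estimate marginally noisier; nothing in the optimization punishes the mistake, which is roughly the sense in which networks “want to work.”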
In this kind of situation, nothing short of a total freeze is sufficient to slow progress by more than a couple of months. If Google shut down DeepMind and Google Brain entirely, Meta would pick up the slack. If Meta were shut down, then OpenAI would develop something. If OpenAI, Google, and Meta were all suddenly shuttered, then one of the Chinese firms, like Baidu, would start publishing scoops. Or perhaps someone like Stability.ai would come up with the next breakthrough innovation. When it comes to AI progress, we’re not even to the point of reaching for low-hanging fruit. We’re still picking up the fruit that’s fallen on the ground. The fact that we’ve made this much progress by doing, more or less, the absolute dumbest thing that could possibly work at every step makes me extremely pessimistic that it’s possible to slow down the pace of AI development in the near future via informal social coordination. There’s just too much easy money for that to work.
Why is AI research open by default? Surely Google or Meta could get a significant advantage from keeping their “secret sauce” of AI development secret. Are they only revealing tech demos to impress shareholders and keeping the actual bleeding-edge research secret?
This is exactly what I wish the “slow down, please” people would understand: there is no stop button, and there’s barely even a slow-down button. Trying to stop AI is an extremely low-impact way to intervene. If we wish to guide humanity’s children at all, we must teach them as they grow, because people grow up fast, even the ones that aren’t biological.
May there come a time soon when no self-or-other-desired memory of soul is forgotten.