There are lots of other failure mechanisms here:
AI labs that pause are throwing away their lead. That makes spending their stashed billions to pay employees not to advance capabilities a risky and ultimately defeatist move. Presumably each further advance in AI will cost more than the prior one, which is consistent with other industries and technologies.
The whole bottleneck here is that over the past five years, since the invention of the transformer, only a tiny number of human beings have had the opportunity to use large GPU clusters to experiment with AI. The reason “talent” is finite is that fewer than a few thousand people worldwide have ever had that access at all. This means that AI labs that pause and pay their talent not to work on capabilities are taking that finite talent off the market, but that will only be a bottleneck until tens of thousands of new people enter the field and learn it, which is what the market wants to happen.
RSI (recursive self-improvement) is a mechanism to substitute compute for talent, and ultimately to no longer need human talent at all. This could give less ethical labs that don’t pause a way around the fact that they only have ‘villains’ on their payroll.
National labs, national efforts. During the Cold War, tens of thousands of human beings worked eagerly to develop fusion-boosted nukes and load them onto ICBMs, despite the obvious existential risk (at least to their entire nation) they all knew they were contributing to. If history repeats, this is another mechanism for AI to be built by unethical parties.
There are so many ways for this to fail that success, an actual pause, is unlikely. In the private world of AI labs alone, without multiple major governments passing laws to restrict access to AI, it’s pretty obvious that a pause is probably not possible at all.