The scenario I am most concerned about is a strongly multipolar Malthusian one. There is some chance (maybe even a fair one) that a singleton ASI decides, or an oligopoly of ASIs rigorously coordinates, to preserve the biosphere—including humans—at an adequate or even superlative level of comfort or fulfillment, or to help them ascend themselves, whether out of ethical considerations, for research purposes, or for simulation/karma type considerations.
In a multipolar world of gazillions of AIs at Malthusian subsistence levels, none of that matters by default. Individual AIs can be as ethical or empathic as they come, even much more so than any human. But keeping the biosphere around would be a luxury, and any that try to do so will be outcompeted by more unsentimental, economical ones. A farm that can feed a dozen people, or an acre of rainforest that can support x species, can instead support a trillion AIs if converted to high-efficiency solar panels.
The second scenario is near-certain doom, so at a bare minimum we should get a good inkling of whether the AI world is more likely to be unipolar or oligopolistic, or massively multipolar, before proceeding. So a pause is indeed needed, and the most credible way of effecting it is a hardware cap and a subsequent walking back of compute power. (Roko has good ideas on how to go about that and should develop them further here and on his Substack.) Granted, if anthropic reasoning is valid, geopolitics might well soon do the job for us. 🚀💥
Frontier LLM performance on offline IQ tests is improving at perhaps 1 S.D. per year, and the rate may recently have accelerated. These tests are a good measure of human general intelligence. One more such jump and there will be PhD-tier assistants for $20/month. At that point, I expect any lingering problems with invoking autonomy to be quickly fixed as human-led AI research acquires a vast multiplier through these assistants, with AI research becoming fully automated a few months later.