I found myself replying to Paul’s arguments in private conversations repeatedly over the last months, to the point that I decided to write it up as a fiery comment.
The hardware overhang argument has poor grounding.
Labs scaling up their models results in more investment in producing more GPU chips with more FLOPs (see Sam Altman’s play for the UAE chip factory) and less latency between them (see the EA start-up Fathom Radiant, which started out offering fibre-optic-connected supercomputers to OpenAI and has now probably shifted to Anthropic).
The increasing levels of model combinatorial complexity and outside signal connectivity become exponentially harder to keep safe. So the only viable pathway is to not scale further, rather than to “helplessly” take up all the hardware that currently gets produced.
Further, AI Impacts found no historical analogues for a hardware overhang. And there are plenty of common-sense reasons why the argument’s premises are unsound.
The hardware overhang claim lacks grounding, but that hasn’t prevented alignment researchers from repeating it in a way that ends up weakening coordination efforts to restrict AI corporations.
Responsible scaling policies have ‘safety-washing’ written all over them.
Consider the original formulation by Anthropic: “Our RSP focuses on catastrophic risks – those where an AI model directly causes large scale devastation.”
In other words: our company can keep scaling as long as our staff/trustees do not deem the risk of a new AI model directly causing a catastrophe to be sufficiently high.
Is that responsible?
It assumes that further scaling can be risk-managed. It assumes that risk management protocols alone are enough.
Then, the company invents a new wonky risk management framework, ignoring established and more comprehensive practices.
Paul argues that this could be the basis for effective regulation. But Anthropic et al. lobbying national governments to enforce the use of that wonky risk management framework makes things worse.
It distracts from policy efforts to prevent the increasing harms. It creates a perception of safety (instead of actually ensuring safety).
Ideal for AI corporations to keep scaling while circumventing accountability.
RSPs support regulatory capture. I want us to be clear about what we are dealing with.