I think AI safety isn’t as much a matter of government policy as you seem to think. Currently, sure: frontier models are so expensive to train that only the big labs can do it, and models have limited agentic capabilities, even at the frontier.
But we are rushing towards a point where the science of intelligence and learning is far better understood, and open-source models are rapidly getting more powerful and cheaper.
The trend suggests that within a few years, any individual could create a dangerously powerful AI on a personal computer.
Any law which fails to protect society if even a single individual chooses to violate it once… is not a very protective law. Historical evidence suggests that people do occasionally break laws, especially when there’s a lot of money and power on offer in exchange for the risk.
What happens at that point depends a lot on the details of the lawbreaker’s creation. With what probability will it end up agentic, coherent, conscious, capable of self-improvement, capable of escape and self-replication, driven by Omohundro goals (survival-focused, resource- and power-hungry), and so on?
The probability of the sorts of qualities that would make such an AI agent dangerous seems unlikely to me to be zero.
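To put rough numbers on that intuition, here is a minimal sketch; the per-attempt probability and the attempt counts are made-up values for illustration, not estimates from any source. Even if each illicit training run has only a tiny chance of producing an agent with these qualities, the chance that at least one attempt succeeds grows quickly with the number of independent attempts:

```python
# Illustrative sketch: cumulative risk from many independent attempts.
# The per-attempt probability (1e-4) and attempt counts are hypothetical.

def p_at_least_one(p_per_attempt: float, n_attempts: int) -> float:
    """Probability that at least one of n independent attempts succeeds."""
    return 1.0 - (1.0 - p_per_attempt) ** n_attempts

for n in (100, 1_000, 10_000, 100_000):
    print(f"{n:>7,} attempts -> {p_at_least_one(1e-4, n):.1%} chance of at least one")
```

With these made-up numbers, 10,000 attempts already yield roughly a 63% chance of at least one success, which is why a law that merely deters most would-be violators offers limited protection.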
Then we must ask questions about the efficacy of governments in detecting and stopping such AI agents before they become catastrophically powerful.
What happens at that point depends a lot on the details of the lawbreaker’s creation. [ . . . ] The probability of the sorts of qualities that would make such an AI agent dangerous seems unlikely to me to be zero.
Have you read “The Sun is big, but superintelligences will not spare Earth a little sunlight”?