I think AI safety isn’t as much a matter of government policy as you seem to think. Currently, sure: frontier models are so expensive to train that only the big labs can do it, and models have limited agentic capabilities, even at the frontier.
But we are rushing toward a point where the science of intelligence and learning is much better understood. Open-source models are rapidly getting more powerful and cheaper.
The trend suggests that within a few years, any individual could create a dangerously powerful AI using a personal computer.
Any law which fails to protect society if even a single individual chooses to violate it once is not a very protective law. Historical evidence suggests that people do occasionally break laws, especially when there is a lot of money and power on offer in exchange for the risk.
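To put rough numbers on that intuition: if many individuals each independently have even a tiny chance of defecting, the chance that at least one does climbs toward certainty. Here is a minimal sketch of that arithmetic; the values of p and n are purely illustrative assumptions, not estimates.

```python
# Probability that at least one of n independent actors defects,
# given each defects independently with probability p:
#   P(at least one) = 1 - (1 - p)^n
# The p and n values below are illustrative assumptions only.

def p_at_least_one(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

if __name__ == "__main__":
    for p, n in [(1e-6, 1_000_000), (1e-6, 10_000_000)]:
        print(f"p={p:g}, n={n:,}: P(at least one) = {p_at_least_one(p, n):.4f}")
```

Even at a one-in-a-million chance per person, a million capable people make a single defection more likely than not.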
What happens at that point depends a lot on the details of the lawbreaker’s creation. With what probability will it end up agentic, coherent, conscious, capable of self-improvement, capable of escape and self-replication, driven by Omohundro goals (survival-focused, resource- and power-hungry), and so on?
For the sorts of qualities which would make such an AI agent dangerous, that probability seems unlikely to me to be zero.
We must then ask how effective governments would be at detecting and stopping such AI agents before they become catastrophically powerful.
> What happens at that point depends a lot on the details of the lawbreaker’s creation. [...] For the sorts of qualities which would make such an AI agent dangerous, that probability seems unlikely to me to be zero.

Have you read The Sun is big, but superintelligences will not spare Earth a little sunlight?
Is your question directed at me, or at the person I was replying to? I agree with the point “The Sun is big, but...” makes. Here’s a link to a recent summary of my view on a plausible plan for how the world can survive AI; please feel free to share your thoughts on it: https://www.lesswrong.com/posts/NRZfxAJztvx2ES5LG/a-path-to-human-autonomy