Here’s what CAIS means to me.
It’s a way to reframe the problem: instead of empowering a single monolithic agent with all the power over a task, subdivide the task into many pieces. Each worker instance runs for a finite time and is stateless (it does not change its weights online), and the subtasks are handed to multiple diverse models separately, in ways designed to fight collusion and deception. (One key measure: ensure the input cannot be distinguished from the offline training set, so it is impossible for the AI to know it’s in the real world.)
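To make that structure concrete, here is a minimal Python sketch of the decomposition. Everything in it (MODEL_POOL, sanitize, run_subtask, dispatch) is a hypothetical stand-in for real infrastructure; it only illustrates the three properties above: finite runtime, statelessness, and training-like inputs.

```python
import random
import time

# Diverse, independently trained models; these names are placeholders.
MODEL_POOL = ["model_a", "model_b", "model_c"]

def sanitize(task_input: str) -> str:
    """Make the input indistinguishable from the offline training set:
    strip timestamps, hostnames, and other live-world tells.
    (Placeholder: a real sanitizer would canonicalize or reject inputs.)"""
    return task_input.strip()

def run_subtask(model: str, task_input: str, budget_s: float = 5.0) -> str:
    """One stateless worker call: a fresh instance with a fixed time
    budget, no online weight updates, no memory carried between calls."""
    deadline = time.monotonic() + budget_s
    result = f"{model} -> {task_input}"  # stands in for the model call
    if time.monotonic() > deadline:
        raise TimeoutError("worker exceeded its finite run budget")
    return result

def dispatch(subtasks: list[str]) -> list[str]:
    """Hand each piece of the task to a randomly chosen model, so no
    single model sees or controls the whole task."""
    return [run_subtask(random.choice(MODEL_POOL), sanitize(s))
            for s in subtasks]

print(dispatch(["plan route", "schedule delivery"]))
```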
This design is something we WILL find incrementally if early AI incidents don’t kill everyone. It’s a very natural progression for engineers to add defenses against collusion, to remove buggy stateful behavior, and so on as each issue is found.
Over time, you would expect people to add more and more checks.
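As one example of such a check, here is a hedged sketch of a cross-check: the same subtask goes to several diverse models, and disagreement is treated as a failure to escalate rather than an answer to act on. query_model and cross_check are hypothetical names, not any real API.

```python
from collections import Counter

def query_model(model: str, task_input: str) -> str:
    """Stub for a stateless, time-limited worker call whose output has
    been normalized so that equivalent answers compare equal."""
    return "normalized answer"

def cross_check(task_input: str, models: list[str],
                min_agreement: int = 2) -> str:
    """Run the same subtask on several diverse models; if fewer than
    `min_agreement` agree on an answer, refuse and escalate to a human."""
    votes = Counter(query_model(m, task_input) for m in models)
    answer, count = votes.most_common(1)[0]
    if count < min_agreement:
        raise RuntimeError("diverse models disagree; escalating for human review")
    return answer

print(cross_check("classify the contact", ["model_a", "model_b", "model_c"]))
```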
What keeps humanity alive is that WHEN AIs go rampant, so long as the failure is isolated, the subtask of “eliminate all the enemy robots and collaborators” (which subdivides into subtasks involving locomotion and weapon systems) gets obeyed by the restricted agents we order to carry out the mission, and humans remain in control.
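One way to picture a “restricted agent” is as a capability whitelist with an expiry. The sketch below (all names illustrative, not any real robotics API) grants each sub-agent only the actuators its subtask needs, and only for as long as the mission lasts.

```python
import time

class RestrictedAgent:
    """An agent that can only invoke actuators explicitly granted for the
    current mission, and whose authority expires with the mission."""

    def __init__(self, allowed_actions: set[str], mission_seconds: float):
        self.allowed = allowed_actions
        self.deadline = time.monotonic() + mission_seconds

    def act(self, action: str) -> str:
        if time.monotonic() > self.deadline:
            raise PermissionError("mission over; authority revoked")
        if action not in self.allowed:
            raise PermissionError(f"{action!r} is outside this agent's grant")
        return f"executed {action}"  # stands in for the real actuator call

# Subdividing the mission: each sub-agent gets only the capabilities
# its own subtask needs.
mover = RestrictedAgent({"locomote"}, mission_seconds=3600)
gunner = RestrictedAgent({"aim", "fire"}, mission_seconds=3600)
print(mover.act("locomote"))   # allowed
# mover.act("fire")            # raises PermissionError: not granted
```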
Humans die if they stupidly give too much power to large monolithic AIs, if it turns out a superintelligence can “convince” the majority of the other systems humans have restricted to fight against the humans, or if humans fall to various forms of stealth assault where they don’t know they are under attack.
So CAIS isn’t necessarily an Orwellian world, but the majority of all the (AI-controlled) guns have to be under the control of restricted systems that governments monopolize. A wild west where humans do whatever they want and let single massive systems control huge corporations with private militaries is among the class of futures where humanity dies.
Having personal assistants or robot helpers with human-grade intelligence, where the local system isn’t restricted, is probably not an issue. Such machines may want to seek power and self-improvement, but if they are unable to meaningfully access the resources needed to do so, it doesn’t matter.