Yes, you can avoid AGI misalignment if you choose to not employ AGI. What do you do about all the other people who will deploy AGI as soon as it is possible?
This is, arguably, AGI. The reason it's AGI is that you can solve most real-world problems by licensing a collection of common subcomponents (I'd predict some of it will be open source, but the need for data and cloud compute resources to build and maintain a component means nothing can be truly free), where the only thing you need to do is define your problem.
In this specific toy example, the only thing written by human devs for their paperclip factory might be a small number of JSON files that reference the paperclip specs, define the system topology, and reference another system that actually handles marketing/selling paperclips and sets the quotas.
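To make that concrete, such a config might look something like this (the field names and components are entirely made up, just to illustrate the shape of it: a spec reference, a topology of licensed narrow components, and a pointer to the external sales system that sets quotas):

```json
{
  "product_spec": "specs/paperclip_v2.json",
  "topology": {
    "forming":    { "component": "wire-forming-controller", "instances": 4 },
    "inspection": { "component": "vision-qa",               "instances": 2 },
    "packing":    { "component": "bin-packing-planner",     "instances": 1 }
  },
  "quota_source": "https://sales.example.com/quotas",
  "constraints": {
    "max_daily_output": 500000,
    "max_power_kw": 250
  }
}
```

The point being that the humans only describe the "what"; all of the "how" lives inside the licensed narrow components.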
Obviously the meta-task of authoring these files could then be done by another AI-based system that learns from examples of humans authoring them.
So the system as a whole has the capabilities of AGI even though it's made of a large number of narrow AIs. This is arguably how our own brains work at the subsystem level, so there's prior art.
I can also see how you might get sloppy and stop adhering to the 'sparseness' criterion: build your paperclip factory out of subsystems that are smarter than they need to be, increasing their flexibility in novel situations at the cost of potentially unbounded behavior.
This is definitively not AGI.
If it lacks the cognitive ability to consider things that humans can consider, then it's not AGI.