I want to use Steve Omohundro’s paper “The Basic AI Drives” in a slightly unusual way. The paper lists a number of behaviors that should be exhibited by a sufficiently sophisticated AI: it will try to model its own operation, clarify its goals, protect them from modification, protect itself from destruction, acquire resources and use them efficiently… The twist I propose is that Omohundro’s list of drives should be used as a design specification. If your goal is AGI, then you want a cognitive architecture that will exhibit these emergent behaviors.
IMHO, that doesn’t help too much. We mostly know what we want—what we don’t know is how to get there.
Incidentally, what most people don’t want is an AI with nothing but a bunch of universal instrumental values.