A general problem solver already has “goals”, since it is already a physical object with behavior in the world: there are things it does and things it does not do. So it is not clear that you can simply take a general problem solver and turn it into a program that “makes trains run on time”; the goal of making the trains run on time will come into conflict with the general problem solver’s own “goals” (its existing behavior), just as when we try to pursue some goal such as “save the world”, it comes into conflict with our preexisting goals such as getting food and so on.
Once general AI exists, there will be AIs for basically all economic activity everywhere. Unlike humans, who take a generation to train and are rather hard to reprogram (even using North Korean methods), with an AI, once you have the code you can just fork() off a new one, with the goals changed as desired (or not...).
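To make the fork() analogy concrete, here is a minimal, purely illustrative sketch: the Agent class, its weights, and its goal field are invented names, not any real system, but they show why copying an already-trained system with a swapped-in goal is cheap compared to raising and training a person.

```python
import copy

class Agent:
    """Hypothetical stand-in for a trained general AI: learned weights plus a goal spec."""
    def __init__(self, weights, goal):
        self.weights = weights   # the expensive part: years of training, or one download
        self.goal = goal         # the cheap part: a parameter you can overwrite

    def fork(self, new_goal=None):
        # Copying is near-free compared to training a human for a generation;
        # the goal can be swapped at fork time, or left exactly as it was.
        child = copy.deepcopy(self)
        if new_goal is not None:
            child.goal = new_goal
        return child

scheduler = Agent(weights="<trained model>", goal="make the trains run on time")
trader = scheduler.fork(new_goal="maximize quarterly returns")  # same capabilities, new goal
```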