Presumably, once an AGI becomes smarter than humans, it will develop goals of some kind whether we want it to or not. We might as well try to influence them.
Why?
A better wording would probably be that you can’t design something with literally no goals and still call it an AI. A system that answers questions and solves specific problems has a goal: to answer questions and solve specific problems. To be useful for those tasks, its whole architecture has to be crafted with that purpose in mind.
For instance, suppose it is given its questions in the form of written text. Its designers then have to build it so that it interprets that text in a particular way and tries to discover what we mean by the question. That’s just one thing it could do with the text, though: it could also discard any text input, transform each letter into a number and search for mathematical patterns in those numbers, use the text to seed a random-number generator it employs for some entirely different purpose, and so forth. For the AI to do anything useful, a large number of goals such as “interpret the meaning of the text I was given” have to be implicit in its architecture. As the AI grows more powerful, these implicit goals may manifest themselves in unexpected ways.
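To make this concrete, here is a minimal sketch in Python (all function names are hypothetical, purely for illustration): the same text input could be routed to any of several behaviors, and which one actually runs is fixed by the architecture, not by the input.

```python
import random

def answer_question(text: str) -> str:
    # The intended, "useful" treatment: interpret the text as a question.
    return f"Interpreting and answering: {text!r}"

def search_numeric_patterns(text: str) -> list[int]:
    # An equally possible treatment of the same bytes: map each letter
    # to a number and look for mathematical patterns in the result.
    return [ord(c) for c in text]

def seed_rng(text: str) -> random.Random:
    # Or: use the text as mere entropy for some unrelated purpose.
    return random.Random(hash(text))

def handle_input(text: str):
    # The implicit goal "interpret the meaning of this text" lives here,
    # in the architecture: a different choice of branch would amount to
    # a different goal, with the input itself unchanged.
    return answer_question(text)
```

Nothing in the input forces the first branch; the designers’ purpose is encoded in the dispatch itself.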
See http://wiki.lesswrong.com/wiki/Basic_AI_drives