When you create a program, it is not enough to say what it should achieve. You must also specify how to achieve it.
You can’t just create a program by saying “maximize f(x)”, even if you give it a perfect definition of f(x). You must also provide a method, for example “try 1000 random values of x and remember the best result”, or “keep trying and remembering the best result until I press Enter”, or maybe something more complex, like “remember the 10 best results, and choose random values that are more often similar to those best known results”. You must provide some strategy.
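As a rough sketch of what such strategies look like in code (Python, with a made-up f(x) and arbitrary parameters, purely for illustration):

```python
import random

def f(x):
    # hypothetical objective; the strategy treats it as a black box
    return -(x - 3.7) ** 2

# Strategy 1: "try 1000 random values of x and remember the best result"
best_x, best_value = None, float("-inf")
for _ in range(1000):
    x = random.uniform(-100, 100)
    if f(x) > best_value:
        best_x, best_value = x, f(x)

# Strategy 2: "remember the 10 best results, and choose random values
# that are more often similar to those best known results"
elite = [random.uniform(-100, 100) for _ in range(10)]
for _ in range(1000):
    x = random.gauss(random.choice(elite), 1.0)  # sample near a known good point
    elite.append(x)
    elite.sort(key=f, reverse=True)
    elite = elite[:10]  # keep only the 10 best
best_x = elite[0]
```

Both loops pursue the same goal, but they are different strategies, and someone had to write each of them.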
Perhaps in some environments you don’t, because the strategy was already put there by the authors of the environment. But someone had to specify it. The strategy may remember some values and use them later, so in a sense it learns. But even the first version of this learning strategy was written by someone.
So what does it mean to have an “artificial agent that has a goal”? That is an incomplete description. The agent must also have a strategy; otherwise it won’t move.
Therefore, a more precise question would be: “what kinds of initial strategies lead (in favorable conditions) toward developing a general intelligence?” Then we should specify what counts as reasonably favorable conditions, and what is outright cheating. (An agent with the strategy “find the nearest data disk, erase your old program, and read the new program from this disk” could develop a general intelligence if it finds a disk containing a general-intelligence program, but I guess that counts as cheating. Although humans also learn from others, so where exactly is the line between “learning with help” and “just copying”?)