Evolution apparently has an associated optimisation target. See my:
http://originoflife.net/direction/
http://originoflife.net/gods_utility_function/
Others have written on this as well, e.g. Robert Wright, Richard Dawkins, and John Stewart.
Evolution is rather short-sighted: it only has the lookahead capabilities that organisms have (though these appear to be improving over time). So whether the target can be described as a “goal” is debatable.
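(A minimal sketch of the distinction, under assumed toy conditions: a hypothetical one-dimensional fitness landscape, a greedy “keep the fittest immediate offspring” rule standing in for evolution, and a planner that scores each move by what it leads to over a fixed horizon. None of this is from the linked essays; it only illustrates how an optimiser with a well-defined target but no lookahead can behave quite differently from a goal-directed agent.)

    from functools import lru_cache

    # Assumed toy landscape: a small local peak near x=2 and a higher
    # global peak near x=10, separated by a flat, zero-fitness valley.
    def fitness(x: int) -> int:
        return max(0, 3 - abs(x - 2)) + max(0, 8 - 2 * abs(x - 10))

    def myopic_step(x: int) -> int:
        # Evolution-style rule: among small mutations, keep whichever
        # variant is fittest right now; no lookahead beyond the offspring.
        return max((x - 1, x, x + 1), key=fitness)

    @lru_cache(maxsize=None)
    def cumulative_value(x: int, depth: int) -> int:
        # Best total fitness collectable over `depth` further moves from x.
        if depth == 0:
            return fitness(x)
        return fitness(x) + max(cumulative_value(z, depth - 1)
                                for z in (x - 1, x, x + 1))

    def planner_step(x: int, horizon: int = 12) -> int:
        # Goal-directed rule: score each move by where it leads over the
        # whole horizon, not just by its immediate fitness.
        return max((x - 1, x, x + 1),
                   key=lambda y: cumulative_value(y, horizon))

    def run(step, x: int = 0, steps: int = 15) -> int:
        for _ in range(steps):
            x = step(x)
        return x

    print("myopic (evolution-style) rule ends at:", run(myopic_step))   # 2, the local peak
    print("lookahead planner ends at:            ", run(planner_step))  # 10, the global peak

The myopic rule climbs the nearest hill and stops; the planner crosses the valley because it evaluates moves against the target rather than the immediate payoff. That is roughly the sense in which “short-sighted” and “goal-directed” come apart here.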
However, we weren’t talking about evolution; we were talking about superintelligences. Those are likely to be highly goal-directed.
My point is that evolution IS a superintelligence and we should use it as a model for what other superintelligences might look like.
Reality doesn’t care how you abuse terminology. A GAI still isn’t going to act like evolution.
All the things you mentioned seemed pretty goal-directed to me. Evolution has only been relatively short on goals because it has been so primitive up until now. It is easy to see systematic ways in which agents we build will not be like evolution.
It is true that not all aspects of these things are goal-directed; some aspects of behaviour, for example, are meaningless and random.