@James: If we want a robot that can navigate mazes, we could put some known pathfinding/search algorithms into it.
Or we could put a neural network in it and run it through thousands of trials with slowly increasing levels of difficulty.
That evokes some loopy thinking. To wit:
It’s always seemed to me that AI programs, striving for intelligence, can have their intelligence measured by how easy it is to get them to do something. E.g., it’s easier to simply run that neural net through a bunch of trials than it is to painstakingly engineer an algorithm for a particular search problem.
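To make that contrast concrete, the “engineer an algorithm” route for the maze case is basically a textbook graph search. Here’s a minimal sketch, assuming the maze is encoded as a 0/1 grid; the solve_maze name and the toy maze are just made up for illustration, not anyone’s actual robot code:

```python
from collections import deque

def solve_maze(grid, start, goal):
    """Breadth-first search over a grid maze.

    grid: 2D list, 0 = open cell, 1 = wall.
    start, goal: (row, col) tuples.
    Returns a shortest path as a list of cells, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}  # cell -> predecessor in the search tree
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk predecessors back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and nxt not in came_from:
                came_from[nxt] = cell
                frontier.append(nxt)
    return None  # goal unreachable

# Toy maze: 0 = open, 1 = wall.
maze = [[0, 0, 1],
        [1, 0, 1],
        [1, 0, 0]]
print(solve_maze(maze, (0, 0), (2, 2)))
# -> [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)]
```

Every line of that is a decision somebody had to make up front; the neural-net route trades that design effort for compute and thousands of trials.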
So, does that mean the definition of “intelligence” is: “how easy it is for me to get the intelligent being to do my bidding, multiplied by the effect of its actions”?
Or is that a definition of “intelligence we want”, with “intelligence” itself defined as “the ability to create ‘intelligence we want’ and avoid ‘intelligence we don’t want’”?