I think any program designed to maximise some quantity within a simulated situation has the potential to solve some problems. It is interesting that, when the quantity you choose to maximise is the entropy of the situation, some of the problems it solves turn out to be useful ones; but I don’t think that is as significant, for understanding the nature of and reason for intelligence in a universe with our particular set of physical laws, as some are claiming.
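To be concrete about what I mean by “a program designed to maximise some quantity within a simulated situation”, here is a minimal sketch. The toy physics, the actions and the spread measure are all invented for illustration; the real model uses its own entropy measure, not this one.

```python
def simulate(state, action):
    """Apply an action to a toy 'situation' (disk positions on a line).
    The physics here is invented purely for illustration."""
    i, dx = action
    new_state = list(state)
    new_state[i] += dx
    return tuple(new_state)


def quantity(state):
    """The quantity being maximised -- here just how spread out the disks
    are, a crude stand-in for whatever entropy measure the real model uses."""
    return max(state) - min(state)


def greedy_maximise(state, steps=50):
    """Repeatedly take whichever action most increases the chosen quantity.
    Any loop of this shape will 'solve' exactly those problems that happen
    to coincide with maximising that quantity."""
    for _ in range(steps):
        actions = [(i, dx) for i in range(len(state)) for dx in (-1, +1)]
        best = max(actions, key=lambda a: quantity(simulate(state, a)))
        if quantity(simulate(state, best)) <= quantity(state):
            break  # no action improves things: a local maximum has been reached
        state = simulate(state, best)
    return state


print(greedy_maximise((0, 1, 2)))  # the disks drift apart, maximising the spread
```

The point of the sketch is only that the loop itself is generic: swap in a different quantity and the same machinery “solves” a different set of problems.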
Take, for example, Wissner-Gross’ explanation of “tool use” in his video.
Set a simulation going. See where the disks end up, under the rules you set for the simulation. THEN label the disks as things (a hand, a tool and a piece of food) that provide a plausible explanation for why an intelligent creature would want the disks to finish up in that particular end configuration.
If a creature were actually doing it, the intelligence would lie at least as much in selecting in advance which quantity to maximise in order to achieve the desired result as in carrying out such an algorithm (and, in any case, there’s no evidence that this is actually how we implement the algorithm in our heads).
There’s also the matter that the universe isn’t particularly efficient at maximising entropy. The statistical properties underlying thermodynamics produce a ratchet effect: entropy tends to increase rather than decrease, which will eventually lead to the universe ending up at maximum entropy. But that’s rather different from localised seeking behaviour intended to find a maximum-entropy situation in order to solve a problem.
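To spell out that contrast, here is a toy sketch (particles in two halves of a box, with a crude evenness count standing in for entropy; none of it is the actual model). The first function drifts towards high-entropy configurations only because there are vastly more of them; the second actively searches one out.

```python
import random


def entropy_like(halves):
    """Crude stand-in for 'entropy': how evenly the particles are split
    between the two halves of a box (largest at a 50/50 split)."""
    left = sum(halves)
    return min(left, len(halves) - left)


def ratchet(halves, steps=10_000):
    """Roughly what the universe does: undirected random motion, no goal.
    Entropy tends upward only because high-entropy configurations vastly
    outnumber low-entropy ones."""
    halves = list(halves)
    for _ in range(steps):
        i = random.randrange(len(halves))
        halves[i] = random.randint(0, 1)  # a particle wanders to a random half
    return halves


def seeker(halves):
    """The proposed behaviour: deliberately pick whichever single move most
    increases the chosen quantity -- localised seeking, not statistical drift."""
    halves = list(halves)
    while True:
        flips = [halves[:i] + [1 - halves[i]] + halves[i + 1:]
                 for i in range(len(halves))]
        best = max(flips, key=entropy_like)
        if entropy_like(best) <= entropy_like(halves):
            return halves  # already at a maximum
        halves = best


start = [0] * 20                      # all particles in one half: low entropy
print(entropy_like(ratchet(start)))   # high on average, by sheer weight of numbers
print(entropy_like(seeker(start)))    # exactly maximal, because it was sought out
```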