Nod. I noticed your other comment after I wrote the grandparent. I replied there and I do actually consider your question there interesting, even though my conclusions are far different to yours.
Note that I’ve tried to briefly answer what I consider a much stronger variation of your fundamental question. I think that the question you have actually asked is relatively trivial compared to what you could have asked, so I would be doing you and the topic a disservice by just responding to the question itself. Some notes for reference:
Demands of the general form “Where is the evidence for?” are something of a hangover from traditional rational ‘debate’ mindsets, where the game is one of social advocacy of a position. Finding evidence for something is easy but isn’t the sort of habit I like to encourage in myself. Advocacy is bad for thinking (but good for creating least-bad justice systems given human limitations).
“More impressive than humans” is a ridiculously low bar. It would be absolutely dumbfoundingly surprising if humans just happened to be the best ‘general intelligence’ we could arrive at in the local area. We haven’t had a chance to even reach a local optimum of optimising DNA- and protein-based mammalian general intelligences. Selection pressures are only superficially in favour of creating general intelligence and, apart from that, the flourishing of human civilisation and intellectual enquiry happened basically when we reached the minimum level to support it. Civilisation didn’t wait until our brains reached the best level DNA could support before it kicked in.
A more interesting question is whether it is possible to create a general intelligence algorithm that can in principle handle most any problem, given unlimited resources and time to do so. This is as opposed to progressively more complex problems requiring algorithms of progressively greater complexity even to solve in principle.
Being able to ‘brute force’ a solution to any problem is actually a significant step towards being generally intelligent. Even being able to construct ways to brute force stuff and tell whether the brute force solution is in fact a solution is possibly a more difficult thing to find in algorithm space than optimisations thereof.
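The “construct a brute force and verify” idea can be sketched as a plain generate-and-test loop. This is a minimal illustration, not anything from the discussion itself; the toy problem and the verifier `is_solution` are my own:

```python
from itertools import count, product

def brute_force(is_solution, alphabet):
    """Enumerate every candidate over `alphabet`, shortest first,
    and return the first one the verifier accepts.
    Runs forever if no solution exists -- 'unlimited resources' assumed."""
    for length in count(1):
        for candidate in product(alphabet, repeat=length):
            if is_solution(candidate):
                return list(candidate)

# Toy problem: find a bit string whose digits sum to 3.
result = brute_force(lambda c: sum(c) == 3, alphabet=(0, 1))
print(result)  # [1, 1, 1] -- the first such string in enumeration order
```

The enumeration itself is trivial; as the comment notes, the hard part in algorithm space is constructing the verifier in the first place.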
Finding evidence for something is easy but isn’t the sort of habit I like to encourage in myself.
My intention was merely to point out where I don’t follow your argument, but your criticism of my formulation is valid.
“More impressive than humans” is a ridiculously low bar.
I agree, we can probably build far better problem-solvers for many problems (including problems of great practical importance).
algorithm that can in principle handle most any problem, given unlimited resources
My concern is more about what we can do with limited resources, which is why I’m not impressed by the brute-force solution.
Even being able to construct ways to brute force stuff and tell whether the brute force solution is in fact a solution is possibly a more difficult thing to find in algorithm space than optimisations thereof.
This is true, I was mostly thinking about a pure search problem where evaluating the solution is simple. (The example was chess, where brute-forcing leads to perfect play given sufficient resources.)
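As a toy stand-in for chess (which is far too large to enumerate), the same brute-force-to-perfect-play idea can be shown on subtraction Nim, where the position space is small enough to search exhaustively. The game choice and function name are mine, purely illustrative:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def mover_wins(stones):
    """Perfect play by brute force in take-1-2-or-3 Nim: the player to
    move wins iff some legal move leaves the opponent in a losing
    position. Taking the last stone wins, so mover_wins(0) is False."""
    return any(not mover_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

# Brute force recovers the known theory: the mover loses iff stones % 4 == 0.
losing = [n for n in range(1, 13) if not mover_wins(n)]
print(losing)  # [4, 8, 12]
```

Evaluating a candidate line here is trivial (did you take the last stone?), which is exactly the “pure search problem” being described.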
The example was chess, where brute-forcing leads to perfect play given sufficient resources
It just occurred to me to wonder whether this resource requirement is even finite. Is there a turn limit on the game? I suppose even “X turns without a piece being taken” would be sufficient, depending on how idiotic the ‘brute force’ is. Is such a rule in place?
Yes, the fifty-move rule. Though technically it only allows you to claim a draw, it doesn’t force it.
OK, thanks. In that case brute force doesn’t actually produce perfect play in chess, and doesn’t terminate if it tries.
(Incidentally, this observation strengthens SimonF’s position.)
But the number of possible board positions is finite, and there is a rule that forces a draw if the same position comes up three times. (Here)
This claims that generalized chess is EXPTIME-complete, which is in agreement with the above.
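A sketch of why a repetition rule makes the brute force terminate, using a simplified “any repeated position is an immediate draw” rule (real chess requires three repetitions) on a toy game graph. The graph and the names `value`, `moves`, and `terminal` are illustrative, not real chess:

```python
def value(pos, moves, terminal, seen=frozenset()):
    """Game value for the player to move: +1 win, 0 draw, -1 loss.
    A repeated position counts as a draw, so recursion depth is bounded
    by the (finite) number of positions and the search always halts."""
    if pos in seen:
        return 0  # repetition: draw by rule
    if terminal(pos) is not None:
        return terminal(pos)
    return max(-value(nxt, moves, terminal, seen | {pos})
               for nxt in moves(pos))

# Toy game with a cycle a <-> b; whoever must move at c has lost.
graph = {"a": ["b"], "b": ["a", "c"], "c": []}
tv = lambda p: -1 if p == "c" else None
print(value("a", graph.__getitem__, tv))  # -1: mover at a loses
print(value("b", graph.__getitem__, tv))  # +1: mover at b wins
```

Without the `seen` check, the a↔b cycle would recurse forever, which mirrors the point about chess without a forced-draw rule.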
That rule will do it (given the forced draw).
(Pardon the below tangent...)
I’m somewhat curious as to whether perfect play leads to a draw or a win (probably for white, although if it turned out black should win, that’d be an awesome finding!). I know tic-tac-toe and checkers are both a draw, and I’m guessing chess will be a draw too, but I don’t know for sure whether we’ll ever be able to prove that one way or the other.
Discussion of chess AI a few weeks ago also got me thinking: the current trend is for the best AIs to beat the best human grandmasters even with progressively greater disadvantages, even up to “two moves and a pawn” or some such thing. My prediction:
As chess-playing humans and AIs develop, the AIs will be able to beat the humans with greater probability and with progressively more significant handicaps. But given sufficient time this difference would peak and then actually decrease. Not because of anything to do with humans ‘catching up’. Rather, because if perfect play at a given handicap results in a draw or a loss, then even an exponentially increasing difference in ability will not be sufficient to prevent the weaker player from becoming better at forcing the expected ‘perfect’ result.