Humans are “designed” to act intelligently in the physical world here on Earth; we have complex adaptations for this environment. I don’t think we are capable of acting effectively in “strange” environments, e.g. we are bad at predicting quantum mechanical systems, programming computers, etc.
But we can recursively self-optimize ourselves for understanding mechanical systems or programming computers. Not infinitely, of course, but with different hardware it seems extremely plausible to smash through whatever ceiling a human might have, with the brute force of many calculated iterations of whatever humans are using.
And this is before the computer uses its knowledge to reoptimize its optimization process.
I understand the concept of recursive self-optimization and I don’t consider it to be very implausible.
Yet I am very sceptical: is there any evidence that algorithm-space has enough structure to permit the kind of effective search such an optimization would require?
I’m also not convinced that the human mind is a good counterexample; e.g. I do not know how much I could improve on the source code of a simulation of my brain once the simulation itself runs effectively.
I count “algorithm-space is really really really big” as at least some form of evidence. ;)
Mind you, by “is there any evidence?” you really mean “does the evidence lead to a high assigned probability?” That being the case, “No Free Lunch” must also be considered. Even so, NFL in this case mostly suggests that a general intelligence algorithm will be systematically bad at being generally stupid.
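If I’m remembering the Wolpert-Macready formulation correctly, the formal NFL statement is that for any two search algorithms $a_1$ and $a_2$,
\[
\sum_{f} P(d^y_m \mid f, m, a_1) \;=\; \sum_{f} P(d^y_m \mid f, m, a_2),
\]
where the sum runs over all objective functions $f$ and $d^y_m$ is the sequence of $m$ cost values observed so far. Averaged over every possible problem, no algorithm beats any other; an algorithm only wins by exploiting structure in the problems it actually faces, which is the sense in which a general intelligence must be “systematically bad at being generally stupid”.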
Considerations that lead me to believe that a general intelligence algorithm is likely include the observation that we can already see progressively more general problem-solving processes in evidence just by looking at mammals. I also take more evidence from humanity than you do. Not because I think humans are good at general intelligence. We suck at it; it’s something that has been tacked on to our brains relatively recently and it is far less efficient than our more specific problem-solving facilities. But the point is that we can do general intelligence of a form eventually if we dedicate ourselves to the problem.
I don’t think we are capable of acting effectively in “strange” environments, e.g. we are bad at predicting quantum mechanical systems, programming computers, etc.
You’re putting ‘effectively’ here in place of ‘intelligently’ in the original assertion.
I understand “capable of behaving intelligently” to mean “capable of achieving complex goals in complex environments”, do you disagree?
I don’t disagree. Are you saying that humans aren’t capable of achieving complex goals in the domains of quantum mechanics or computer programming?
This is of course a matter of degree, but basically yes!
Can you give any idea what these complex goals would look like? Or conversely, describe some complex goals humans can achieve which are fundamentally beyond an entity that has abstract reasoning capabilities similar to humans’ but lacks some of humans’ native capabilities for dealing more efficiently with certain types of problems?
The obvious examples are problems where a slow reaction time will lead to failure, but these don’t seem to tell that much about the general complexity handling abilities of the agents.
I’ll try to give examples:
For computer programming: Given a simulation of a human brain, improve it so that the simulated human is significantly more intelligent.
For quantum mechanics: Design a high-temperature superconductor from scratch.
Are humans better than brute-force at a multi-dimensional version of chess where we can’t use our visual cortex?
We have a way to use brute force to achieve general optimisation goals? That seems like a good start to me!
Not a good start if we are facing exponential search spaces! If brute force worked, I imagine the AI problem would already be solved?
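(For concreteness on “exponential”: Shannon’s classic estimate for chess assumes about $10^3$ continuations per pair of moves over a typical game of roughly 40 moves, giving a game tree on the order of
\[
(10^{3})^{40} = 10^{120}
\]
nodes, so “given sufficient resources” is carrying a great deal of weight.)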
Not particularly. :)
But it would constitute an in-principle method of bootstrapping a more impressive kind of general intelligence. I actually didn’t expect you would concede the ability to brute force ‘general optimisation’; the ability to notice the brute-forced solution is more than half the problem. From there it is just a matter of time until we discover an algorithm that can do the search efficiently.
Not necessarily. Biases could easily have made humans worse than brute-force.
Please give evidence that “a more impressive kind of general intelligence” actually exists!
Nod. I noticed your other comment after I wrote the grandparent. I replied there and I do actually consider your question there interesting, even though my conclusions are far different to yours.
Note that I’ve tried to briefly answer what I consider a much stronger variation of your fundamental question. I think that the question you have actually asked is relatively trivial compared to what you could have asked so I would be doing you and the topic a disservice by just responding to the question itself. Some notes for reference:
Demands of the general form “Where is the evidence for?” are somewhat of a hangover from traditional rational ‘debate’ mindsets where the game is one of social advocacy of a position. Finding evidence for something is easy but isn’t the sort of habit I like to encourage in myself. Advocacy is bad for thinking (but good for creating least-bad justice systems given human limitations).
“More impressive than humans” is a ridiculously low bar. It would be absolutely dumbfoundingly surprising if humans just happened to be the best ‘general intelligence’ we could arrive at in the local area. We haven’t had a chance to even reach a local optimum of optimising DNA- and protein-based mammalian general intelligences. Selection pressures are only superficially in favour of creating general intelligence, and apart from that, the flourishing of human civilisation and intellectual enquiry happened basically when we reached the minimum level to support it. Civilisation didn’t wait until our brains reached the best level DNA could support before it kicked in.
A more interesting question is whether it is possible to create a general intelligence algorithm that can in principle handle most any problem, given unlimited resources and time to do so. This is as opposed to progressively more complex problems requiring algorithms of progressively more complexity even to solve in principle.
Being able to ‘brute force’ a solution to any problem is actually a significant step towards being generally intelligent. Even being able to construct ways to brute force stuff and tell whether the brute force solution is in fact a solution is possibly a more difficult thing to find in algorithm space than optimisations thereof.
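A minimal sketch of the generate-and-test shape of that last point, with `candidates` and `is_solution` as hypothetical stand-ins; the observation above is precisely that supplying those two arguments, not writing the loop, is the hard part:

```python
# Brute-force generate-and-test. All the difficulty hides in the two
# arguments: an enumeration of a candidate space that actually contains
# a solution, and a checker that recognises one when it appears.
def brute_force_search(candidates, is_solution):
    for candidate in candidates:      # possibly astronomically many
        if is_solution(candidate):
            return candidate
    return None                       # candidate space exhausted
```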
Finding evidence for something is easy but isn’t the sort of habit I like to encourage in myself.
My intention was merely to point out where I don’t follow your argument, but your criticism in my formulation is valid.
“More impressive than humans” is a ridiculously low bar.
I agree, we can probably build far better problem-solvers for many problems (including problems of great practical importance).
algorithm that can in principle handle most any problem, given unlimited resources
My concern is more about what we can do with limited resources; this is why I’m not impressed with the brute-force solution.
Even being able to construct ways to brute force stuff and tell whether the brute force solution is in fact a solution is possibly a more difficult thing to find in algorithm space than optimisations thereof.
This is true, I was mostly thinking about a pure search problem where evaluating the solution is simple. (The example was chess, where brute-forcing leads to perfect play given sufficient resources.)
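A minimal sketch of what brute-forcing a game like chess means here, assuming hypothetical helpers `moves(state)`, `apply_move(state, m)` and `score(state)` that encode the rules; the recursion plays perfectly by construction, but it only terminates if every line of play is finite, which is exactly the question that comes up next:

```python
# Exhaustive (negamax-style) search. Returns the value of `state` for
# the player to move: +1 forced win, 0 draw, -1 forced loss.
def value(state):
    legal = moves(state)
    if not legal:                    # terminal: for chess, -1 if the
        return score(state)          # mover is mated, 0 if stalemate
    # Each child's value is from the opponent's perspective, so negate.
    return max(-value(apply_move(state, m)) for m in legal)
```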
The example was chess, where brute-forcing leads to perfect play given sufficient resources
It just occurred to me to wonder whether this resource requirement is even finite. Is there a turn limit on the game? I suppose even “X turns without a piece being taken” would be sufficient, depending on how idiotic the ‘brute force’ is. Is such a rule in place?
Yes, the fifty-move rule. Though technically it only allows you to claim a draw, it doesn’t force it.
OK, thanks. In that case brute force doesn’t actually produce perfect play in chess and doesn’t return if it tries.
(Incidentally, this observation strengthens SimonF’s position.)
But the number of possible board positions is finite, and there is a rule that forces a draw if the same position comes up three times. (Here)
This claims that generalized chess is EXPTIME-complete, which is in agreement with the above.
That rule will do it (given that the draw is forced).
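Continuing the earlier sketch under that assumption: since chess has finitely many positions, scoring any position repeated along the current line as a draw bounds every line’s length and makes the brute-force recursion terminate. (Treating the first repetition as an immediate draw is a simplification of the real threefold rule, used here only to make the termination point.)

```python
# As `value` above, but any position repeated on the current line is
# scored as a draw, so the recursion always bottoms out. Assumes
# `state` is hashable.
def value_terminating(state, seen=frozenset()):
    if state in seen:                # repetition: score as a draw
        return 0
    legal = moves(state)
    if not legal:
        return score(state)
    return max(-value_terminating(apply_move(state, m), seen | {state})
               for m in legal)
```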
(Pardon the below tangent...)
I’m somewhat curious as to whether perfect play leads to a draw or a win (probably for white, although if it turned out black should win, that’d be an awesome finding!). I know tic-tac-toe and checkers are both a draw and I’m guessing chess will be a stalemate too, but I don’t know whether we’ll ever be able to prove that one way or the other.
Discussion of chess AI a few weeks ago also got me thinking: the current trend is for the best AIs to beat the best human grandmasters even with progressively greater disadvantages, even up to “two moves and a pawn” or some such thing. My prediction:
As chess-playing humans and AIs develop, the AIs will be able to beat the humans with greater probability and with progressively more significant handicaps. But given sufficient time this difference would peak and then actually decrease. Not because of anything to do with humans ‘catching up’; rather, because if perfect play at a given handicap results in a stalemate or loss, then even an exponentially increasing difference in ability will not be sufficient to prevent the weaker player from becoming better at forcing the expected ‘perfect’ result.
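Finding evidence for something is easy but isn’t the sort of habit I like to encourage in myself.
Do you apply this to yourself?
Yes!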