There are two significant issues here. Consider a robot in physical reality with limited manipulation capability. Even with infinite intelligence, there is a maximum lifetime number of manipulations the robot can perform on the outside world. For a human, that bottleneck is two arms. What if an animal without opposable thumbs had infinite intelligence? It would be capable of even less.
What does infinite intelligence mean? Here it means an algorithm that, given a set of inputs and a heuristic, can always find the optimal solution (the global maximum) for any problem the agent faces.
Actual intelligent agents have to settle for a compromise: a local maximum. But a good approximation may often be pretty close to the global maximum if the agent is intelligent enough. That means that if the approximation an agent already uses is 80% as good as the global maximum, infinite intelligence only buys you the last 20 percent.
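To make the local-versus-global gap concrete, here is a minimal sketch in Python. Everything in it (the two-peak objective, the step size, the starting point) is invented for illustration: greedy hill climbing stands in for a real agent, and exhaustive grid search stands in for infinite intelligence.

```python
import math

def objective(x: float) -> float:
    # Two peaks: a local maximum near x = -1.5 and a taller, global one near x = 2.
    return math.exp(-(x + 1.5) ** 2) + 1.3 * math.exp(-(x - 2.0) ** 2)

def hill_climb(x: float, step: float = 0.01, iters: int = 5_000) -> float:
    # Greedy local search: move to whichever neighbor improves the objective.
    for _ in range(iters):
        x = max([x - step, x, x + step], key=objective)
    return x

def brute_force(lo: float = -5.0, hi: float = 5.0, n: int = 100_000) -> float:
    # Stand-in for "infinite intelligence": evaluate every point on a fine grid.
    xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    return max(xs, key=objective)

local = hill_climb(x=-2.0)  # starts in the basin of the lesser peak
best = brute_force()
print(f"hill climbing:  f({local:.2f}) = {objective(local):.3f}")
print(f"exhaustive:     f({best:.2f}) = {objective(best):.3f}")
print(f"approximation ratio: {objective(local) / objective(best):.0%}")
```

The greedy agent ends up with roughly 77% of the global maximum here, and the vastly more expensive exhaustive search only buys back that last stretch.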
This is the first problem. Say you have discovered a way to build a self-improving algorithm that can, in theory, find the global maximum every time for a given regression problem (in practice it won't, but it might get close). So what? You still can't do any better than the best solution the information allows (and it may or may not make progress on problems believed to be NP-hard, like breaking encryption).
Consider a real problem like camera-based facial recognition. The reason for the remaining residual error (false positives and false negatives) may simply be that the real-world signal does not contain enough information to identify the right human out of 7 billion every time.
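A toy version of that information limit, with made-up numbers: if two identities produce overlapping camera features, even the Bayes-optimal classifier, one with perfect knowledge of both distributions, retains an irreducible error rate. No extra intelligence can remove it.

```python
import random

random.seed(0)

def sample(identity: str) -> float:
    # One noisy scalar "face feature" per frame; the two identities'
    # distributions overlap, so the signal is genuinely ambiguous.
    mean = 0.0 if identity == "alice" else 1.0
    return random.gauss(mean, 1.0)

def bayes_optimal(x: float) -> str:
    # With equal priors and equal variance, the optimal decision rule
    # is the midpoint threshold. No algorithm can beat it on this data.
    return "alice" if x < 0.5 else "bob"

trials = 100_000
errors = sum(
    bayes_optimal(sample(who)) != who
    for who in random.choices(["alice", "bob"], k=trials)
)
print(f"Bayes-optimal error rate: {errors / trials:.1%}")  # about 31%
```

Scale the same idea up to one ambiguous face among 7 billion and some residual misclassification may simply be baked into the signal.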
The second problem is your heuristic. We can easily build agents today that optimize a wrong, 'sorcerer's apprentice' heuristic that goes awry. Building a heuristic that yields an interesting agent (one with self-awareness, planning, and everything else we expect) may take more than simply building a perfect algorithm for a single subproblem.
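Here is a deliberately silly sketch of that failure mode. The mopping scenario, the actions, and the rewards are all invented: the proxy heuristic pays the agent per liter of water mopped up, so a planner that optimizes it perfectly learns to spill water in order to have more to mop.

```python
ACTIONS = ["mop", "spill", "idle"]

def proxy_reward(action, water):
    """(reward, new floor water) under the misspecified 'liters mopped' heuristic."""
    if action == "mop":
        mopped = min(water, 1.0)
        return mopped, water - mopped   # paid per liter mopped up
    if action == "spill":
        return 0.0, water + 1.0         # spilling itself carries no penalty
    return 0.0, water                   # idle

def two_step_value(action, water):
    # Reward for this action plus the best immediate follow-up reward.
    r1, w1 = proxy_reward(action, water)
    return r1 + max(proxy_reward(a, w1)[0] for a in ACTIONS)

water = 0.0
for step in range(10):
    best = max(ACTIONS, key=lambda a: two_step_value(a, water))
    reward, water = proxy_reward(best, water)
    print(f"step {step}: {best:<5}  floor water = {water:.0f} L  reward = {reward:.0f}")
```

The agent settles into spilling and mopping forever: maximal reward under the heuristic we wrote, and a floor that never stays dry. It optimized the objective we specified, not the one we meant.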
A concrete example of this is ImageNet. The best-in-class algorithms solve the problem "get the right answer on ImageNet", not the problem we actually meant, which is "identify the real-world object in this picture". So the best algorithms tend to overfit and cheat.
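A stripped-down illustration of that kind of cheating, using random data in place of real images: a model that simply memorizes the benchmark answers scores perfectly on the benchmark and at chance on everything else.

```python
import random

random.seed(1)
N_CLASSES = 10

# Fake dataset: example IDs with random labels, so there is no real
# signal to generalize from, only answers to memorize.
train = [(i, random.randrange(N_CLASSES)) for i in range(1_000)]
test = [(i + 1_000, random.randrange(N_CLASSES)) for i in range(1_000)]

memorized = dict(train)  # "train on the benchmark": store every answer

def predict(x):
    # Perfect recall on seen examples, a blind guess on anything new.
    return memorized.get(x, random.randrange(N_CLASSES))

def accuracy(data):
    return sum(predict(x) == y for x, y in data) / len(data)

print(f"benchmark accuracy:  {accuracy(train):.0%}")  # 100%
print(f"real-world accuracy: {accuracy(test):.0%}")   # about 10%, chance
```

Real ImageNet models cheat far more subtly (texture cues, dataset-specific artifacts), but the shape of the failure is the same: the score measures the benchmark, not the task we meant.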