Your third point is valid, but your first is basically wrong; our environments occupy a small and extremely regular subset of the possibility space, so that success on a certain few tasks seems to correlate extremely well with predicted success across plausible future domains. Measuring success on these tasks is something AIs can easily do; EURISKO accomplished it in fits and starts. More generally, intelligence isn’t magical: if there’s any way we can tell whether a change in an AGI represents a bug or an improvement, then there’s an algorithm that an AI can run to do the same.
As for the second problem, one idea that may not have occurred to you is that an AI could write a future version of itself, bug-check and test out various subsystems and perhaps even the entire thing on a virtual machine first, and then shut itself down and start up the successor. If there’s a way for Lenat to see that EURISKO isn’t working properly and then fix it, then there’s a way for AI (version N) to see that AI (version N+1) isn’t working properly and fix it before making the change-over.
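A minimal sketch of that hand-over protocol, just to pin down the shape of the argument. Everything here (the AgentVersion class, the single "skill" number, the toy test suite) is a hypothetical placeholder, not a description of EURISKO or of any real self-improving system:

```python
# Toy sketch of the "write a successor, test it in a sandbox, then change over"
# loop described above. All names and numbers are illustrative stand-ins.

import copy
import random

class AgentVersion:
    def __init__(self, n, skill=0.5):
        self.n = n
        self.skill = skill                      # crude proxy for capability

    def write_successor(self):
        """Draft version N+1 as a modified copy of version N."""
        successor = copy.deepcopy(self)
        successor.n += 1
        successor.skill += random.uniform(-0.1, 0.2)   # edits may be bugs or improvements
        return successor

    def solve(self, difficulty):
        return self.skill >= difficulty

def sandbox_test(candidate, test_suite):
    """Run the candidate on problems whose answers version N can verify."""
    return all(candidate.solve(d) for d in test_suite)

def self_improvement_step(current, test_suite):
    candidate = current.write_successor()
    if sandbox_test(candidate, test_suite):     # only if N+1 checks out...
        return candidate                        # ...shut down N and start up the successor
    return current                              # otherwise keep version N and try again

agent = AgentVersion(n=0)
for _ in range(20):
    agent = self_improvement_step(agent, test_suite=[0.3, 0.4, 0.5])
print(agent.n, round(agent.skill, 2))
```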
In those posts you are arguing something different from what I was talking about. Sure, chimps will never make better technology than humans, but sometimes making more advanced, cleverer technology is not what you want to do, and it can be positively detrimental to your chances of shaping the world into a desirable state. The arms race for nuclear weapons, for example, or bio-weapons research.
If humans manage to invent a virus that wipes us out, would you still call that intelligent? If so, it is not that sort of intelligence we need to create… we need to create things that win in the end, not things that score short-term wins and then destroy themselves.
Super-plagues and other doomsday tools are possible with current technology. Effective countermeasures are not. Ergo, we need more intelligence, ASAP.
“More generally, intelligence isn’t magical: if there’s any way we can tell whether a change in an AGI represents a bug or an improvement, then there’s an algorithm that an AI can run to do the same.”
Except that we don’t—can’t—do it by pure armchair thought, which is what the recursive self-improvement proposal amounts to.
The approach of testing a new version in a sandbox had occurred to me, and I agree it is a very promising one for many things—but recursive self-improvement isn’t among them! Consider, what’s the primary capability for which version N+1 is being tested? Why, the ability to create version N+2… which involves testing N+2… which involves creating N+3… etc.
Again, there's enough correlation between ability to perform certain tasks that you don't need an infinite recursion. To test AIv(N+1)'s ability to program to exact specification, instead of having it program AIv(N+2), have it program some other things that AIvN finds difficult (but whose solutions are within AIvN's power to verify). That we will be applying AIv(N+1)'s precision programming to itself doesn't mean we can't test it on non-recursive data first.
ETA: Of course, since we want the end result to be a superintelligence, AIvN might also ask AIv(N+1) for verifiable insight into an array of puzzling questions, some of which AIvN can’t figure out but suspects are tractable with increased intelligence.
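To make that solve/verify asymmetry concrete, here is a toy sketch in the same spirit: integer factorisation stands in for "a task AIvN finds difficult but can verify cheaply". The function names are placeholders, and nobody is proposing that a real AIvN would literally be graded on factoring:

```python
# Toy illustration of "hard for version N to solve, cheap for version N to verify".
# Factoring a semiprime stands in for the difficult task; checking the answer is
# one multiplication.

def verify_factorisation(n, p, q):
    """The cheap direction: version N can grade an answer it couldn't find itself."""
    return p > 1 and q > 1 and p * q == n

def candidate_solve(n):
    """Stand-in for version N+1 producing a solution by whatever means it has."""
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            return p, n // p
    return None

challenge = 10007 * 10009          # built by the tester, so the right answer is known
p, q = candidate_solve(challenge)
assert verify_factorisation(challenge, p, q)
print(p, q)                        # 10007 10009
```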
If you observed something to work 15 times, how do you know that it'll work the 16th time? You obtain a model of increasing precision with each test, one that lets you predict what happens next, on a test you haven't performed yet. In the same way, you can try to predict what happens on the first try, before any observations have taken place.
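One standard toy version of that "model of increasing precision": put a uniform prior on the unknown success rate and update it after each trial. This is just the textbook rule-of-succession calculation, not anything specific to self-modifying AI:

```python
# Beta posterior over an unknown success rate, starting from a uniform prior.
# Each observed trial tightens the model; the same machinery also yields a
# (much vaguer) prediction before any trials at all.

def predict_next_success(successes, failures):
    """Posterior mean under a Beta(1, 1) prior (Laplace's rule of succession)."""
    return (successes + 1) / (successes + failures + 2)

def posterior_variance(successes, failures):
    a, b = successes + 1, failures + 1
    return a * b / ((a + b) ** 2 * (a + b + 1))

for n in (0, 5, 15, 100):
    print(n, round(predict_next_success(n, 0), 3), round(posterior_variance(n, 0), 6))
# After 15 straight successes the predicted chance of a 16th is 16/17 ≈ 0.94,
# and the posterior keeps narrowing with every further test.
```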
Another point is that testing can be part of the final product: instead of building a working gizmo, you build a generic self-testing adaptive gizmo that finds the right parameters itself, and that is pre-designed to do so optimally.
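A sketch of what "testing built into the product" could look like: rather than shipping fixed, pre-verified parameters, the gizmo ships with a calibration loop that searches for good parameters itself. The quadratic objective below is an arbitrary placeholder for whatever the gizmo really has to optimise:

```python
# A "self-testing adaptive gizmo": it carries its own test (the performance
# measure) and its own tuner (a simple random-step hill climb), so the right
# parameters are found in the field rather than fixed at design time.

import random

def performance(params):
    # Placeholder objective: best possible at params == (2.0, -1.0).
    x, y = params
    return -((x - 2.0) ** 2 + (y + 1.0) ** 2)

def self_calibrate(initial, trials=2000, step=0.1):
    best, best_score = initial, performance(initial)
    for _ in range(trials):
        candidate = tuple(p + random.uniform(-step, step) for p in best)
        score = performance(candidate)          # the built-in "test"
        if score > best_score:                  # keep a change only if it helps
            best, best_score = candidate, score
    return best

print(self_calibrate((0.0, 0.0)))               # ends up near (2.0, -1.0)
```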
Where is the evidence that EURISKO ever accomplished anything? No one but the author has seen the source code.