If the problems are simple, why do you need a superintelligence? If they’re not, how are you verifying the results?
More importantly, how are you verifying that your (by necessity incredibly complicated) universal optimizing algorithms are actually doing what you want? It’s not like you can sit down and write out a proof—nontrivial applications of this technique are undecidable. (Also, “some code that . . . finds a good solution” is just a little bit of an understatement. . .)
The problems are easy to verify but hard to solve (like many NP problems). The results are verified by a dumb program. I verify that the optimization algorithms do what I want by testing them against the training set; if an algorithm does well on the training set without overfitting it too much, it should do well on new problems.
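Roughly, and purely as an illustrative sketch (the names are mine, not from the post), the division of labor I have in mind looks like this: the "dumb program" is a cheap verifier/scorer, and the optimizer is never trusted, only its verified output counts.

    # Illustrative sketch only; these names are hypothetical.
    def verify_and_score(problem, solution):
        """Return a score (higher is better), or None if the solution is invalid."""
        if solution is None or not problem["check"](solution):
            return None
        return problem["score"](solution)

    def evaluate_optimizer(optimizer, training_set, time_limit):
        """Total verified score of a candidate optimizer over a set of problems."""
        total = 0.0
        for problem in training_set:
            solution = optimizer(problem["instance"], time_limit)
            total += verify_and_score(problem, solution) or 0.0
        return total

Overfitting would show up as a gap between this training-set total and the same total computed on held-out problems.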
As for how useful this is: I think general induction (resource-bounded Solomonoff induction) is NP-like in that you can verify an inductive explanation in a relatively short time. Just execute the program and verify that its output matches the observations so far.
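As a rough illustration of what I mean by verifying an inductive explanation (all names are hypothetical; a Python callable stands in for a resource-bounded program, and its stated description length is taken as given):

    # Illustrative sketch only.
    def verify_explanation(program, description_length, observations, max_steps):
        """Accept the explanation iff its (bounded) output reproduces the data
        so far; among accepted explanations, shorter ones score higher."""
        try:
            output = program(max_steps)                      # resource-bounded run
        except Exception:
            return None
        if list(output)[:len(observations)] != observations:
            return None
        return -description_length

    # Example: observed data 0,1,0,1,... and a short candidate explanation.
    observations = [0, 1, 0, 1, 0, 1]
    candidate = lambda steps: [i % 2 for i in range(min(steps, 100))]
    print(verify_explanation(candidate, 12, observations, max_steps=1000))   # -12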
(Also, “some code that . . . finds a good solution” is just a little bit of an understatement. . .)
Yes, but any seed AI will be difficult to write. This setup allows the seed program to improve itself.
edit: I just realized that mathematical proofs are also verifiable. So, a program that is very very good at verifiable optimization problems will be able to prove many mathematical things. I think all these problems it could solve are sufficient to demonstrate that it is an AGI and very very useful.
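Checking a proof is mechanical and fast even when finding one is hard. A toy illustration (a minimal propositional resolution checker; the format and names are mine, purely illustrative):

    # Illustrative sketch only.  Literals are nonzero integers; negation is the sign.
    def check_refutation(clauses, proof):
        """clauses: list of frozensets of literals.  proof: list of steps, each
        ("axiom", i) citing an input clause or ("resolve", j, k, lit) resolving
        two earlier derived clauses on lit.  Returns True iff every step is
        legal and the final derived clause is empty."""
        derived = []
        for step in proof:
            if step[0] == "axiom":
                derived.append(clauses[step[1]])
            elif step[0] == "resolve":
                _, j, k, lit = step
                a, b = derived[j], derived[k]
                if lit not in a or -lit not in b:
                    return False
                derived.append((a - {lit}) | (b - {-lit}))
            else:
                return False
        return bool(derived) and derived[-1] == frozenset()

    # Refute the contradictory clause set {p}, {not p}:
    print(check_refutation([frozenset({1}), frozenset({-1})],
                           [("axiom", 0), ("axiom", 1), ("resolve", 0, 1, 1)]))   # True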
You appear to be operating under the assumption that you can just write a program that analyzes arbitrarily complicated specifications for how to organize matter and hands you a “score” that’s in some way related to the actual functionality of those specifications. Or possibly that you can make exhaustive predictions about the results of problems complicated enough to justify developing an AGI superintelligence in the first place. Which is, to be frank, about as likely as you solving the problems by way of randomly mixing chemicals and hoping something useful happens.
This system is only meant to solve problems that are verifiable (e.g. NP problems). Which includes general induction, mathematical proofs, optimization problems, etc. I’m not sure how to extend this system to problems that aren’t efficiently verifiable but it might be possible.
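A standard toy example of what "efficiently verifiable" means here (subset sum: the certificate is checked in linear time even though finding it may take exponential search):

    # Illustrative sketch only.
    def verify_subset_sum(numbers, target, certificate):
        """certificate: list of distinct indices into numbers summing to target."""
        if len(set(certificate)) != len(certificate):
            return False
        if any(i < 0 or i >= len(numbers) for i in certificate):
            return False
        return sum(numbers[i] for i in certificate) == target

    print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [2, 4]))   # True: 4 + 5 = 9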
One use of this system would be to write a seed AI once we have a specification for the seed AI. Specifying the seed AI itself is quite difficult, but probably not as difficult as satisfying that specification.
It can prove things about mathematics that can be proven procedurally, but that’s not all that impressive. Lots of real-world problems are either mathematically intractable (really intractable, not just “computers aren’t fast enough yet” intractable) or based in mathematics that isn’t amenable to proofs. So you approximate and estimate and experiment and guess. Then you test the results repeatedly to make sure they don’t induce cancer in 80% of the population, unless the results are so complicated that you can’t figure out what it is you’re supposed to be testing.
Right, this doesn’t solve friendly AI. But lots of problems are verifiable (e.g. hardware design, maybe). And if the hardware design the program creates causes cancer and the humans don’t recognize this until it’s too late, they probably would have invented the cancer-causing hardware anyway. The program has no motive other than to execute an optimization program that does well on a wide variety of problems.
Basically I claim that I’ve solved friendly AI for verifiable problems, which is actually a wide class of problems, including the problems mentioned in the original post (source code optimization etc.)
Now it doesn’t seem like your program is really a general artificial intelligence—improving our solutions to NP problems is neat, but not “general intelligence.” Further, there’s no reason to think that “easy to verify but hard to solve problems” include improvements to the program itself. In fact, there’s every reason to think this isn’t so.
Now it doesn’t seem like your program is really a general artificial intelligence—improving our solutions to NP problems is neat, but not “general intelligence.”
General induction, general mathematical proving, etc. aren’t general intelligence? Anyway, the original post concerned optimizing things like program code, which can be done if the optimizations have to be proven correct.
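As a rough sketch of what a verifiable code-optimization problem could look like (the names and the timing metric are mine; a test suite stands in here for a real machine-checked equivalence proof):

    # Illustrative sketch only.
    import timeit

    def score_rewrite(reference, candidate, test_inputs):
        """Reject any candidate that disagrees with the reference; otherwise
        reward faster code (higher score is better)."""
        for x in test_inputs:
            if candidate(x) != reference(x):
                return None                      # not a verified optimization
        cost = timeit.timeit(lambda: [candidate(x) for x in test_inputs], number=100)
        return -cost

    # Example: an O(n) sum of 0..n versus the closed form.
    reference = lambda n: sum(range(n + 1))
    candidate = lambda n: n * (n + 1) // 2
    print(score_rewrite(reference, candidate, test_inputs=range(1000)))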
Further, there’s no reason to think that “easy to verify but hard to solve problems” include improvements to the program itself. In fact, there’s every reason to think this isn’t so.
That’s what step (3) is. Program (3) is itself an optimizable function which runs relatively quickly.
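Roughly, and purely as an illustrative sketch: because the scorer of optimizers (program (3)) is itself cheap to run, it can be handed to the current optimizer as just another verifiable objective. Here evaluate_optimizer is the scoring sketch above, and propose_optimizer is a hypothetical stand-in for running the current optimizer with that objective as its problem instance.

    # Illustrative sketch only.
    def improvement_step(current_optimizer, propose_optimizer, training_set, time_limit):
        objective = lambda opt: evaluate_optimizer(opt, training_set, time_limit)

        # Search for a candidate optimizer, then keep whichever of the two
        # verifiably scores higher on the same trusted objective.
        candidate = propose_optimizer(current_optimizer, objective, time_limit)
        if candidate is not None and objective(candidate) > objective(current_optimizer):
            return candidate
        return current_optimizer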