if you have a very general and information-theoretically simple problem solver, like evolution or AIXI, then in order to make it solve a specific problem you need a complex fitness function
I agree with this as well. That said, sometimes that fitness function is implicit in the real world, and need not be explicitly formalized by me.
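To make the point concrete, here is a minimal sketch (my own toy example, not anything from the discussion above) of a fully generic evolutionary search: the solver itself knows nothing about the problem, and all problem-specific knowledge lives in the fitness function passed in. The `evolve` function and the all-ones target are made up for illustration.

```python
import random

random.seed(0)  # for reproducibility of this toy run

def evolve(fitness, genome_len, pop_size=50, generations=200):
    """Generic truncation-selection evolutionary search over bit strings.
    The solver is problem-agnostic; every bit of problem-specific
    knowledge is carried by the `fitness` argument."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]      # keep the top half
        children = []
        for p in parents:
            child = p[:]
            child[random.randrange(genome_len)] ^= 1  # flip one bit
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# The complexity hides in the fitness function. Here it is a trivial
# match-the-target score; for a real problem (say, scoring candidate
# protein conformations) writing this function is the hard part.
target = [1] * 20
best = evolve(lambda g: sum(gi == ti for gi, ti in zip(g, target)),
              genome_len=20)
```

Swapping in a different fitness function changes what gets solved without touching a line of the solver, which is the sense in which the specificity lives in the fitness function rather than in the "general" machinery.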
Once you have generated a hypothesis, and an experiment to test it, you have already done most of the work. What reason do I have to believe that this is not true for the protein folding problem?
As I’ve said a couple of times now, I don’t have a dog in the race wrt the protein folding problem, but your argument seems to apply equally well to all conceivable problems. That’s why I asked a while back whether you think algorithm design is the single hardest problem for humans to solve. As I suggested then, I have no particular reason to think the protein-folding problem is harder (or easier) than the algorithm-design problem, but it seems really unlikely that no problem has this property.
That’s why I asked a while back whether you think algorithm design is the single hardest problem for humans to solve.
The problem is that I don’t know what you mean by “algorithm design”. Once you have solved “algorithm design”, what do you expect to be able to do with it, and how?
Once you compute this “algorithm design” algorithm, what will its behavior look like? Will it output all possible algorithms, or just the algorithms that you care about? If the latter, how does it know which algorithms you care about?
There is no brain area for “algorithm design”. There is just a computational substrate that can learn, recognize patterns, and so on, and whose behavior is defined and constrained by its environmental circumstances.
Say you cloned Donald E. Knuth and had him grow up under completely different circumstances, e.g. as a member of some Amazonian tribe. This clone has the same algorithm-design potential, but he lacks the right inputs and constraints to output “The Art of Computer Programming”.
What I want to highlight is that “algorithm design”, or even “general intelligence”, is not sufficient to get an “algorithm that predicts protein structures from their sequences”.
Solving “algorithm design” or “general intelligence” does not give you some sort of oracle, in the same sense that a universal Turing machine does not give you “algorithm design” or “general intelligence”. You have to program the Turing machine in order to compute “algorithm design” or “general intelligence”. In the same sense, you have to define what algorithm you want, or what problem you want solved, in order for your “algorithm design” or “general intelligence” to do what you want.
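The same asymmetry can be shown with a toy program synthesizer (again my own illustration, with a made-up five-operation expression language): the enumerator is perfectly “universal” over its little language, yet it can do nothing useful until a specification, here input/output examples, tells it which algorithm is wanted.

```python
# A toy "universal" solver: enumerate programs in a tiny expression
# language until one satisfies the specification. The search is fully
# generic; the *specification* (input/output examples) is what tells
# it which algorithm we actually want.
OPS = ["x", "x + 1", "x * 2", "x * x", "x - 1"]

def run(prog, x):
    # Toy interpreter for the toy language (safe here: OPS is fixed).
    return eval(prog, {"x": x})

def synthesize(examples, max_depth=3):
    """Breadth-first search over compositions of OPS, returning the
    first program consistent with all the examples."""
    programs = list(OPS)
    for _ in range(max_depth):
        for prog in programs:
            if all(run(prog, x) == y for x, y in examples):
                return prog
        # Compose each primitive op with each existing program.
        programs = [p.replace("x", f"({q})") for p in OPS for q in programs]
    return None

# Without examples, the enumerator has no way to prefer one program
# over another; with them, it settles on a specific algorithm.
spec = [(1, 4), (2, 9), (3, 16)]
print(synthesize(spec))
```

Removing `spec` leaves the universal machinery intact but useless, which is the point: the specification, not the generality, carries the information about what you want solved.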
Just imagine having a human baby, the clone from a 250-IQ eugenics experiment, and asking it to solve protein folding for you. Well, it doesn’t even speak English yet. Even though you have this superior general intelligence, it won’t do what you want it to do without a lot of additional work. And even then it is not clear that it will have the motivation to do so.
Tapping out now.