Sorry, I didn’t see that you had answered most of this question in the other thread where I first asked it.
Toby, if you were too dumb to see the closed-form solution to problem 1, it might take an intense effort to tweak the bit on each occasion, or perhaps you might have trouble turning the global criterion of total success or failure into a local bit-fixer; now imagine that you are also a mind that finds it very easy to sing MP3s...
The reason you think one problem is simple is that you perceive a solution in closed form; you can imagine a short program, much shorter than 10 million bits, that solves it, and the work of inventing this program was done in your mind without apparent effort. So this problem is trivial on the meta-level: the program that solves it optimally appears very early in the ordering of possible programs, and is moreover prominent in that ordering relative to our instinctive transformations of the problem specification.
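For concreteness, the "short program, much shorter than 10 million bits" can be a one-liner; a minimal sketch in Python (the exact output format is an illustrative assumption):

```python
# A closed-form solver for the alternating-bits problem: a program a few
# dozen characters long whose output is the full 10,000,000-bit string.
N = 10_000_000  # problem size as stated in the discussion

solution = ("01" * (N // 2 + 1))[:N]  # "0101...", the alternating string

assert len(solution) == N
assert "00" not in solution and "11" not in solution
```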
But if you were trying random solutions and the solution tester was a black box, then the alternating-bits problem would indeed be harder—so you can’t be measuring the raw difficulty of optimization if you say that one is easier than the other.
This is why I say that the human notion of “impressiveness” is best constructed out of a more primitive notion of “optimization”.
We also do, legitimately, find it more natural to talk about “optimized” performance on multiple problems than on a single problem—if we’re talking about just a single problem, then it may not compress the message much to say “This is the goal” rather than just “This is the output.”
I take it, then, that you agree that (1) is a problem of 9,999,999 bits and that the travelling-salesman version is as well. Could you take these things and generate an example which doesn’t just give ‘optimization power’, but ‘intelligence’, or maybe just ‘intelligence-without-adjusting-for-resources-spent’? You say over a set of problem domains, but presumably not over all of them, given the no-free-lunch theorems. Any example, or is this vague?
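(For reference, the 9,999,999-bit figure is presumably the arithmetic below; a sketch, assuming the measure is the negative log of the fraction of outputs that hit the target:)

```latex
% Of the 2^{10,000,000} possible 10,000,000-bit strings, exactly two
% alternate (one starting with 0, one with 1), hence
\[
  -\log_2 \frac{2}{2^{10{,}000{,}000}}
  \;=\; 10{,}000{,}000 - 1
  \;=\; 9{,}999{,}999 \text{ bits.}
\]
```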
But if you were trying random solutions and the solution tester was a black box
Then you’re not solving the same optimization problem anymore. If the black box just had two outputs, “good” and “bad”, then, yes, a black box that accepts fewer input sequences is going to be one that is harder to make accept. On the other hand, if the black box had some sort of metric on a scale from “bad” going up to “good”, and the optimizer could update on the output each time, the sequence problem is still going to be much easier than the MP3 problem.
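To make the contrast concrete, here is a minimal sketch (the toy size n = 32 and the particular graded metric, "count of adjacent bits that differ", are illustrative assumptions, not anything specified above): with a graded score the optimizer can update on, a trivial hill climber solves the sequence problem quickly; with only a good/bad output, blind search expects on the order of 2^(n-1) tries.

```python
import random

# Toy illustration: the real discussion involves 10,000,000 bits; n is shrunk here.
n = 32
rng = random.Random(0)

def graded_score(bits):
    """Assumed graded black-box metric: number of adjacent positions that differ.
    It is maximal, n - 1, exactly when the string alternates."""
    return sum(a != b for a, b in zip(bits, bits[1:]))

# Hill climbing against the graded box: flip a random bit, keep the flip
# whenever the reported score does not drop, stop at the maximum score.
bits = [rng.randint(0, 1) for _ in range(n)]
score = graded_score(bits)
queries = 1
while score < n - 1:
    i = rng.randrange(n)
    bits[i] ^= 1
    new_score = graded_score(bits)
    queries += 1
    if new_score >= score:
        score = new_score
    else:
        bits[i] ^= 1  # revert a worsening flip

print("".join(map(str, bits)), f"found after {queries} graded queries")
# Against a box that only says "good"/"bad", blind random guessing would
# instead expect on the order of 2**(n-1) queries before hitting one of the
# two alternating strings -- hopeless at n = 10,000,000.
```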