Did you do the math on this one? Even with only 10% of programs caught in a loop, it would take almost 400 years to get through all programs up to 24 bits long.
We need something faster.
(Do you see now why Hutter hasn’t simply run AIXI with your shortcut?)
Of course, in practice many loops can be caught, but the combinatorial explosion really does blow any such technique out of the water.
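A rough sanity check of that figure, as a sketch rather than anything authoritative: it assumes the one-hour cutoff per stuck program that the proposal quoted later in the thread mentions, and ignores all the time spent on programs that do halt.

```python
# Back-of-the-envelope check of the "almost 400 years" figure, assuming the
# one-hour cutoff per stuck program mentioned in the proposal quoted below,
# and ignoring all the time spent on programs that halt normally.
programs_up_to_24_bits = sum(2 ** n for n in range(1, 25))   # 2**25 - 2, about 33.5 million
looping_fraction = 0.10                                      # 10% assumed stuck in loops
wasted_hours = programs_up_to_24_bits * looping_fraction     # one wasted hour per stuck program
print(f"{programs_up_to_24_bits:,} programs; {wasted_hours / (24 * 365):.0f} years lost to timeouts")
# Prints roughly 383 years, i.e. "almost 400 years".
```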
Uh, I was giving a computable algorithm, not a rapid one.
The objection that compression is an uncomputable strategy is a useless one—you just use a computable approximation instead—with no great loss.
But you were implying that the uncomputability is somehow “not a problem” because of a quick fix you gave, when the quick fix actually means waiting at least 400 years—under unrealistically optimistic assumptions.
Yes, I do use a computable approximation, and my computable approximation has already done the work of identifying the important part of the search space (and the structure thereof).
And that’s the point—compression algorithms haven’t done so, except to the extent that a programmer has fed them the “insights” (known regularities of the search space) in advance. That doesn’t tell you the algorithmic way to find those regularities in the first place.
Re: “But you were implying that the uncomputability is somehow ‘not a problem’”
That’s right—uncomputability is not a problem—you just use a computable compression algorithm instead.
Re: “And that’s the point—compression algorithms haven’t done so, except to the extent that a programmer has fed them the “insights” (known regularities of the search space) in advance.”
The universe itself exhibits regularities. In particular sequences generated by small automata are found relatively frequently. This principle is known as Occam’s razor. That fact is exploited by general purpose compressors to compress a wide range of different data types—including many never seen before by the programmers.
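That claim is easy to illustrate with any off-the-shelf compressor: data produced by a tiny rule compresses dramatically, while structureless data does not. A minimal sketch (the squaring rule and the 4 KB buffer sizes are arbitrary choices for this illustration, not anything taken from the discussion):

```python
import os
import zlib

# Data produced by a tiny rule versus data with no structure at all: a
# general-purpose compressor finds the rule-generated regularity without
# having been told anything about the rule in advance.
rule_generated = bytes((i * i) % 251 for i in range(4096))   # arbitrary "small automaton" stand-in
patternless = os.urandom(4096)                               # incompressible noise for contrast

for name, blob in (("rule-generated", rule_generated), ("random", patternless)):
    print(f"{name}: {len(blob)} bytes -> {len(zlib.compress(blob, 9))} bytes")
```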
You said that it was not a problem with respect to creating superintelligent beings, and I showed that it is.
The universe itself exhibits regularities. …
Yes, it does. But, again, scientists don’t find them by iterating through the set of computable generating functions, starting with the smallest. As I’ve repeatedly emphasized, that takes too long. Which is why you’re wrong to generalize compression as a practical, all-encompassing answer to the problem of intelligence.
This is growing pretty tedious, for me, and probably others :-(
You did not show uncomputability is a problem in that context.
I never claimed iterating through programs was an effective practical means of compression. So it seems as though you are attacking a straw man.
Nor do I claim that compression is “a practical, all-encompassing answer to the problem of intelligence”.
Stream compression is largely what you need if you want to predict the future, or build parsimonious models based on observations. Those are important things that many intelligent agents want to do—but they are not themselves a complete solution to the problem.
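One common way to make the compression-to-prediction link concrete is to score each candidate next symbol by how cheaply a compressor encodes the history plus that symbol. A minimal sketch, using zlib merely as a stand-in for the “good stream compressor” under discussion (the binary alphabet and the toy stream are assumptions made for the example):

```python
import zlib

def predict_next(history: bytes, alphabet: bytes = b"01") -> bytes:
    """Compression-based prediction: prefer whichever next symbol the
    compressor can encode most cheaply on top of the history so far."""
    def cost(candidate: int) -> int:
        return len(zlib.compress(history + bytes([candidate]), 9))
    return bytes([min(alphabet, key=cost)])   # ties fall back to alphabet order

if __name__ == "__main__":
    stream = b"01" * 200                      # a highly regular observation stream
    print(predict_next(stream))               # prints b'0', the pattern-respecting guess
# With a toy compressor like zlib the margin may be a byte or nothing at all;
# the point is only the link between compressing a stream and predicting it.
```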
Just to show the circles I’m going in here:
Right, I showed it is a problem in the context in which you originally brought up compression—as a means to solve the problem of intelligence.
Yes, you did. Right here:
What’s with complaining that compressors are uncomputable?!? Just let your search through the space of possible programs skip on to the next one whenever you spend more than an hour executing. Then you have a computable compressor. That ignores a few especially tedious and boring areas of the search space—but so what?!? Those areas can be binned with no great loss.
You also say:
Nor do I claim that compression is “a practical, all-encompassing answer to the problem of intelligence”.
Again, yes you did. Right here. Though you said compression was only one of the abilities needed, you did claim “If we had good stream compressors we would be able to predict the future consequences of actions...” and predicting the future is largely what people would classify as having solved the problem of intelligence.
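For reference, the “skip on to the next one whenever you spend more than an hour executing” proposal quoted above amounts to a time-bounded, shortest-first search over programs. A toy sketch of that structure follows; the 8-bit-counter “machine” is invented purely so the example runs, and a real enumeration would execute genuine programs under a wall-clock cap.

```python
from itertools import product

def run_program(bits, max_steps=1_000):
    """Toy stand-in for 'execute a candidate program under a time budget'.
    A 'program' here is a bit tuple for an invented machine: the first 8 bits
    encode a repeat count, the remaining bits are the unit to repeat."""
    if len(bits) < 9:
        return None
    count = int("".join(map(str, bits[:8])), 2)
    unit = bits[8:]
    if count * len(unit) > max_steps:   # over budget: skip it, as the proposal suggests
        return None
    return unit * count

def bounded_compress(target, max_len=16):
    """Shortest-first search for a program whose output reproduces `target`.
    Skipping over-budget programs keeps the search computable, but the number
    of candidates still doubles with every extra bit of program length."""
    target = tuple(target)
    for length in range(1, max_len + 1):
        for bits in product((0, 1), repeat=length):
            if run_program(bits) == target:
                return bits             # shortest description found
    return None                         # give up and store the data verbatim

if __name__ == "__main__":
    data = (1, 0) * 100                 # 200 symbols of alternating 1s and 0s
    print(bounded_compress(data))       # a 10-bit program: count 100, unit (1, 0)
```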
I disagree with all three of your points. However, because the discussion has already been going on for so long—and because it is so tedious and low grade for me, I am not going to publicly argue the toss with you any more. Best wishes...
Okay, onlookers: please decide which of us (or both, or neither) was engaging the arguments of the other, and comment or vote accordingly.
ETA: Other than timtyler, I mean.