About the relationship to Occam’s razor: note you could make a version of SI that gave some long programs more weight than some short programs, and it would still have the same nice properties. SI only needs some probability distribution that sums to 1 over the space of all programs. Since there’s a countable infinity of programs, the weights should converge to zero, but the convergence doesn’t have to be monotonous.
About the universe being computable: in some games, like the log score game, you don’t need to assume that. It’s enough to say that you are computable and therefore SI will beat you in the long run on any given sequence of observations. That’s true even when the sequence of observations is allowed to be uncomputable (the proof is a simple exercise).
About the relationship to Occam’s razor: note you could make a version of SI that gave some long programs more weight than some short programs, and it would still have the same nice properties.
A conjecture of mine, though I don’t know enough to settle it on my own: is it true that for every Solomonoff-like prior, shorter programs almost always have higher probabilities than longer ones? “Almost always” here means “for all but a finite number”.
About the universe being computable: in some games, like the log score game, you don’t need to assume that. It’s enough to say that you are computable and therefore SI will beat you in the long run on any given sequence of observations.
Well, SI itself is uncomputable, so... I guess that one can prove this is not the case for computable priors.
is it true that for every Solomonoff-like prior, shorter programs almost always have higher probabilities than longer ones? “Almost always” here means “for all but a finite number”.
No, that’s not true. You can take any infinite set of pairs of programs and swap the weights in each pair.
I guess that one can prove this is not the case for computable priors.
Yeah, all universal priors are uncomputable. For any computable prior, you can build a computable sequence whose probability under that prior is 0. For example, you can ask the prior “which choice of next bit would have probability below 0.6?” at each step.
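That diagonalization is easy to sketch in code. This is my own toy illustration, not part of the original comment: the Laplace rule of succession stands in for an arbitrary computable prior, and at each step at least one of the two next-bit probabilities is at most 1/2 < 0.6, so the adversary can always find a qualifying bit.

```python
import math

# Toy sketch (illustrative, not from the original discussion): diagonalize
# against a computable predictor. The Laplace rule of succession stands in
# for the computable prior; the adversary always picks the next bit whose
# predicted probability is at most 0.5, so the sequence it builds gets
# vanishingly small probability under the predictor.

def laplace_prob_of_one(bits):
    """Computable predictor: P(next bit = 1) by the Laplace rule."""
    return (sum(bits) + 1) / (len(bits) + 2)

def adversarial_sequence(n):
    """A computable sequence whose log2-probability under the predictor
    decreases by at least 1 bit per step."""
    bits, log2_prob = [], 0.0
    for _ in range(n):
        p1 = laplace_prob_of_one(bits)
        bit = 1 if p1 <= 0.5 else 0          # the less likely option
        log2_prob += math.log2(p1 if bit == 1 else 1.0 - p1)
        bits.append(bit)
    return bits, log2_prob

bits, log2_prob = adversarial_sequence(100)
print(log2_prob)  # <= -100: the observed prefix has probability <= 2**-100
```

Since the per-step probability assigned to the chosen bit never exceeds 1/2, the prior’s probability of the whole sequence decays at least geometrically, hence tends to 0.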
for every Solomonoff-like prior, do shorter programs almost always have higher probabilities than longer ones?
No, but something weaker is true. Take any probability P, e.g. 1/1000. No more than 1/P (e.g. 1000) programs can carry this or higher probability, and they’re all short in the trivial sense of being shorter in length than some fixed L, depending only on P and your formalism. So in a sense, the likely programs are all short, and all the really long programs are really unlikely.
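The counting argument can be checked numerically. The specific prior below is my own illustrative choice (not the universal prior itself): each binary string s gets weight 2^-(2·len(s)+1), which sums to 1 over all strings.

```python
from itertools import product

# Toy check of the counting argument (the prior here is an illustrative
# choice): give each binary string s the weight 2**-(2*len(s) + 1).
# These weights sum to 1, since the 2**n strings of length n together
# contribute 2**n * 2**-(2n+1) = 2**-(n+1).

def weight(s):
    return 2.0 ** -(2 * len(s) + 1)

P = 1 / 1000
likely = [s
          for n in range(12)
          for s in (''.join(t) for t in product('01', repeat=n))
          if weight(s) >= P]

print(len(likely))                  # far fewer than 1/P = 1000 programs
print(max(len(s) for s in likely))  # and all of them are short
```

With this weighting, only the 31 strings of length at most 4 clear the 1/1000 threshold, so the bound L from the comment above is 4 here.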
For every program T, all but a finite number of programs longer than T have lower probabilities.
Let’s say that T has probability P. This means that at most 1/P programs carry this or a higher probability. Let’s say that L is the maximum length of those programs, while T has length S. Then at most 1/P programs are both longer and more probable than T, while all the (infinitely many) others are longer and less probable.
For every program T, all but a finite number of programs longer than T have lower probabilities.
Yeah, that’s correct. Note that it’s also correct if you remove “longer than T”: “For every program T, all but a finite number of other programs have lower probabilities”. Which is obviously true because you can’t have an infinite number of programs whose probability is above some threshold.
It’s a well known argument, I learned it from Shalizi’s note. There’s other work trying to justify Occam’s razor when the set of hypotheses is finite, e.g. Kevin Kelly’s work.
Bayesian explanations of Ockham’s razor are based on a circular appeal to a prior bias toward simple possibilities.
I think it is possible to appeal to simpler principles, modifying some of the points I made above.
Indeed, I think it is possible to non-circularly explain why
the Razor is the rule which says “among the theories compatible with the evidence, chose the simplest”, and the question is why this leads to the truth better than a rule like “among the theories compatible with the evidence, chose the one whose statement involves the fewest occurrences of the letter ‘e’”, or even “the one which most glorifies Divine Providence”.
Yep, that’s correct. Basically, short programs suffer from there only being a finite supply of them. The actual distribution of probabilities to programs doesn’t matter that much.
That’s why I think SI doesn’t work very well as justification/support for Occam’s Razor.
Because the prediction itself doesn’t use the shortness of programs in any essential way, and the shortness only matters because of the unavoidable nature of the infinite formalism.
A thought experiment: suppose we’ve determined that the Universe is finite (possible) and worked out an upper bound M on its informational capacity, and we know that programs longer than M will never be run in this universe. We might want to rework SI using a very large finite probability space of programs rather than an infinite one. Given a valid prior distribution over this finite space, SI itself, as a prediction procedure, will continue to work, and its promise of correctness will still be there in an updated fashion. But for SI to work there will be no reason whatsoever to assign greater probabilities to shorter programs in this large finite space. A uniform distribution will work just as well, or even giving really large weight to the longest programs and systematically biasing against shorter ones. It will be all the same to SI. Clearly it doesn’t have anything to do with Occam’s Razor in this thought experiment. The connection, then, is forced upon us by the mere fact that we’re using an infinite set of finite-length algorithms, and nothing else; not by anything about SI itself: what we predict, how we do it, etc.
In this case I feel more inclined to bite the bullet. After all, Solomonoff-as-it-is is an application of the basic requirement of probability: “use all your available information, and nothing else”.
When the totality of what we know doesn’t constrain us to a finite universe, Occam emerges as a trivial fact about allowing infinitely many explanations.
But if the totality of what we know does indeed produce a finite model, and induction works from any prior, then I say that the razor simply doesn’t apply.
After all, I prefer to derive my epistemology from what works rather than the other way around.
Sure, if that’s the way you like it, but for me that just doesn’t work. Occam’s Razor is a principle that is supposed to help me think better here and now; to decide that its justification rests on whether, say, the Universe is discrete at Planck scale or not, when this choice has to be compatible with QM and Newtonian mechanics at larger scales and therefore doesn’t change anything in practical terms in my life here and now—that seems absurd. To me, it’s clear evidence that this is no justification at all.
Yeah, people get quite deeply confused by references to Occam’s razor, “length of programs”, and other such stuff.
If you only consider the programs of length N bits, approximately 2^(N-M) programs will have an output sequence forever identical to that of a program that can be minimally written in M<N bits.
Another confusion is with regard to how Solomonoff Induction would learn the program of length ~N from getting N bits of input. No, it wouldn’t—you can write, in a small number of bits, a program that outputs BB(20) zeroes and then a string of ones, and until you’ve read over BB(20) input bits you won’t learn a thing.
I’m confused. I thought SI assigns equal probabilities to every program in the infinite space, and then simpler predictions turn out to be more likely because there are many more programs that make simple predictions than there are programs that make complicated predictions.
As I understand it, SI works even better in the finite case than in the infinite case, because you don’t have to worry about infinite sums and whatnot. (Provided the domain of the finite case was something like “All the programs of length N or less” for some finite N.)
Yep, you’re confused. You can’t possibly assign equal probability to every program in an infinite space, because then the sum of all probabilities will diverge (go to infinity), and you need it to sum up to 1. What you do is assign probabilities in a way that the infinite sum will converge to 1, for example by lining all programs up in an infinite list: M1, M2, M3… and assigning probabilities 1⁄2, 1⁄4, 1⁄8, 1⁄16, 1⁄32 and so on to infinity. That’s essentially what the universal prior does, using a length-sorted infinite list of all possible programs (I’m skipping a few small technical difficulties).
The simpler predictions are typically more likely because of the one shortest program that generates them, not the many longer programs that also do—they do help, but because of the exponential decay they usually don’t amount to much. The shortest program tends to dominate.
Except I just lied a little in what I said. SI prediction doesn’t care if your program is short or long. It cares how early it comes on the infinite list M1,M2,M3… of all programs. When you arrange this list by program length, “earliest in the list” and “shortest” are basically the same. And in the usual infinite case, in a sense you cannot NOT arrange the list by program length. You can give your favourite very long programs some of the first spots on the list, and so high probabilities, but there’s an infinity of long programs out there, and you can’t nudge the infinity to where you want to go. But in the finite thought experiment, you’re free to arrange your list any way you want (and you don’t even need to have probabilities go down exponentially either). So you can just piss on Occam’s Razor and strictly reverse it by having longer programs always be more likely than shorter ones, if you feel like it. And SI prediction will still work.
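The finite thought experiment is easy to simulate. A minimal sketch, with my own toy stand-ins: the “programs” are bit patterns of length 1 to 4 that generate a sequence by repetition, and the prior is deliberately anti-Occam, giving weight proportional to 2^len(p). Bayesian prediction still locks onto the true generator.

```python
from itertools import product

# Finite toy version of the argument above (an illustration, not SI
# proper): programs are bit patterns of length 1..4, each generating an
# infinite sequence by repetition. The prior is deliberately anti-Occam:
# weight proportional to 2**len(p), so longer programs start out MORE
# likely. Conditioning on observations still finds the true generator.

patterns = [''.join(t) for n in range(1, 5) for t in product('01', repeat=n)]
prior = {p: 2.0 ** len(p) for p in patterns}
total = sum(prior.values())
prior = {p: w / total for p, w in prior.items()}

def gen(pattern, k):
    """First k bits of the sequence this pattern generates."""
    return (pattern * (k // len(pattern) + 1))[:k]

truth = '101'
observed = gen(truth, 12)

# Bayesian update: zero out inconsistent programs, renormalize the rest.
posterior = {p: w for p, w in prior.items() if gen(p, 12) == observed}
z = sum(posterior.values())
posterior = {p: w / z for p, w in posterior.items()}

print(posterior)  # the anti-Occam prior didn't stop us finding '101'
```

After twelve observed bits, only the true pattern remains consistent, so the posterior concentrates on it despite the reversed prior — exactly the point that prediction doesn’t care how the finite prior was arranged.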
That’s what I used to think about how SI worked. Then I read this explanation which seems to make sense to me, and seems to justify my view of things.
Specifically, this explanation shows how Solomonoff Induction doesn’t need to assume a prior that some hypotheses are more likely than others; it can weight them all equally, and then prove that simpler hypotheses have more “copies” and thus simpler predictions are more likely.
In the infinite case there is still the problem of how to get an infinite number of equally-weighted hypotheses to sum to probability 1. But this is what Measure Theory does, I believe. (I’m not a mathematician, but this is what I’ve read and been told.) So it isn’t a problem, any more than math is a problem.
But if the space was finite, as I said, then Solomonoff Induction wouldn’t even have that problem!
So, I no longer think I’m confused. I think that your understanding of SI portrays it as more arbitrary than it actually is. SI isn’t just a weighting by simplicity! It is a proof that weighting by simplicity is justified, given certain premises! (namely, that the space of hypotheses is the space of computable programs in a certain language, and that they are all equally likely, modulo evidence.)
In the infinite case there is still the problem of how to get an infinite number of equally-weighted hypotheses to sum to probability 1. But this is what Measure Theory does, I believe. (I’m not a mathematician, but this is what I’ve read and been told.) So it isn’t a problem, any more than math is a problem.
I am a mathematician, and yes measure theory can do this, but it will require you to make many (in fact infinitely many) arbitrary choices along the way.
Huh. Okay, thanks for the info. This is troubling, because I have long held out hope that SI would not turn out to be arbitrary. Could you direct me to where I can learn more about this arbitrariness in measure theory?
You can say that the input tape (where the program gets loaded from) is an infinite sequence of random bits. A ‘program of length l’ is a piece of code which sets the machine up so that after l bits subsequent bits do not matter. Every short program is also a set of longer programs.
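A quick numeric illustration of this random-tape reading (my own toy, with an arbitrary 4-bit “program”): the chance that a stream of fair coin flips begins with a fixed l-bit string is 2^-l, which is exactly the 2^-length(p) weight under discussion.

```python
import random

# Monte Carlo check (illustrative): the probability that an infinite
# sequence of fair coin flips begins with a fixed l-bit program is 2**-l.
# Here l = 4, so the empirical frequency should be close to 0.0625.

random.seed(1)
program = '0110'
trials = 200_000
hits = sum(
    all(random.getrandbits(1) == int(b) for b in program)
    for _ in range(trials)
)
print(hits / trials)  # close to 2**-4 = 0.0625
```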
You can. But is there any reason to think that this models well the inherent complexity of programs? Do we ever execute programs by choosing randomly bits that constitute them? We have algorithms that utilize randomness, to be sure, but UTMs are not, normally, such algorithms. I appreciate that “choosing program bits randomly” is a simple, short, succinct way of getting to 2^-length(p), but I don’t see a reason to think it’s natural in any interesting sense.
Well, indeed. There’s nothing natural about that whole silly notion. The laws of physics in our universe are highly symmetric (and the fundamental laws are even time-reversible, when mirrored and charge-flipped); what works for guessing the laws of physics is assuming symmetries, something that S.I. most dramatically does not do.
Furthermore, incorrect theories in S.I. can take a very long time to rule out, because it is very easy to set up—in very few bits—a long-running busy beaver that changes the output after many bits have been output. For an S.I.-driven agent, there are various arbitrary doomsdays looming with probabilities greater than 1/1024 (extra length less than 10 bits).
Furthermore, in the total absence of external input, an ideal inductor must assume it is not within a universe whose laws of physics do not support universal computation. This rules out the vast majority of the input tapes, which are given a non-zero prior in S.I.
Furthermore, incorrect theories in S.I. can take a very long time to rule out, because it is very easy to set up—in very few bits—a long-running busy beaver that changes the output after many bits have been output. For an S.I.-driven agent, there are various arbitrary doomsdays looming with probabilities greater than 1/1024 (extra length less than 10 bits).
Do you think a better agent should assign low probabilities to such arbitrary doomsdays in the first place? What would be a good general rule for that?
Having thought about it some more, I feel that a good induction method would start with p_doomsday(time) being a smooth function, and it would have to acquire a great many bits of information-theoretic data before that function could grow very sharp and specific peaks.
Meanwhile, S.I. starts with an enormous set of weird preconceptions due to the use of some one-dimensional Turing machine, and consequently produces really, really bad priors. The badness of said priors is somewhat masked by very-easy-for-laymen-to-misinterpret optimality proofs.
I think that if you plot a sane agent’s probability of doomsday over time, it won’t be some really weirdly shaped curve with incredibly sharp (Planck-time sharp) peaks at various points highly specific to the internal details of the agent. Really, it’s as if someone with a weird number-fear synesthesia looked at the list of years since the alleged birth of Christ, picked the scariest numbers, and then prophesied doomsday in those years. It is clearly completely insane.
I don’t understand the question. I thought that’s what we were talking about. Am I missing something?
To be more explicit: setting up a UTM with random bits on the input tape is a natural-seeming way of getting the probability distribution over programs (2^-length(p)) that goes into the Solomonoff prior. But as I’m trying to say in the comment you replied to, I don’t think it’s really natural at all. And of course SI doesn’t need this particular distribution in order to be effective at its job.
Yep, you’re confused. You can’t possibly assign equal probability to every program in an infinite space, because then the sum of all probabilities will diverge (go to infinity), and you need it to sum up to 1. What you do is assign probabilities in a way that the infinite sum will converge to 1, for example by lining all programs up in an infinite list: M1, M2, M3… and assigning probabilities 1⁄2, 1⁄4, 1⁄8, 1⁄16, 1⁄32 and so on to infinity. That’s essentially what the universal prior does, using a length-sorted infinite list of all possible programs (I’m skipping a few small technical difficulties).
Only if you insist on assigning non-zero probability to each individual program.
Is that responding to the “You can’t possibly assign equal probability to every program in an infinite space” part of the parent comment? If you assign equal probability to every program in an infinite space, then (assuming “probability” refers to a real number, and not some other mathematical object such as an infinitesimal) either that equal probability is non-zero, in which case the sum is infinite, or the equal probability is zero, in which case the sum is zero. Remember, if all the probabilities are equal, then either all of them are equal to zero, or none are, and it’s generally agreed that an infinite number of zeros add up to zero.
and it’s generally agreed that an infinite number of zeros add up to zero.
Actually 0·∞ is considered an indeterminate form. For example the interval from 0 to 1 has length 1 even though it’s composed of infinitely many points each of which has length 0.
See my reply to asr for links to more technical explanations.
Yep, you’re confused. You can’t possibly assign equal probability to every program in an infinite space, because then the sum of all probabilities will diverge (go to infinity), and you need it to sum up to 1.
Only if you insist on assigning non-zero probability to each individual program.
If you assign an equal probability to every program, and that probability is zero, then the sum, the total probability, is zero. If you assign an equal nonzero probability, the sum is infinite.
If you want to have a valid probability distribution, you can’t assign equal probabilities.
Well, if you’re willing to relax the third axiom of probability to finite additivity you can. Alternatively, use an uncountable state space, e.g., the space of all ultra-filters on Turing machines. Also, while we’re on the subject, there are numerous types of quasi-probability distributions worth considering.
Err, no, I wouldn’t call it a justification. It’s more like a technical setup of why shorter programs turn out to be more likely (because the padding happens to include them in this way). This is not a profound thing. It’s just one possible way to get where you want, which is to assign 2^-length(p) to p. Note that it’s brittle in the sense that you need a support from your formalism for that. You need your notion of what is a program to allow adding arbitrary bits to the end without affecting its validity or what it does. That’s not a very interesting thing to do: why would you even consider all those programs with dummy padding as real programs, given that whatever’s running them can probably easily rule them out? E.g. modern compilers will usually reject your program if you add some junk at the end.
If you restrict yourself to a finite program length, you could still only count minimal programs, those w/o padding, and give them appropriate weights (the way they do in the paper immediately afterwards for the infinite case, the standard way to set it up) and end up with the same thing. There’s no profound reason I can think of to say “oh, for fixed bound L on program length we’re just only going to look at programs of length exactly L, including padded programs, but for unrestricted length we’re throwing out padded programs and only looking at minimal ones”. The real reason is that you want to sum up to 1, and you want to prefer short programs while doing so.
Now, something closer to justification is when they point out right afterwards that 2^-length(p) is what you get if you randomly choose program bits one by one. But as justifications go, I think it’s a pretty weak one, because no one really computes anything in any way that looks like that. And more importantly, as I said above, it just doesn’t matter for SI prediction. You don’t have to use 2^-length(p) for SI prediction to work. You just need to have it all sum up to 1.
You kind of missed my point. I provided it as a technical justification why the exact formula 2^-length(p) makes sense. As you said, “It’s one possible way to get where you want”. I know that 2^-length(p) is not the only possible option, but it’s the most natural one given that we know nothing about the input.
The model of computation is a “Universal Turing machine provided with completely random noise” (i.e. fair coin flips on the input tape). It’s a mathematical formalism; it has nothing to do with modern compilers.
I didn’t miss your point; instead, I’ve tried to explain your point to you, and perhaps didn’t do a very good job. You’re confusing two different things, which are both spelled out on page 1120 of the paper you referenced; but in the comment of yours that you linked me to, you’ve only recounted the first one, perhaps thinking that the second one is the same thing, which it is not.
The first thing is setting up probabilities over programs of a given fixed length L, so that padding helps us take into account also shorter programs. In the paper, this is the paragraph starting “This sum is still only over programs of length ℓ...”. This is what you talked about in your other comment, and this is what I talked about in my first paragraph of the reply above.
The second thing is a way of setting up probabilities over programs of any length without bound, using a UTM-with-random-noise model as justification for the formula. This is an inherently different way of assigning probabilities, where in particular we do not use padding, but on the contrary only take into account minimal programs. This is what the paper talks about in the next two paragraphs, the second of which, starting “This is a highly technical explanation...” explains the UTM-with-random-noise. I talk about this second thing in the third paragraph of my reply above.
Hopefully I made it clearer. If you still think I missed your point, let’s leave it at that.
Unless I have access to the right kind of uncomputable oracle.
If you’re computable and interacting with a world that contains an uncomputable oracle, and SI is in the same position, I don’t really see what advantage you have. After all, your algorithm can be found somewhere within SI’s mixture. It depends on the game setup though. We discussed some setups here.
What do you mean, precisely, by “SI is in the same position”?
If I understand correctly, SI can be “upgraded” by changing the underlying prior.
So if we have strong reasons suspect that we, humans, have access to the halting oracle, then we should try to build AI reasoning that approximates SI + a prior defined over the universal Turing machine enhanced with halting oracle.
If we don’t (as is the case at the moment), then we just build AI that approximates SI + the Universal prior.
Either way, there are no guessing games in which we, on average, are able to beat the AI, as long as the approximation is good enough.
What do you mean, precisely, by “SI is in the same position”?
I mean that the computable part of you is facing the decision problem “which buttons on the oracle black box do I press, and how do I use the answers on the screen when choosing my next action?” and SI is facing the same decision problem. (Unless of course you want to define the problem so that only you can press buttons on the black box, and the SI agent by definition can’t. That would seem unfair to me, though.)
Given the above, in some game setups (though probably not all) I would expect SI to beat you, just like it beats you in the log-score game even when the input is uncomputable and comes from a level 100 oracle or whatever. The relevant fact is not whether the input is computable, but that SI is pretty much a multiplicative weights mixture of all possible computable “experts” which includes you, and there are many situations where a multiplicative weights mixture provably beats any expert on any input, up to a constant etc. etc.
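That “up to a constant” guarantee is easy to verify numerically. A sketch with made-up experts and weights: a Bayes mixture scored by log loss stays within log2(1/w_i) bits of expert i on any bit sequence whatsoever, because the mixture assigns the sequence at least w_i times the probability expert i assigns it.

```python
import math
import random

# Sketch of the multiplicative-weights / Bayes-mixture guarantee mentioned
# above (the experts and weights are arbitrary stand-ins): on ANY bit
# sequence, the mixture's cumulative log2 loss is within log2(1/w_i) of
# expert i's loss.

def mixture_log_loss(experts, weights, bits):
    w = list(weights)
    loss = 0.0
    for t, bit in enumerate(bits):
        preds = [e(t) for e in experts]              # each expert's P(bit=1)
        p1 = sum(wi * pi for wi, pi in zip(w, preds)) / sum(w)
        loss += -math.log2(p1 if bit == 1 else 1.0 - p1)
        # Bayes update: scale each weight by that expert's probability
        w = [wi * (pi if bit == 1 else 1.0 - pi) for wi, pi in zip(w, preds)]
    return loss

def expert_log_loss(expert, bits):
    return sum(-math.log2(expert(t) if b == 1 else 1.0 - expert(t))
               for t, b in enumerate(bits))

experts = [lambda t: 0.9, lambda t: 0.1, lambda t: 0.5]
weights = [0.25, 0.25, 0.5]
random.seed(0)
bits = [random.getrandbits(1) for _ in range(200)]

m = mixture_log_loss(experts, weights, bits)
for e, w0 in zip(experts, weights):
    assert m <= expert_log_loss(e, bits) + math.log2(1.0 / w0) + 1e-9
print("mixture is within log2(1/w_i) bits of every expert")
```

Nothing in the bound depends on where the bit sequence came from, which is the point: it holds for adversarial or “oracle-generated” input just as well as for computable input.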
I thought it was implied that SI cannot access any boxes or push any buttons, because it’s a mathematical abstraction. But I see that you mean “an AI agent with SI”.
So, the precise statement would be that whatever weight you assign each program, if
f(n) := sum of the weights of all programs of length n
F(n) := sum of f(k) for all k < n
then the limit as n goes to infinity of F(n) must be finite (if the weights are normalized, the limit must be 1), and thus the limit as n goes to infinity of f(n) must be zero. In fact, if the limit of f(n)/f(n+1) exists, it must be greater than or equal to one (otherwise the sum would diverge, by the ratio test). Since there are 2^n programs of size n, that means that if
a(n) := average weight of programs of size n
then the limit as n goes to infinity of a(n)/a(n+1), if it exists, must be greater than or equal to 2.
We are therefore highly constrained as to the average weight of programs of size n, but less so as to the maximum weight. For instance, we could have a weighting system such that for all n, there is some program of size n with weight (.999^n)/10000. We could even have a weighting system such that for all k, there is some n > k such that there is a program of length n and weight (.9999^log(log(log n)))/10.
Do I have that correct?
PS I believe that you meant “monotonic”, rather than “monotonous” :)
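The constraints in this comment can be checked exactly for one concrete weighting — my choice of example is the length-sorted scheme where each of the 2^n programs of length n gets weight 2^-(2n+1):

```python
from fractions import Fraction

# Exact check of the constraints above for one concrete weighting (an
# illustrative example): each of the 2**n programs of length n gets
# weight 2**-(2n+1), so f(n) = 2**n * 2**-(2n+1) = 2**-(n+1).

def f(n):
    return Fraction(2) ** n * Fraction(1, 2) ** (2 * n + 1)

F = sum(f(k) for k in range(60))
assert F == 1 - Fraction(1, 2) ** 60      # F(n) -> 1: properly normalized

assert f(10) / f(11) == 2                 # f(n)/f(n+1) = 2 >= 1

def a(n):                                 # average weight at length n
    return f(n) / 2 ** n

assert a(10) / a(11) == 4                 # = 4 >= 2, as derived above
print("constraints satisfied for this weighting")
```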
No, but something weaker is true. Take any probability P, e.g. 1/1000. No more than 1/P (e.g. 1000) programs can carry this or higher probability, and they’re all short in the trivial sense of being shorter in length than some fixed L, depending only on P and your formalism. So in a sense, the likely programs are all short, and all the really long programs are really unlikely.
Thanks, I guess my wording wasn’t precise enough.
For every program T, all but a finite number of programs longer than T have lower probabilities.
Yeah, that’s correct. Note that it’s also correct if you remove “longer than T”: “For every program T, all but a finite number of other programs have lower probabilities”. Which is obviously true because you can’t have an infinite number of programs whose probability is above some threshold.
Sure, now that you’ve pointed it out I see that my conjecture was trivially true :)
I guess on the same line of thought you can informally deconstruct Occam’s razor:
every finite-complexity state of affairs can be equivalently explained by longer and longer hypotheses;
one of them must be true;
for any explanation, the list of longer explanations is infinite, but it only grabs the finite remaining portion of probability mass;
so, apart from a finite number of instances, longer explanations must have lower and lower probabilities.
Might be worth a post to informally deduce Occam’s razor.
That’s why I think SI doesn’t work very well as justification/support for Occam’s Razor.
Because as an explanation it is too simple or because you feel there should be a deeper reason?
Because the prediction itself doesn’t use the shortness of programs in any essential way, and the shortness only matters because of the unavoidable nature of the infinite formalism.
Yeah, people get quite deeply confused by references to Occam’s razor, “length of programs”, and other such stuff.
If you only consider the programs of length N bits, approximately 2^(N-M) programs will have an output sequence forever identical to that of a program that can be minimally written in M<N bits.
Another confusion is with regard to how Solomonoff Induction would learn a program of length ~N from getting N bits of input. No, it wouldn’t: you can write, in a small number of bits, a program that outputs BB(20) zeroes and then a string of ones, and until you read over BB(20) input bits you won’t learn a thing.
I’m confused. I thought SI assigns equal probabilities to every program in the infinite space, and then simpler predictions turn out to be more likely because there are many more programs that make simple predictions than there are programs that make complicated predictions.
As I understand it, SI works even better in the finite case than in the infinite case, because you don’t have to worry about infinite sums and whatnot. (Provided the domain of the finite case was something like “All the programs of length N or less” for some finite N.)
Yep, you’re confused. You can’t possibly assign equal probability to every program in an infinite space, because then the sum of all probabilities will diverge (go to infinity), and you need it to sum up to 1. What you do is assign probabilities in a way that the infinite sum will converge to 1, for example by lining all programs up in an infinite list: M1, M2, M3… and assigning probabilities 1⁄2, 1⁄4, 1⁄8, 1⁄16, 1⁄32 and so on to infinity. That’s essentially what the universal prior does, using a length-sorted infinite list of all possible programs (I’m skipping a few small technical difficulties).
The simpler predictions are typically more likely because of the one shortest program that generates them, not the many longer programs that also do—they do help, but because of the exponential decay they usually don’t amount to much. The shortest program tends to dominate.
Except I just lied a little in what I said. SI prediction doesn’t care if your program is short or long. It cares how early it comes on the infinite list M1,M2,M3… of all programs. When you arrange this list by program length, “earliest in the list” and “shortest” are basically the same. And in the usual infinite case, in a sense you cannot NOT arrange the list by program length. You can give your favourite very long programs some of the first spots on the list, and so high probabilities, but there’s an infinity of long programs out there, and you can’t nudge the infinity to where you want to go. But in the finite thought experiment, you’re free to arrange your list any way you want (and you don’t even need to have probabilities go down exponentially either). So you can just piss on Occam’s Razor and strictly reverse it by having longer programs always be more likely than shorter ones, if you feel like it. And SI prediction will still work.
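The two numeric facts in this comment can be illustrated in a few lines (a sketch, with list positions standing in for programs): the weights 1⁄2, 1⁄4, 1⁄8, … sum to 1, and shuffling which program gets which weight changes nothing about the total.

```python
import random

# Sketch of the "infinite list" picture, truncated to 50 positions:
# program M_i gets weight 2^-i, and the partial sums approach 1 from below.
weights = [2.0 ** -(i + 1) for i in range(50)]
print(sum(weights))  # within floating-point error of 1

# Rearranging which program gets which weight (as in the finite thought
# experiment) leaves the total untouched; only the assignment changes.
random.shuffle(weights)
print(sum(weights))  # same total
```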
That’s what I used to think about how SI worked. Then I read this explanation which seems to make sense to me, and seems to justify my view of things.
Specifically, this explanation shows how Solomonoff Induction doesn’t need to assume a prior that some hypotheses are more likely than others; it can weight them all equally, and then prove that simpler hypotheses have more “copies” and thus simpler predictions are more likely.
In the infinite case there is still the problem of how to get an infinite number of equally-weighted hypotheses to sum to probability 1. But this is what Measure Theory does, I believe. (I’m not a mathematician, but this is what I’ve read and been told.) So it isn’t a problem, any more than math is a problem.
But if the space was finite, as I said, then Solomonoff Induction wouldn’t even have that problem!
So, I no longer think I’m confused. I think that your understanding of SI portrays it as more arbitrary than it actually is. SI isn’t just a weighting by simplicity! It is a proof that weighting by simplicity is justified, given certain premises! (namely, that the space of hypotheses is the space of computable programs in a certain language, and that they are all equally likely, modulo evidence.)
I am a mathematician, and yes, measure theory can do this, but it will require you to make many (in fact infinitely many) arbitrary choices along the way.
Huh. Okay, thanks for the info. This is troubling, because I have long held out hope that SI would not turn out to be arbitrary. Could you direct me to where I can learn more about this arbitrariness in measure theory?
You can say that the input tape (where the program gets loaded from) is an infinite sequence of random bits. A ‘program of length l’ is a piece of code which sets the machine up so that after the first l bits, subsequent bits do not matter. Every short program thus also stands for a set of longer programs that extend it.
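That setup can be simulated directly (a Monte Carlo sketch under the stated model, using an arbitrary 3-bit “program”): the chance that a random tape begins with a fixed l-bit program is 2^-l, and every longer tape sharing that prefix counts as running the same program.

```python
import random

# Monte Carlo sketch of "fair coinflips on the input tape": estimate the
# probability that a random tape starts with a fixed l-bit program.
random.seed(0)
program = [1, 0, 1]  # a fixed 3-bit program
l = len(program)
trials = 100_000
hits = sum(
    all(random.getrandbits(1) == b for b in program)
    for _ in range(trials)
)
print(hits / trials, 2 ** -l)  # empirical frequency close to 0.125
```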
You can. But is there any reason to think that this models well the inherent complexity of programs? Do we ever execute programs by choosing randomly bits that constitute them? We have algorithms that utilize randomness, to be sure, but UTMs are not, normally, such algorithms. I appreciate that “choosing program bits randomly” is a simple, short, succinct way of getting to 2^-length(p), but I don’t see a reason to think it’s natural in any interesting sense.
Well, indeed. There’s nothing natural about that whole silly notion. The laws of physics in our universe are highly symmetric (and the fundamental laws are even time-reversible, when mirrored and charge-flipped); what works for guessing the laws of physics is an assumption of symmetry, something that S.I. most dramatically does not make.
Furthermore, incorrect theories in S.I. can take a very long time to rule out, because it is very easy to set up, in very few bits, a long-running busy beaver that changes the output after many bits are outputted. For an S.I.-driven agent, there are various arbitrary doomsdays looming with probabilities greater than 1/1024 (extra length under 10 bits).
Furthermore, in the total absence of external input, an ideal inductor must assume it is not within a universe whose laws of physics fail to support universal computation. This rules out the vast majority of the input tapes, which are given non-zero prior in S.I.
Do you think a better agent should assign low probabilities to such arbitrary doomsdays in the first place? What would be a good general rule for that?
Having thought about it some more, I feel that a good induction method would start with p_doomsday(time) being a smooth function, and it would have to acquire great many bits of information-theoretic data before that function would grow very sharp and specific peaks.
Meanwhile, S.I. starts with an enormous set of weird preconceptions due to the use of some one-dimensional Turing machine, and consequently produces really, really bad priors. The badness of said priors is somewhat masked by very-easy-for-laymen-to-misinterpret optimality proofs.
I think that if you plot a sane agent’s probability of doomsday over time, it won’t be some really weirdly shaped curve with incredibly sharp (Planck-time-sharp) peaks at various points highly specific to the internal details of the agent. Really, it’s as if someone with a weird number-fear synesthesia looked at the list of years since the alleged birth of Christ, picked the scariest numbers, and then prophesied doomsday in those years. It is clearly completely insane.
Just checking—you do know the formulation of SI that uses a universal prefix Turing machine provided with fair coinflips on the input tape, right?
I don’t understand the question. I thought that’s what we were talking about. Am I missing something?
To be more explicit: setting up a UTM with random bits on the input tape is a natural-seeming way of getting the probability distribution over programs (2^-length(p)) that goes into the Solomonoff prior. But as I’m trying to say in the comment you replied to, I don’t think it’s really natural at all. And of course SI doesn’t need this particular distribution in order to be effective at its job.
Yeah, sorry. It’s me who was missing something =)
Only if you insist on assigning non-zero probability to each individual program.
Is that responding to the “You can’t possibly assign equal probability to every program in an infinite space” part of the parent comment? If you assign equal probability to every program in an infinite space, then (assuming “probability” refers to a real number, and not some other mathematical object such as an infinitesimal) either that equal probability is non-zero, in which case the sum is infinite, or the equal probability is zero, in which case the sum is zero. Remember, if all the probabilities are equal, then either all of them are equal to zero, or none are, and it’s generally agreed that an infinite number of zeros add up to zero.
Actually 0·∞ is considered an indeterminate form. For example the interval from 0 to 1 has length 1 even though it’s composed of infinitely many points each of which has length 0.
See my reply to asr for links to more technical explanations.
If you assign an equal probability to every program, and that probability is zero, then the sum, the total probability, is zero. If you assign an equal nonzero probability, the total is infinite.
If you want to have a valid probability distribution, you can’t assign equal probabilities.
Well, if you’re willing to relax the third axiom of probability to finite additivity you can. Alternatively, use an uncountable state space, e.g., the space of all ultra-filters on Turing machines. Also, while we’re on the subject, there are numerous types of quasi-probability distributions worth considering.
Ah, fair enough. I had been assuming only real probabilities.
See this comment for a justification of why shorter programs are more likely.
The argument is taken from this paper.
Err, no, I wouldn’t call it a justification. It’s more like a technical setup of why shorter programs turn out to be more likely (because the padding happens to include them in this way). This is not a profound thing. It’s just one possible way to get where you want, which is to assign 2^-length(p) to p. Note that it’s brittle in the sense that you need support from your formalism for that. You need your notion of what a program is to allow adding arbitrary bits to the end without affecting its validity or what it does. That’s not a very interesting thing to do: why would you even consider all those programs with dummy padding as real programs, given that whatever’s running them can probably easily rule them out? E.g. modern compilers will usually reject your program if you add some junk at the end.
If you restrict yourself to a finite program length, you could still only count minimal programs, those w/o padding, and give them appropriate weights (the way they do in the paper immediately afterwards for the infinite case, the standard way to set it up) and end up with the same thing. There’s no profound reason I can think of to say “oh, for fixed bound L on program length we’re just only going to look at programs of length exactly L, including padded programs, but for unrestricted length we’re throwing out padded programs and only looking at minimal ones”. The real reason is that you want to sum up to 1, and you want to prefer short programs while doing so.
Now, something closer to a justification is when they point out right afterwards that 2^-length(p) is what you get if you choose program bits randomly one by one. But as justifications go, I think it’s a pretty weak one, because no one really computes anything in any way that looks like that. And more importantly, as I said above, it just doesn’t matter for SI prediction. You don’t have to use 2^-length(p) for SI prediction to work. You just need to have it all sum up to 1.
You kind of missed my point. I provided it as a technical justification why the exact formula 2^-length(p) makes sense. As you said, “It’s one possible way to get where you want”. I know that 2^-length(p) is not the only possible option, but it’s the most natural one given that we know nothing about the input.
The model of computation is a “Universal Turing machine provided with completely random noise” (i.e. fair coin flips on the input tape). It’s a mathematical formalism; it has nothing to do with modern compilers.
I didn’t miss your point; instead, I’ve tried to explain your point to you, and perhaps didn’t do a really good job. You’re confusing two different things which are both spelled out in the paper you referenced on page 1120, but in your comment you linked me to you’ve only recounted the first one, perhaps thinking that the second one is the same thing, which it is not.
The first thing is setting up probabilities over programs of a given fixed length L, so that padding helps us take into account also shorter programs. In the paper, this is the paragraph starting “This sum is still only over programs of length ℓ...”. This is what you talked about in your other comment, and this is what I talked about in my first paragraph of the reply above.
The second thing is a way of setting up probabilities over programs of any length without bound, using a UTM-with-random-noise model as justification for the formula. This is an inherently different way of assigning probabilities, where in particular we do not use padding, but on the contrary only take into account minimal programs. This is what the paper talks about in the next two paragraphs, the second of which, starting “This is a highly technical explanation...” explains the UTM-with-random-noise. I talk about this second thing in the third paragraph of my reply above.
Hopefully I made it clearer. If you still think I missed your point, let’s leave it at that.
+1. Hadn’t thought of that.
Unless I have access to the right kind of uncomputable oracle. Which seems to be not totally impossible if we’re living in an uncomputable universe.
If you’re computable and interacting with a world that contains an uncomputable oracle, and SI is in the same position, I don’t really see what advantage you have. After all, your algorithm can be found somewhere within SI’s mixture. It depends on the game setup though. We discussed some setups here.
What do you mean, precisely, by “SI is in the same position”?
If I understand correctly, SI can be “upgraded” by changing the underlying prior.
So if we have strong reasons to suspect that we, humans, have access to a halting oracle, then we should try to build AI reasoning that approximates SI plus a prior defined over the universal Turing machine enhanced with a halting oracle.
If we don’t (as is the case at the moment), then we just build AI that approximates SI + the Universal prior.
Either way, there are no guessing games in which we, on average, are able to beat the AI, as long as the approximation is good enough.
I mean that the computable part of you is facing the decision problem “which buttons on the oracle black box do I press, and how do I use the answers on the screen when choosing my next action?” and SI is facing the same decision problem. (Unless of course you want to define the problem so that only you can press buttons on the black box, and the SI agent by definition can’t. That would seem unfair to me, though.)
Given the above, in some game setups (though probably not all) I would expect SI to beat you, just like it beats you in the log-score game even when the input is uncomputable and comes from a level 100 oracle or whatever. The relevant fact is not whether the input is computable, but that SI is pretty much a multiplicative weights mixture of all possible computable “experts” which includes you, and there are many situations where a multiplicative weights mixture provably beats any expert on any input, up to a constant etc. etc.
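That multiplicative-weights claim can be illustrated with a small sketch (the three “experts” here are invented constant predictors, not full programs): a Bayes mixture’s cumulative log loss exceeds the best expert’s by at most log(1/prior weight), on any bit sequence.

```python
import math
import random

# Sketch: Bayes mixture of three toy "experts" predicting P(next bit = 1).
# Its cumulative log loss stays within log(1/prior weight) = log 3 of the
# best expert's on ANY sequence; the bit source here is just for demo.
random.seed(1)
experts = [
    lambda history: 0.5,  # uniform guesser
    lambda history: 0.9,  # biased toward 1
    lambda history: 0.1,  # biased toward 0
]
weights = [1 / 3] * 3     # prior weights

bits = [random.random() < 0.85 for _ in range(300)]  # a mostly-1s sequence

mixture_loss = 0.0
expert_loss = [0.0] * 3
history = []
for bit in bits:
    preds = [e(history) for e in experts]
    mix_p = sum(w * p for w, p in zip(weights, preds)) / sum(weights)
    mixture_loss += -math.log(mix_p if bit else 1 - mix_p)
    for i, p in enumerate(preds):
        like = p if bit else 1 - p
        expert_loss[i] += -math.log(like)
        weights[i] *= like  # multiplicative (Bayes) update
    history.append(bit)

regret = mixture_loss - min(expert_loss)
print(regret, math.log(3))  # regret hugs the log(3) bound from below
```

The bound is sequence-independent: replace `bits` with anything, even bits read from an uncomputable oracle, and the regret guarantee is unchanged.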
I thought it was implied that SI cannot access any boxes or push any buttons, because it’s a mathematical abstraction. But I see that you mean “an AI agent with SI”.
So, the precise statement would be that whatever weight you assign each program, if

f(n) := sum of the weights of all programs of length n
F(n) := sum of f(k) for all k < n

then the limit of F(n) as n goes to infinity must be finite (if the weights are normalized, the limit must be 1), and thus the limit of f(n) as n goes to infinity must be zero. In fact, by the ratio test, the limit of f(n)/f(n+1) must be greater than or equal to one. Since there are 2^n programs of size n, that means that if

a(n) := average weight of programs of size n

then the limit of a(n)/a(n+1) as n goes to infinity must be greater than or equal to 2.
We are therefore highly constrained as far as the average weight of programs of size n, but less so for the maximum weight. For instance, we could have a weighting system such that for all n, there is some program of size n with weight (.999^n)/10000. We could even have a weighting system such that for all k, there is some n > k such that there is a program of length n with weight (.9999^log(log(log n)))/10.
Do I have that correct?
PS I believe that you meant “monotonic”, rather than “monotonous” :)
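The first of those weighting examples can be sanity-checked numerically (an illustrative sketch): giving one program of each length n the weight (.999^n)/10000 is just a geometric series, and it uses up only a small, finite amount of probability mass.

```python
# Numeric check: one program per length n gets weight (.999^n)/10000.
# The series sum over n >= 1 of .999^n equals .999/.001 = 999, so the
# total mass used is 999/10000 = 0.0999, leaving plenty of room to
# distribute the remaining probability however we like.
total = sum(0.999 ** n / 10000 for n in range(1, 100_000))
print(total)  # approaches 0.0999
```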
The limit is >=1 if it exists, but it doesn’t have to exist. And the lim inf can be arbitrarily close to zero.
Ditto.