Yep, that’s correct. Basically, short programs suffer from there only being a finite supply of them. The actual distribution of probabilities to programs doesn’t matter that much.
That’s why I think SI doesn’t work very well as justification/support for Occam’s Razor.
Because as an explanation it is too simple or because you feel there should be a deeper reason?
Because the prediction itself doesn’t use the shortness of programs in any essential way; the shortness only matters because the formalism unavoidably has to deal with an infinite set of programs.
A thought experiment: suppose we’ve determined that the Universe is finite (possible) and worked out an upper bound M on its informational capacity, so that we know programs longer than M will never be run in this universe. We might want to rework SI using a very large finite probability space of programs rather than an infinite one. Given a valid prior distribution over this finite space, SI itself, as a prediction procedure, will continue to work, and its promise of correctness will still be there in an updated fashion. But for SI to work there will be no reason whatsoever to assign greater probabilities to shorter programs in this large finite space. A uniform distribution will work just as well, and so will one that gives really large weight to the longest programs and systematically biases against shorter ones. It will be all the same to SI. Clearly it doesn’t have anything to do with Occam’s Razor in this thought experiment. The connection, then, is forced upon us by the mere fact that we’re using an infinite set of finite-length algorithms, and nothing else; it isn’t anything about SI itself: what we predict, how we predict it, etc.
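For concreteness, a toy sketch of that thought experiment in Python (everything here is made up for illustration: the four “programs”, their notional lengths, and the deliberately anti-Occam prior). The prediction machinery never consults program length; it only needs the true program to get non-zero prior weight.

```python
# Finite hypothesis space: each "program" is (name, notional length in bits, bit generator).
programs = [
    ("all zeros",        2, lambda n: 0),
    ("alternating 0101", 4, lambda n: n % 2),
    ("ones after 100",   7, lambda n: 1 if n >= 100 else 0),
    ("all ones",         3, lambda n: 1),
]

# Deliberately anti-Occam prior: longer programs get exponentially MORE weight.
weights = [2.0 ** length for _, length, _ in programs]
prior = [w / sum(weights) for w in weights]

def posterior(data):
    # A deterministic program either reproduces the observed bits exactly or is ruled out.
    post = [p * (1.0 if all(f(i) == b for i, b in enumerate(data)) else 0.0)
            for (_, _, f), p in zip(programs, prior)]
    z = sum(post)
    return [p / z for p in post]

# Observe 10 bits produced by the "alternating" program: the posterior lands on it anyway,
# even though the prior was biased against shorter programs.
data = [n % 2 for n in range(10)]
for (name, _, _), p in zip(programs, posterior(data)):
    print(name, round(p, 3))
```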
In this case I feel more inclined to bite the bullet. After all, Solomonoff-as-it-is is an application of the basic requirement of probability: “use all your available information, and nothing else”.
When the totality of what we know doesn’t constrain us to a finite universe, Occam emerges as a trivial consequence of allowing an infinite space of explanations.
But if the totality of what we know does indeed produce a finite model, and induction works from any prior, then I say that the razor simply doesn’t apply.
After all, I prefer to derive my epistemology from what works rather than the other way around.
Sure, if that’s the way you like it, but for me that just doesn’t work. Occam’s Razor is a principle that is supposed to help me think better here and now; to decide that its justification rests on whether, say, the Universe is discrete at Planck scale or not, when this choice has to be compatible with QM and Newtonian mechanics at larger scales and therefore doesn’t change anything in practical terms in my life here and now—that seems absurd. To me, it’s clear evidence that this is no justification at all.
Ah, the joy of mass downvoting… Sigh.
Yeah, people get quite deeply confused by references to Occam’s razor, “length of programs”, and other such stuff.
If you only consider the programs of length N bits, approximately 2^(N-M) programs will have an output sequence forever identical to that of a program that can be minimally written in M<N bits.
Another confusion is about whether Solomonoff Induction would learn a program of length ~N from getting N bits of input. It wouldn’t—you can write, in a small number of bits, a program that outputs BB(20) zeroes followed by a string of ones, and until you have read more than BB(20) input bits you won’t learn a thing.
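The first of these points is easy to check on a toy encoding in which everything after the minimal program is ignored padding (the 4-bit “program” and the length N below are arbitrary):

```python
from itertools import product

def count_equivalent(minimal_program, N):
    # Toy padding convention: an N-bit string behaves identically to the minimal program
    # iff it starts with it; the trailing bits are ignored.
    return sum("".join(bits).startswith(minimal_program)
               for bits in product("01", repeat=N))

p, N = "1101", 10
print(count_equivalent(p, N), 2 ** (N - len(p)))  # both print 64
```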
I’m confused. I thought SI assigns equal probabilities to every program in the infinite space, and then simpler predictions turn out to be more likely because there are many more programs that make simple predictions than there are programs that make complicated predictions.
As I understand it, SI works even better in the finite case than in the infinite case, because you don’t have to worry about infinite sums and whatnot. (Provided the domain of the finite case was something like “All the programs of length N or less” for some finite N.)
Yep, you’re confused. You can’t possibly assign equal probability to every program in an infinite space, because then the sum of all probabilities will diverge (go to infinity), and you need it to sum up to 1. What you do is assign probabilities in a way that the infinite sum will converge to 1, for example by lining all programs up in an infinite list: M1, M2, M3… and assigning probabilities 1⁄2, 1⁄4, 1⁄8, 1⁄16, 1⁄32 and so on to infinity. That’s essentially what the universal prior does, using a length-sorted infinite list of all possible programs (I’m skipping a few small technical difficulties).
The simpler predictions are typically more likely because of the one shortest program that generates them, not the many longer programs that also do—they do help, but because of the exponential decay they usually don’t amount to much. The shortest program tends to dominate.
Except I just lied a little in what I said. SI prediction doesn’t care if your program is short or long. It cares how early it comes on the infinite list M1,M2,M3… of all programs. When you arrange this list by program length, “earliest in the list” and “shortest” are basically the same. And in the usual infinite case, in a sense you cannot NOT arrange the list by program length. You can give your favourite very long programs some of the first spots on the list, and so high probabilities, but there’s an infinity of long programs out there, and you can’t nudge the infinity to where you want to go. But in the finite thought experiment, you’re free to arrange your list any way you want (and you don’t even need to have probabilities go down exponentially either). So you can just piss on Occam’s Razor and strictly reverse it by having longer programs always be more likely than shorter ones, if you feel like it. And SI prediction will still work.
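To put a number on how strongly the earliest consistent program dominates under that 1⁄2, 1⁄4, 1⁄8, ... assignment: suppose the programs at positions 5, 17 and 90 in the list (made-up positions) are the ones still consistent with the data. Renormalizing their weights gives almost all of the posterior to the earliest one:

```python
consistent_positions = [5, 17, 90]  # made-up positions in the list M1, M2, M3, ...
weights = [2.0 ** -k for k in consistent_positions]
total = sum(weights)
for k, w in zip(consistent_positions, weights):
    print(f"M{k}: posterior {w / total:.6f}")
# M5 ends up with ~0.999756 of the mass; the later (longer) programs barely matter.
```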
That’s what I used to think about how SI worked. Then I read this explanation which seems to make sense to me, and seems to justify my view of things.
Specifically, this explanation shows how Solomonoff Induction doesn’t need to assume a prior that some hypotheses are more likely than others; it can weight them all equally, and then prove that simpler hypotheses have more “copies” and thus simpler predictions are more likely.
In the infinite case there is still the problem of how to get an infinite number of equally-weighted hypotheses to sum to probability 1. But this is what Measure Theory does, I believe. (I’m not a mathematician, but this is what I’ve read and been told.) So it isn’t a problem, any more than math is a problem.
But if the space was finite, as I said, then Solomonoff Induction wouldn’t even have that problem!
So, I no longer think I’m confused. I think that your understanding of SI portrays it as more arbitrary than it actually is. SI isn’t just a weighting by simplicity! It is a proof that weighting by simplicity is justified, given certain premises! (namely, that the space of hypotheses is the space of computable programs in a certain language, and that they are all equally likely, modulo evidence.)
I am a mathematician, and yes measure theory can do this, but it will require you to make many (in fact infinitely many) arbitrary choices along the way.
Huh. Okay, thanks for the info. This is troubling, because I have long held out hope that SI would not turn out to be arbitrary. Could you direct me to where I can learn more about this arbitrariness in measure theory?
You can say that the input tape (where the program gets loaded from) is an infinite sequence of random bits. A ‘program of length l’ is a piece of code which sets the machine up so that after the first l bits, the subsequent bits do not matter. Every short program thus corresponds to a whole set of longer programs.
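Concretely, under that picture the probability of “running program p” is just the probability that a fair-coin tape begins with p, i.e. 2^-length(p). A quick Monte Carlo sanity check (the 6-bit p is arbitrary):

```python
import random

p = [1, 0, 1, 1, 0, 0]  # an arbitrary 6-bit "program"
trials = 200_000
hits = sum(all(random.getrandbits(1) == bit for bit in p) for _ in range(trials))
print(hits / trials, 2.0 ** -len(p))  # both come out near 0.015625
```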
You can. But is there any reason to think that this models the inherent complexity of programs well? Do we ever execute programs by randomly choosing the bits that constitute them? We have algorithms that utilize randomness, to be sure, but UTMs are not, normally, such algorithms. I appreciate that “choosing program bits randomly” is a simple, short, succinct way of getting to 2^-length(p), but I don’t see a reason to think it’s natural in any interesting sense.
Well, indeed. There’s nothing natural about that whole silly notion. The laws of physics in our universe are highly symmetric (and the fundamental laws are even time-reversible when mirrored and charge-flipped); what works for guessing the laws of physics is assuming symmetries, something that S.I. most dramatically does not do.
Furthermore, incorrect theories in S.I. can take a very long time to rule out, because it is very easy to set up—in very few bits—a long-running busy beaver that changes the output after many bits are outputted. For an S.I.-driven agent, there are various arbitrary doomsdays looming with probabilities greater than 1/1024 (extra length of less than 10 bits).
Furthermore, in total absence of external input, an ideal inductor must assume it is not within a universe where the laws of physics do not support universal computation. This rules out the vast majority of the input tapes, which are given non-zero prior in S.I.
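Where the 1/1024 figure comes from, assuming a 2^-length prior: a variant hypothesis that costs k extra bits keeps a 2^-k fraction of the shorter hypothesis’s prior weight, so anything expressible in fewer than 10 extra bits stays above 1/1024 of it.

```python
for extra_bits in range(1, 11):
    print(extra_bits, 2.0 ** -extra_bits)
# extra_bits = 10 gives 2^-10 = 1/1024 ≈ 0.000977
```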
Do you think a better agent should assign low probabilities to such arbitrary doomsdays in the first place? What would be a good general rule for that?
Having thought about it some more, I feel that a good induction method would start with p_doomsday(time) being a smooth function, and it would have to acquire a great many bits of information-theoretic data before that function developed very sharp and specific peaks.
Meanwhile, S.I. starts with an enormous set of weird preconceptions due to the use of some one dimensional Turing machine, and consequently produces really really bad priors. The badness of said priors is somewhat masked by very-easy-for-laymen-to-misinterpret optimality proofs.
I think that if you plot a sane agent’s probability of doomsday over time, it won’t be some really weirdly shaped curve with incredibly sharp (Planck-time sharp) peaks at various points highly specific to the internal details of the agent. Really, it’s as if someone with a weird number-fear synesthesia looked at the list of years since the alleged birth of Christ, picked the scariest numbers, and then prophesied doomsday in those years. It is clearly completely insane.
Just checking—you do know the formulation of SI that uses a universal prefix Turing machine provided with fair coinflips on the input tape, right?
I don’t understand the question. I thought that’s what we were talking about. Am I missing something?
To be more explicit: setting up a UTM with random bits on the input tape is a natural-seeming way of getting the probability distribution over programs (2^-length(p)) that goes into the Solomonoff prior. But as I’m trying to say in the comment you replied to, I don’t think it’s really natural at all. And of course SI doesn’t need this particular distribution in order to be effective at its job.
Yeah, sorry. It’s me who was missing something =)
Only if you insist on assigning non-zero probability to each individual program.
Is that responding to the “You can’t possibly assign equal probability to every program in an infinite space” part of the parent comment? If you assign equal probability to every program in an infinite space, then (assuming “probability” refers to a real number, and not some other mathematical object such as an infinitesimal) either that equal probability is non-zero, in which case the sum is infinite, or the equal probability is zero, in which case the sum is zero. Remember, if all the probabilities are equal, then either all of them are equal to zero, or none are, and it’s generally agreed that an infinite number of zeros add up to zero.
Actually 0·∞ is considered an indeterminate form. For example the interval from 0 to 1 has length 1 even though it’s composed of infinitely many points each of which has length 0.
See my reply to asr for links to more technical explanations.
If you assign an equal probability to every program, and that probability is zero, then the sum, the total probability, is zero. If you assign an equal nonzero probability, the total is infinite.
If you want to have a valid probability distribution, you can’t assign equal probabilities.
Well, if you’re willing to relax the third axiom of probability to finite additivity you can. Alternatively, use an uncountable state space, e.g., the space of all ultra-filters on Turing machines. Also, while we’re on the subject, there are numerous types of quasi-probability distributions worth considering.
Ah, fair enough. I had been assuming only real probabilities.
See this comment for a justification of why shorter programs are more likely.
The argument is taken from this paper.
Err, no, I wouldn’t call it a justification. It’s more like a technical setup of why shorter programs turn out to be more likely (because the padding happens to include them in this way). This is not a profound thing. It’s just one possible way to get where you want, which is to assign 2^-length(p) to p. Note that it’s brittle in the sense that you need support from your formalism for that. You need your notion of what a program is to allow adding arbitrary bits to the end without affecting its validity or what it does. That’s not a very interesting thing to do: why would you even consider all those programs with dummy padding as real programs, given that whatever’s running them can probably easily rule them out? E.g. modern compilers will usually reject your program if you add some junk at the end.
If you restrict yourself to a finite program length, you could still count only minimal programs, those without padding, and give them appropriate weights (the way they do in the paper immediately afterwards for the infinite case, the standard way to set it up) and end up with the same thing. There’s no profound reason I can think of to say “oh, for a fixed bound L on program length we’re only going to look at programs of length exactly L, including padded programs, but for unrestricted length we’re throwing out padded programs and only looking at minimal ones”. The real reason is that you want to sum up to 1, and you want to prefer short programs while doing so.
Now, something closer to a justification is when they point out right afterwards that 2^-length(p) is what you get if you randomly choose program bits one by one. But as justifications go, I think it’s a pretty weak one, because no one really computes anything in any way that looks like that. And more importantly, as I said above, it just doesn’t matter for SI prediction. You don’t have to use 2^-length(p) for SI prediction to work. You just need to have it all sum up to 1.
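For what it’s worth, the fixed-length-plus-padding construction from the first paragraph can be made explicit with a toy prefix-free set of “minimal programs” (the set and the fixed length L below are made up); the counting reproduces exactly the 2^-length(p) weights, which is the sense in which the padding is just one way of getting to that formula.

```python
from itertools import product

minimal = ["0", "10", "110", "1110"]  # a toy prefix-free set of "minimal programs"
L = 8                                 # fixed program length for the padded encoding

counts = {m: 0 for m in minimal}
for bits in product("01", repeat=L):
    s = "".join(bits)
    for m in minimal:
        if s.startswith(m):           # a padded length-L program "is" its minimal prefix
            counts[m] += 1
            break

for m in minimal:
    print(m, counts[m] / 2 ** L, 2.0 ** -len(m))  # the two columns agree: 2^-length(p)
```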
You kind of missed my point. I provided it as a technical justification of why the exact formula 2^-length(p) makes sense. As you said, “It’s one possible way to get where you want”. I know that 2^-length(p) is not the only possible option, but it’s the most natural one given that we know nothing about the input.
The model of computation is a “Universal Turing machine provided with completely random noise” (i.e. fair coin flips on the input tape). It’s a mathematical formalism; it has nothing to do with modern compilers.
I didn’t miss your point; instead, I’ve tried to explain your point to you, and perhaps didn’t do a very good job. You’re confusing two different things which are both spelled out in the paper you referenced on page 1120, but in the comment you linked me to you’ve only recounted the first one, perhaps thinking that the second one is the same thing, which it is not.
The first thing is setting up probabilities over programs of a given fixed length L, so that padding helps us also take shorter programs into account. In the paper, this is the paragraph starting “This sum is still only over programs of length ℓ...”. This is what you talked about in your other comment, and this is what I talked about in the first paragraph of my reply above.
The second thing is a way of setting up probabilities over programs of any length, without bound, using a UTM-with-random-noise model as justification for the formula. This is an inherently different way of assigning probabilities, where in particular we do not use padding, but on the contrary take into account only minimal programs. This is what the paper talks about in the next two paragraphs, the second of which, starting “This is a highly technical explanation...”, explains the UTM-with-random-noise model. I talk about this second thing in the third paragraph of my reply above.
Hopefully I made it clearer. If you still think I missed your point, let’s leave it at that.