When you use e raised to alpha times the expectation, is that similar to the use of an exponential distribution in something like AdaBoost, to take something like odds information and form a distribution over assorted weights?
I’m not really that familiar with AdaBoost. The planning model just reflects the fact that bounded agents don’t always take the maximum-expected-utility action. The higher alpha is, the stronger the bias towards good actions, but the more expensive the computation potentially becomes (e.g. if you use rejection sampling).
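As a concrete sketch of that planning model, here is the standard softmax/Boltzmann form, weighting each action by exp(alpha · EU). The utilities and function names are made up for illustration; it samples directly rather than by rejection, but the alpha trade-off is the same:

```python
import math
import random

def softmax_policy(expected_utilities, alpha):
    """Sample an action index with probability proportional to
    exp(alpha * EU).  alpha = 0 is uniformly random; as alpha grows the
    policy concentrates on the argmax, i.e. more bias towards good
    actions at more computational cost."""
    weights = [math.exp(alpha * eu) for eu in expected_utilities]
    r = random.random() * sum(weights)
    for action, w in enumerate(weights):
        r -= w
        if r <= 0:
            return action
    return len(weights) - 1  # guard against float rounding

random.seed(0)
counts = [0, 0, 0]
for _ in range(10_000):
    counts[softmax_policy([0.1, 0.5, 0.9], alpha=5.0)] += 1
# With alpha=5 the best action dominates; with alpha near 0 the three
# counts would be nearly equal.
```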
Since any realistic agent needs to be able to handle whatever its environment throws at it, it seems to follow that a realistic agent needs some resource-rational way to handle nonprovable nonhalting.
Ah, that makes sense! I think I see how “trading computational power for algorithmic information” makes sense in this framework.
And before I could scribble a damned thing, Calude went and solved it six months ago. The Halting Problem, I mean.
Cool. If I understand the result correctly, it’s that if you run a random program for some number of steps and it doesn’t halt, then (depending on the exact numbers) it will be unlikely to halt when run on a supercomputer either, because halting times have low density. So almost all programs halt quickly or run a really, really long time. Is this correct? This doesn’t quite let you approximate Chaitin’s omega, but it’s interesting that you can approximate a bounded variant of Chaitin’s omega (like what percentage of Turing machines halt when run for 10^50 steps). I can see how this would let you solve the halting problem well enough when you live in a bounded universe.
> Cool. If I understand the result correctly, it’s that if you run a random program for some number of steps and it doesn’t halt, then (depending on the exact numbers) it will be unlikely to halt when run on a supercomputer either, because halting times have low density. So almost all programs halt quickly or run a really, really long time. Is this correct?
Yep. Or, to put it a little tiny bit more accurately, you get a halting probability for your particular Turing machine, conditioned on the number of steps for which you’ve run it.
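A toy experiment in that spirit. The machine model here is an invented three-instruction counter machine, not the one Calude analyses; it just illustrates measuring "halts later, given that it survived this long":

```python
import random

def run(program, max_steps):
    """Interpret a toy 3-instruction program (inc / dec / jnz) with one
    accumulator.  Returns the step count if it halts within max_steps
    (by falling off the end of the program), else None."""
    pc, acc, steps = 0, 0, 0
    while 0 <= pc < len(program) and steps < max_steps:
        op, arg = program[pc]
        if op == "inc":
            acc, pc = acc + 1, pc + 1
        elif op == "dec":
            acc, pc = acc - 1, pc + 1
        else:  # "jnz": jump to arg if the accumulator is nonzero
            pc = arg if acc != 0 else pc + 1
        steps += 1
    return None if 0 <= pc < len(program) else steps

random.seed(1)
survivors, late_halts = 0, 0
for _ in range(2000):
    prog = [random.choice([("inc", 0), ("dec", 0), ("jnz", random.randrange(6))])
            for _ in range(6)]
    if run(prog, 50) is None:            # didn't halt within 50 steps...
        survivors += 1
        if run(prog, 5000) is not None:  # ...but does halt given 100x longer
            late_halts += 1
# The low-density intuition predicts late_halts / survivors is small.
```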
> This doesn’t quite let you approximate Chaitin’s omega
Well, technically you can approximate Chaitin’s omega from below, just as Chaitin himself describes in his book. You’ll just never be able to calculate more than finitely many digits.
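A minimal sketch of that lower approximation. The `halts_within` rule here is a fake stand-in for actually running a program, chosen only so the code is self-contained; a real construction would enumerate programs on a prefix-free universal machine, which is what keeps the total below 1:

```python
from fractions import Fraction

def halts_within(program_bits, t):
    """Toy stand-in for "run program p for t steps and see if it halts":
    pretend a program halts after a number of steps equal to its value
    as a binary numeral, and that programs starting with '0' never halt."""
    return program_bits.startswith("1") and int(program_bits, 2) <= t

def omega_lower_bound(t, max_len):
    """Sum 2^-|p| over every program of length <= max_len observed to
    halt within t steps.  The bound only ever grows as t grows, which
    is what "approximate from below" means."""
    total = Fraction(0)
    for n in range(1, max_len + 1):
        for k in range(2 ** n):
            p = format(k, f"0{n}b")
            if halts_within(p, t):
                total += Fraction(1, 2 ** n)
    return total

# Longer time budgets never decrease the bound.
bounds = [omega_lower_bound(t, 8) for t in (10, 100, 1000)]
```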
> I can see how this would let you solve the halting problem well enough when you live in a bounded universe.
Which we do ;-). You could run until you get past a desired threshold of probability (hypothesis testing), or you could use a bounded-rationality approach to vary your surety of nonhalting with your available processing power.
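A minimal sketch of that stopping rule. The step/budget interface and names are invented for illustration; a fancier version would pick `budget` from a Calude-style tail probability and the agent's available compute, rather than a fixed cutoff:

```python
def bounded_halting_verdict(step, budget):
    """Run a program (via its one-step callable) for at most `budget`
    steps.  Returns ("halted", n) if it halts at step n, otherwise
    ("probably-nonhalting", budget): a fallible verdict whose strength
    scales with how much computation you were willing to spend."""
    for n in range(1, budget + 1):
        if step():
            return ("halted", n)
    return ("probably-nonhalting", budget)

def make_counter(halt_at):
    """Toy 'program': halts exactly at step halt_at (never, if negative)."""
    state = {"t": 0}
    def step():
        state["t"] += 1
        return state["t"] == halt_at
    return step

v1 = bounded_halting_verdict(make_counter(7), budget=100)
v2 = bounded_halting_verdict(make_counter(-1), budget=100)
```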
But overall, it gives you a way to “reason around” the Halting Problem, which, when we apply it to the various paradoxes of self-reference… you can see where I’m going with this.
I wonder how he feels about that, because my current feeling about this is HOLY FUCKING SHIT. By GOD, my AIT textbook cannot get here fast enough.
You can always acquire it nautically to cut down on shipping time :P
Which one did you get? I went with Li and Vitanyi, and have enjoyed my first read-through.
The first one I bought was Calude’s, but it demands more background in pure maths than I have, so I went for Li and Vitanyi’s.