Maybe I was unclear. I don’t dismiss TL4 as wrong; I ignore it as untestable and therefore useless for justifying anything interesting, like how an AI ought to deal with tiny probabilities of enormous utilities.
He’s not saying that the leverage penalty might be correct because we might live in a certain type of Tegmark IV; he’s saying that the fact that the leverage penalty would be correct if we did live in Tegmark IV + some other assumptions shows (a) that it is a consistent decision procedure and¹ (b) that it is the sort of decision procedure that emerges reasonably naturally, and is thus a more reasonable hypothesis than if we didn’t know it comes up naturally like that.
It is possible that it is hard to communicate here since Eliezer is making analogies to model theory, and I would assume that you are not familiar with model theory.
¹ The word ‘and’ isn’t really correct here. It’s very likely that EY means at least one of (a) and (b), and possibly both.
Huh. This whole exchange makes me more certain that I am missing something crucial, but reading and dissecting it repeatedly does not seem to help. And apparently it’s not an issue of not knowing enough math. I guess the mental block I can’t get over is “why TL4?”. Or maybe “what other mental constructs could one use in place of TL4 to make a similar argument?”
Maybe paper-machine or someone else on #lesswrong will be able to clarify this.
Not sure why you are asking, but yes, I pointed some out 5 levels up. They clearly carry a complexity penalty, but I am not sure how large it is compared to TL4’s. At least I know that the “sloppy programmer” construct is finite (though possibly circular). I am not sure how to even begin to estimate the Kolmogorov complexity of “everything mathematically possible exists physically”. What Turing machine would output all possible mathematical structures?
What Turing machine would output all possible mathematical structures?
“Loop infinitely, incrementing count from 1: [Let steps be count. Iterate all legal programs until steps = 0 into prog: [Load submachine state from “cache tape”. Execute one step of prog, writing output to “output tape”. Save machine state onto “cache tape”. Decrement steps.] ]”
The output of every program is found on the output tape (albeit at intervals). I’m sure one could design the Turing machine so that it reordered the output tape with every piece of data written so that they’re in order too, if you want that. Or make it copypaste the entire output so far to the end of the tape, so that every number of evaluation steps for every Turing machine has its own tape location. Seemed a little wasteful though.
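Roughly the same idea as a runnable sketch, in Python rather than as a raw Turing machine. The program enumeration below is only a stand-in (program n just emits its own multiples, where a real dovetailer would decode n into a Turing machine description and interpret it), but the interleaving schedule is the one described above, and the point about Kolmogorov complexity is that the generator itself stays short even though its output eventually contains every program’s output.

    from itertools import count

    def program(n):
        # Stand-in for "the n-th legal program". A real dovetailer would decode n
        # into a Turing machine description and interpret it; here program n just
        # emits its own multiples so the scheduling can be watched running.
        for i in count(1):
            yield n * i

    def dovetail(rounds):
        # In round k, give each of programs 1..k one more step, appending whatever
        # it produces to a shared "output tape" (the interleaving trick from the
        # pseudocode above).
        tape = []
        paused = {}  # the "cache tape": program id -> its suspended state (a generator)
        for k in range(1, rounds + 1):
            for prog_id in range(1, k + 1):
                if prog_id not in paused:
                    paused[prog_id] = program(prog_id)          # first time: start it
                tape.append((prog_id, next(paused[prog_id])))   # one step, record its output
        return tape

    print(dovetail(5))
    # Every program's output lands on the tape eventually, just at widening intervals.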
It is possible that it is hard to communicate here since Eliezer is making analogies to model theory, and I would assume that you are not familiar with model theory.
You are right; I am out of my depth math-wise. Maybe that’s why I can’t see the relevance of an untestable theory to AI design.
Maybe that’s why I can’t see the relevance of an untestable theory to AI design.
It seems to be precisely the problem that is relevant to AI design: how does an expected utility maximising agent handle edge cases and infinitesimals, given logical uncertainty and bounded capabilities? If you get that wrong, then Rocks Fall and Everyone Dies. The relevance of any given theory of how such things can be modelled then comes down to its suitability for use in an AI design (or, conceivably, to the implications if an AI constructed and used said model).
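To put a number on why this matters, here is a toy sketch of the leverage penalty being discussed, as I understand it: the prior on a hypothesis gets scaled down by the number of people it claims you are in a position to affect, so an enormous payoff cannot win the expected utility calculation through sheer size alone. The figures below are placeholders (a genuinely enormous number would not even fit in a float).

    def naive_eu(prior, payoff_utility, cost):
        # Naive expected utility of accepting a deal with a tiny probability of an
        # enormous payoff: the huge payoff swamps everything else.
        return prior * payoff_utility - cost

    def leverage_penalized_eu(prior, payoff_utility, people_affected, cost):
        # Leverage-penalized version (as I understand the proposal under discussion):
        # divide the prior by the number of people the hypothesis says you can affect.
        return (prior / people_affected) * payoff_utility - cost

    N = 10 ** 30  # placeholder for the enormous number of people affected
    print(naive_eu(prior=1e-20, payoff_utility=N, cost=5))                                  # about 1e10: accept
    print(leverage_penalized_eu(prior=1e-20, payoff_utility=N, people_affected=N, cost=5))  # about -5: decline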
He’s not saying that the leverage penalty might be correct because we might live in a certain type of Tegmark IV; he’s saying that the fact that the leverage penalty would be correct if we did live in Tegmark IV + some other assumptions shows (a) that it is a consistent decision procedure and¹ (b) that it is the sort of decision procedure that emerges reasonably naturally, and is thus a more reasonable hypothesis than if we didn’t know it comes up naturally like that.
It is possible that it is hard to communicate here since Eliezer is making analogies to model theory, and I would assume that you are not familiar with model theory.
¹ The word ‘and’ isn’t really correct here. It’s very likely that EY means at least one of (a) and (b), and possibly both.
(Yep. More (a) than (b); it still feels pretty unnatural to me.)
Huh. This whole exchange makes me more certain that I am missing something crucial, but reading and dissecting it repeatedly does not seem to help. And apparently it’s not an issue of not knowing enough math. I guess the mental block I can’t get over is “why TL4?”. Or maybe “what other mental constructs could one use in place of TL4 to make a similar argument?”
Maybe paper-machine or someone else on #lesswrong will be able to clarify this.
Have you got one?
Not sure why you are asking, but yes, I pointed some out 5 levels up. They clearly carry a complexity penalty, but I am not sure how large it is compared to TL4’s. At least I know that the “sloppy programmer” construct is finite (though possibly circular). I am not sure how to even begin to estimate the Kolmogorov complexity of “everything mathematically possible exists physically”. What Turing machine would output all possible mathematical structures?
“Loop infinitely, incrementing count from 1: [Let steps be count. Iterate all legal programs until steps = 0 into prog: [Load submachine state from “cache tape”. Execute one step of prog, writing output to “output tape”. Save machine state onto “cache tape”. Decrement steps.] ]”

The output of every program is found on the output tape (albeit at intervals). I’m sure one could design the Turing machine so that it reordered the output tape with every piece of data written so that they’re in order too, if you want that. Or make it copypaste the entire output so far to the end of the tape, so that every number of evaluation steps for every Turing machine has its own tape location. Seemed a little wasteful though.
edit: THANK YOU GWERN. This is indeed what I was thinking of :D
Hey, don’t look at me. I’m with you on “Existence of TL4 is untestable, therefore boring.”
You are right; I am out of my depth math-wise. Maybe that’s why I can’t see the relevance of an untestable theory to AI design.
It seems to be precisely the problem that is relevant to AI design: how does an expected utility maximising agent handle edge cases and infinitesimals, given logical uncertainty and bounded capabilities? If you get that wrong, then Rocks Fall and Everyone Dies. The relevance of any given theory of how such things can be modelled then comes down to its suitability for use in an AI design (or, conceivably, to the implications if an AI constructed and used said model).
(Also yep.)