Even if the field X is confused, to confidently dismiss subtheory Y you must know something confidently about Y from within this confusion, such as that Y is inconsistent or nonreductionist or something.
Maybe I was unclear. I don’t dismiss Y=TL4 as wrong; I ignore it as untestable and therefore useless for justifying anything interesting, like how an AI ought to deal with tiny probabilities of enormous utilities. I agree that I am “arrogant” here, in the sense that I discount the opinion of a smart and popular MIT prof as misguided. The postulate “mathematical existence = physical existence” raises a category-error exception for me, since one is, in your words, logic, and the other is physics.

In fact, I don’t understand why we should privilege math to begin with. Maybe the universe indeed does not run on math (man, I still chuckle every time I recall that omake). Maybe the trouble we have with understanding the world is that we rely on math too much (sorry, getting too Chopra here). Maybe the matrix lord was a sloppy programmer whose bugs and self-contradictory assumptions manifest themselves to us as black hole singularities, which are hidden from view only because the code maintainers did a passable job of acting on the QA reports. There are many ideas which are just as pretty and just as unjustifiable as TL4.

I don’t pretend to fully grok the “complexity+leverage penalty” idea, except to say that your dark energy example makes me think less of it, as it seems to rely on considerations I find dubious (namely, that any model which, if accurate, could affect gazillions of people in the far future is extremely unlikely, despite being the best map currently available). Is it arrogant? Probably. Is it wrong? Not unless you prove the alternative right.
He’s not saying that the leverage penalty might be correct because we might live in a certain type of Tegmark IV; he’s saying that the fact that the leverage penalty would be correct if we did live in Tegmark IV + some other assumptions shows (a) that it is a consistent decision procedure and¹ (b) that it is the sort of decision procedure that emerges reasonably naturally, and is thus a more reasonable hypothesis than if we didn’t know it comes up naturally like that.
It is possible that it is hard to communicate here since Eliezer is making analogies to model theory, and I would assume that you are not familiar with model theory.
¹ The word ‘and’ isn’t really correct here. It’s very likely that EY means at least one of (a) and (b), and possibly both.
(Yep. More (a) than (b); it still feels pretty unnatural to me.)
Huh. This whole exchange makes me more certain that I am missing something crucial, but reading and dissecting it repeatedly does not seem to help. And apparently it’s not an issue of not knowing enough math. I guess the mental block I can’t get over is “why TL4?”. Or maybe “what other mental constructs could one use in place of TL4 to make a similar argument?”
Maybe paper-machine or someone else on #lesswrong will be able to clarify this.
Have you got one?
Not sure why you are asking, but yes, I pointed some out 5 levels up. They clearly have a complexity penalty, but I am not sure how much vs TL4. At least I know that the “sloppy programmer” construct is finite (though possibly circular). I am not sure how to even begin to estimate the Kolmogorov complexity of “everything mathematically possible exists physically”. What Turing machine would output all possible mathematical structures?
“Loop infinitely, incrementing count from 1: [Let steps be count. Iterate all legal programs until steps = 0 into prog: [Load submachine state from “cache tape”. Execute one step of prog, writing output to “output tape”. Save machine state onto “cache tape”. Decrement steps.] ]”

The output of every program is found on the output tape (albeit at intervals). I’m sure one could design the Turing machine so that it reordered the output tape with every piece of data written so that they’re in order too, if you want that. Or make it copypaste the entire output so far to the end of the tape, so that every number of evaluation steps for every Turing machine has its own tape location. Seemed a little wasteful though.
edit: THANK YOU GWERN. This is indeed what I was thinking of :D
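A minimal sketch of the dovetailing scheme described above, written in Python rather than as a Turing machine; the counting generators here are hypothetical stand-ins for an enumeration of all legal programs, so this only illustrates the interleaving, not a genuine universal enumeration:

# A toy dovetailer: interleave execution of an unbounded family of programs,
# giving program n its first step in round n, so that every program's output
# eventually shows up on the shared output tape (albeit at intervals).
from itertools import count

def make_program(i):
    """Hypothetical toy program #i: emits i, 2*i, 3*i, ... one value per step."""
    def run():
        for k in count(1):
            yield ("prog %d" % i, i * k)   # one paused-and-resumed step
    return run()

def dovetail(rounds):
    cache = []        # paused program states (the analogue of the "cache tape")
    output_tape = []  # everything any program has output so far
    for n in range(1, rounds + 1):
        cache.append(make_program(n))      # round n admits program n
        for prog in cache:                 # one step of each admitted program
            output_tape.append(next(prog))
    return output_tape

for entry in dovetail(5):
    print(entry)

Each generator plays the role of a submachine whose saved state lives on the “cache tape”; resuming it with next() is the “execute one step of prog” operation, and the schedule (program n gets one step in every round from n onward) matches the pseudocode’s steps = count loop.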
Hey, don’t look at me. I’m with you on “Existence of T4 is untestable therefore boring.”
You are right, I am out of my depth math-wise. Maybe that’s why I can’t see the relevance of an untestable theory to AI design.
It is the problem itself that seems relevant to AI design: how does an expected utility maximising agent handle edge cases and infinitesimals given logical uncertainty and bounded capabilities? If you get that wrong, then Rocks Fall and Everyone Dies. The relevance of any given theory of how such things can be modelled then rests on either its suitability for use in an AI design or, conceivably, the implications if an AI constructed and used said model.
(Also yep.)
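As a toy illustration of the “tiny probabilities of enormous utilities” problem raised above, here is a heavily simplified Python sketch; the numbers and the exact form of the leverage-style penalty (dividing the prior by the number of lives claimed to be at stake) are assumptions for illustration, not a statement of EY’s actual proposal:

def naive_expected_utility(p_claim, lives_at_stake, cost=1.0):
    # Expected utility of paying the mugger, with no adjustment for claim size.
    return p_claim * lives_at_stake - cost

def leverage_penalized_eu(p_claim, lives_at_stake, cost=1.0):
    # Cap the probability of occupying a position that leverages N lives at ~1/N,
    # so the enormous payoff is cancelled by the enormous improbability.
    p_adjusted = p_claim * (1.0 / lives_at_stake)
    return p_adjusted * lives_at_stake - cost

p, n = 1e-6, 3 ** 3 ** 3      # tiny credence, absurdly large claimed stakes
print("naive:    ", naive_expected_utility(p, n))   # dominated by n
print("penalized:", leverage_penalized_eu(p, n))    # roughly p - cost

The point is only structural: under the penalized version, scaling up the claimed stakes no longer scales up the expected value, which is the behaviour a leverage penalty is meant to buy.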
TL4, or at least (TL4+some measure theory that gives calculable and sensible answers), is not entirely unfalsifiable. For instance, it predicts that a random observer (you) should live in a very “big” universe. Since we have plausible reasons to believe TL0-TL3 (or at least, I think we do), and I have a very hard time imagining specific laws of physics that give “bigger” causal webs than you get from TL0-TL3, that gives me some weak evidence for TL4; it could have been falsified but wasn’t.
It seems plausible that that’s the only evidence we’ll ever get regarding TL4. If so, I’m not sure that either of the terms “testable” or “untestable” apply. “Testable” means “susceptible to reproducible experiment”; “untestable” means “unsusceptible to experiment”; so what do you call something in between, which is susceptible only to limited and irreproducible evidence? Quasitestable?
Of course, you could still perhaps say “I ignore it as only quasitestable and therefore useless for justifying anything interesting”.
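For what it’s worth, here is a toy Bayes calculation (all numbers hypothetical) of why a single “could have been falsified but wasn’t” observation counts only as the weak, one-off evidence described above:

prior_tl4 = 0.10              # hypothetical prior credence in TL4
p_big_given_tl4 = 0.99        # TL4 all but forces a "big" universe
p_big_given_not_tl4 = 0.80    # but many rival pictures predict one too

posterior_tl4 = (p_big_given_tl4 * prior_tl4) / (
    p_big_given_tl4 * prior_tl4 + p_big_given_not_tl4 * (1 - prior_tl4)
)
print(round(posterior_tl4, 3))   # ~0.121: a small update, and not a repeatable one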
TL4 seems testable by asking what a ‘randomly chosen’ observer would expect to see. In fact, the simplest version seems falsified by the lack of observed discontinuities in physics (of the ‘clothes turn into a crocodile’ type).
Variants of TL4 that might hold seem untestable right now. But we could see them as ideas or directions for groping towards a theory, rather than complete hypotheses. Or it might happen that when we understand anthropics better, we’ll see an obvious test. (Or the original hypothesis might turn out to work, but I strongly doubt that.)