shminux,
It’s just a fact that you endorse a very different theory of “reality” than Eliezer. Why disguise your reasonable disagreement with him by claiming that you don’t understand him?
You talk like you don’t notice when highly-qualified-physicist shminux is talking and when average-armchair-philosopher shminux is talking.
Which is annoying to me in particular because physicist shminux knows a lot more than I do, and I should pay attention to what he says in order to be less wrong, while philosopher shminux is not entitled to the same weight. So I’d like some markers of which one is talking.
I thought I was pretty clear re the “markers of which one is talking”. But let me recap.
Eliezer has thought about metaethics, decision theories and AI design for a much, much longer time, and much more seriously, than I have. I can see that when I read what he writes about issues I have not even thought of. While I cannot tell if it is correct, I can certainly tell that there is a fair amount of learning I still have to do if I want to say anything interesting. This is the same feeling I used to get (and still get on occasion) when talking with an expert in, say, General Relativity, before I learned the subject in sufficient depth. Now that I have some expertise in the area, I see the situation from the other side as well. I can often recognize a standard amateurish argument before the person making it has finished. I often know exactly what implicit false premises lead to the argument, because I have been there myself. If I am lucky, I can successfully point out the problematic assumptions to the amateur in question, provided I can simplify them to the proper level. If so, the reaction I get is “that’s so cool… so deep… I’ll go and ponder it. Thank you, Master!”, the same thing I used to feel when hearing an expert answer my amateurish questions.
As far as Eliezer’s area of expertise is concerned, I am on the wrong side of the gulf. Thus I am happy to learn what I can from him in this area and be gratified if my humble suggestions prove useful on occasion.
I am much more skeptical about his forays into Quantum Mechanics, Relativity and some other areas of physics I have more than a passing familiarity with. I do not get the feeling that what he says is “deep”, and only occasionally that it is “interesting”. Hence I am happy to discount his musings about MWI as amateurish.
There is this grey area between the two, which could be thought of as philosophy of science. While I am far from an expert in the area, I have put in a fair amount of effort to understand what the leading edge is. What I find is warring camps of hand-waving “experts” with few interesting insights and no way to convince the rival school of anything. The few interesting insights mostly happen in something more properly called math, linguistics or cognitive science, not philosophy proper. There is none of the feeling of awe you get from listening to a true expert in a field. Expert physicists who venture into philosophy, like Tegmark and Page, quickly lose their aura of expertise and seem like mere mortals with little or no advantage over other amateurs.
When Eliezer talks about something metaphysical related to MWI and Tegmark IV, or any kind of anthropics, I suspect that he is out of his depth, because he sounds like it. However, knowing that he is an expert in a somewhat related area makes me think that I may well have missed something important, and so I give him the benefit of the doubt and try to figure out what I may have missed. If the only difference is that I “endorse a very different theory of ‘reality’ than Eliezer”, and if this is indeed only a matter of endorsement, with no way to tell experimentally who is right, now or in the far future, then his “theory of reality” becomes much less relevant to me and therefore much less interesting. Oh, and here I don’t mean realism vs instrumentalism; I mean falsifiable models of the “real external world”, as opposed to anything Everett-like or Barbour-like.
Even if the field X is confused, to confidently dismiss subtheory Y you must know something confidently about Y from within this confusion, such as that Y is inconsistent or nonreductionist or something. I often occupy this mental state myself but I’m aware that it’s ‘arrogant’ and setting myself above everyone in field X who does think Y is plausible—for example, I am arrogant with respect to respected but elderly physicists who think single-world interpretations of QM are plausible, or anyone who thinks our confusion about the ultimate nature of reality can keep the God subtheory in the running. Our admitted confusion does not permit that particular answer to remain plausible.
I don’t think anyone I take seriously would deny that the field of anthropics / magical-reality-fluid is confused. What do you think you know about all computable processes, or all logical theories with models, existing, which makes that obviously impermitted? In case it’s not clear, I wasn’t endorsing Tegmark Level IV as the obvious truth the way I consider MWI obvious, nor yet endorsing it at all, rather I was pointing out that with some further specification a version of T4 could provide a model in which frequencies would go as the probabilities assigned by the complexity+leverage penalty, which would not necessarily make it true. It is not clear to me what epistemic state you could occupy from which this would justly disappoint you in me, unless you considered T4 obviously forbidden even from within our confusion. And of course I’m fine with your being arrogant about that, so long as you realize you’re being arrogant and so long as you have the epistemological firepower to back it up.
Maybe I was unclear. I don’t dismiss Y=TL4 as wrong, I ignore it as untestable and therefore useless for justifying anything interesting, like how an AI ought to deal with tiny probabilities of enormous utilities. I agree that I am “arrogant” here, in the sense that I discount the opinion of a smart and popular MIT prof as misguided. The postulate “mathematical existence = physical existence” raises a category-error exception for me, as one is, in your words, logic, and the other is physics. In fact, I don’t understand why we should privilege math to begin with. Maybe the universe indeed does not run on math (man, I still chuckle every time I recall that omake). Maybe the trouble we have with understanding the world is that we rely on math too much (sorry, getting too Chopra here). Maybe the matrix lord was a sloppy programmer whose bugs and self-contradictory assumptions manifest themselves to us as black hole singularities, hidden from view only because the code maintainers did a passable job of acting on the QA reports. There are many ideas which are just as pretty and just as unjustifiable as TL4. I don’t pretend to fully grok the “complexity+leverage penalty” idea, except to say that your dark energy example makes me think less of it, as it seems to rely on considerations I find dubious (that any model which, if accurate, could affect gazillions of people in the far future is extremely unlikely despite being the best map currently available). Is it arrogant? Probably. Is it wrong? Not unless you prove the alternative right.
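For what it’s worth, the arithmetic of the penalty under discussion can be sketched numerically. This is a toy rendering only; the function names and numbers are hypothetical illustrations, not Eliezer’s actual formulation:

```python
def naive_eu(p, utility):
    """Naive expected utility -- dominated by tiny-probability,
    enormous-utility offers (the Pascal's-mugging failure mode)."""
    return p * utility

def leveraged_eu(p, utility, leverage):
    """Toy leverage penalty: divide the prior probability by the
    number of agents the hypothesis claims your action affects,
    so the huge payoff and the huge penalty roughly cancel."""
    return (p / leverage) * utility

# A mugger's offer claiming to put 1e30 utility at stake by
# affecting 1e30 people, assigned a small prior:
p = 1e-10
print(naive_eu(p, 1e30))                     # huge: the naive agent pays up
print(leveraged_eu(p, 1e30, leverage=1e30))  # tiny: the offer is discounted
```

The point of the toy is only that the penalty scales with the claimed leverage, so arbitrarily inflating the promised utility no longer dominates the expected-value calculation.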
He’s not saying that the leverage penalty might be correct because we might live in a certain type of Tegmark IV, he’s saying that the fact that the leverage penalty would be correct if we did live in Tegmark IV + some other assumptions shows (a) that it is a consistent decision procedure and¹ (b) that it is the sort of decision procedure that emerges reasonably naturally and is thus a more reasonable hypothesis than if we didn’t know it comes up naturally like that.
It is possible that it is hard to communicate here since Eliezer is making analogies to model theory, and I would assume that you are not familiar with model theory.
¹ The word ‘and’ isn’t really correct here. It’s very likely that EY means at least one of (a) and (b), possibly both.
(Yep. More (a) than (b); it still feels pretty unnatural to me.)
Huh. This whole exchange makes me more certain that I am missing something crucial, but reading and dissecting it repeatedly does not seem to help. And apparently it’s not an issue of not knowing enough math. I guess the mental block I can’t get over is “why TL4?”. Or maybe “what other mental constructs could one use in place of TL4 to make a similar argument?”
Maybe paper-machine or someone else on #lesswrong will be able to clarify this.
Have you got one?
Not sure why you are asking, but yes, I pointed some out 5 levels up. They clearly carry a complexity penalty, but I am not sure how it compares to TL4’s. At least I know that the “sloppy programmer” construct is finite (though possibly circular). I am not sure how to even begin to estimate the Kolmogorov complexity of “everything mathematically possible exists physically”. What Turing machine would output all possible mathematical structures?
“Loop infinitely, incrementing count from 1: [Let steps be count. Iterate all legal programs until steps = 0 into prog: [Load submachine state from “cache tape”. Execute one step of prog, writing output to “output tape”. Save machine state onto “cache tape”. Decrement steps.] ]”
The output of every program is found on the output tape (albeit at intervals). I’m sure one could design the Turing machine so that it reordered the output tape with every piece of data written so that they’re in order too, if you want that. Or make it copy-paste the entire output so far to the end of the tape, so that every number of evaluation steps for every Turing machine has its own tape location. Seemed a little wasteful though.
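The quoted machine’s dovetailing schedule can be sketched in Python. This is a toy illustration only: `run_step` and the toy programs stand in for a real universal machine’s program enumeration, which is assumed, not implemented:

```python
def dovetail(run_step, rounds):
    """Interleave countably many programs: in round k, programs
    0..k-1 each get one more step, so every program receives
    unboundedly many steps in the limit -- the same trick the
    quoted Turing machine plays with its cache tape."""
    states = {}   # per-program saved state (the "cache tape")
    tape = []     # interleaved outputs (the "output tape")
    for k in range(1, rounds + 1):
        for i in range(k):
            state, out = run_step(i, states.get(i))
            states[i] = state
            if out is not None:
                tape.append((i, out))
    return tape

# Toy "program" i: on each step, emit the next multiple of i + 1.
def toy_step(i, state):
    n = (0 if state is None else state) + 1
    return n, (i + 1) * n

tape = dovetail(toy_step, rounds=5)
# After 5 rounds program 0 has run 5 steps and program 4 just one,
# but every program that has started appears on the tape.
```

The scheduling is the whole point: no single non-halting program can starve the rest, so every output of every program eventually lands on the tape.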
edit: THANK YOU GWERN. This is indeed what I was thinking of :D
Hey, don’t look at me. I’m with you on “Existence of T4 is untestable therefore boring.”
You are right, I am out of my depth math-wise. Maybe that’s why I can’t see the relevance of an untestable theory to AI design.
It seems to be the problem that is relevant to AI design. How does an expected utility maximising agent handle edge cases and infinitesimals, given logical uncertainty and bounded capabilities? If you get that wrong then Rocks Fall and Everyone Dies. The relevance of any given theory of how such things can be modelled then rests either on its suitability for use in an AI design or, conceivably, on the implications if an AI constructed and used said model.
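A minimal sketch of that edge-case concern, assuming a hypothetical agent that clamps ill-defined or unbounded utilities instead of letting them propagate (the cap and the option names are made up for illustration):

```python
import math

U_BOUND = 1e12  # hypothetical cap; with unbounded utilities, edge
                # cases dominate every decision the agent makes

def expected_utility(p, u):
    """Clamp non-finite and absurd utilities rather than letting an
    infinity or NaN decide the agent's action."""
    if not (math.isfinite(p) and math.isfinite(u)):
        return -math.inf  # refuse to act on ill-defined inputs
    return p * max(-U_BOUND, min(U_BOUND, u))

def choose(options):
    # Pick the option with the highest bounded expected utility.
    return max(options, key=lambda o: expected_utility(o["p"], o["u"]))

options = [
    {"name": "mundane", "p": 0.9, "u": 10.0},
    {"name": "mugging", "p": 1e-20, "u": float("inf")},
]
best = choose(options)  # "mundane": the infinite offer is clamped away
```

Whether clamping is the right move is exactly the open question; the sketch only shows how easily a naive maximiser is hijacked if nothing like it is in place.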
(Also yep.)
TL4, or at least (TL4+some measure theory that gives calculable and sensible answers), is not entirely unfalsifiable. For instance, it predicts that a random observer (you) should live in a very “big” universe. Since we have plausible reasons to believe TL0-TL3 (or at least, I think we do), and I have a very hard time imagining specific laws of physics that give “bigger” causal webs than you get from TL0-TL3, that gives me some weak evidence for TL4; it could have been falsified but wasn’t.
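The “random observer should live in a big universe” prediction can be illustrated with a toy self-sampling simulation; the observer counts here are invented for the sake of the example:

```python
import random

random.seed(0)  # deterministic toy run

# Hypothetical observer counts for a "small" and a "big" universe.
universes = {"small": 10, "big": 10**6}

# Self-sampling assumption: a random observer is drawn with
# probability proportional to each universe's population.
draws = random.choices(
    list(universes), weights=list(universes.values()), k=10_000
)
frac_big = draws.count("big") / len(draws)
# frac_big is close to 1: almost every randomly chosen observer
# finds themselves in the big universe, which is the sense in which
# our observing a "big" universe counts as weak evidence here.
```

The simulation is just the weighting argument made concrete; all the real work is hidden in the unresolved question of what measure to put over universes in the first place.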
It seems plausible that that’s the only evidence we’ll ever get regarding TL4. If so, I’m not sure that either of the terms “testable” or “untestable” apply. “Testable” means “susceptible to reproducible experiment”; “untestable” means “unsusceptible to experiment”; so what do you call something in between, which is susceptible only to limited and irreproducible evidence? Quasitestable?
Of course, you could still perhaps say “I ignore it as only quasitestable and therefore useless for justifying anything interesting”.
TL4 seems testable by asking what a ‘randomly chosen’ observer would expect to see. In fact, the simplest version seems falsified by the lack of observed discontinuities in physics (of the ‘clothes turn into a crocodile’ type).
Variants of TL4 that might hold seem untestable right now. But we could see them as ideas or directions for groping towards a theory, rather than complete hypotheses. Or it might happen that when we understand anthropics better, we’ll see an obvious test. (Or the original hypothesis might turn out to work, but I strongly doubt that.)