It’s an empirical fact (a meta-observation) that they do. You can postulate that there is a predictable universe that is the source of these observations, but this is a tautology: they are predictable because they originate in a predictable universe.
It’s an empirical fact (a meta-observation) that they do.
Right, and I’m asking why this particular meta-observation holds, as opposed to some other meta-observation, such as e.g. the meta-observation that the laws of physics change to something different every Sunday, or perhaps the meta-observation that there exists no regularity in our observations at all.
Again, without a certain regularity in our observations we would not be here talking about it. Or hallucinating talking about it. Or whatever. You can ask the “why” question all you want, but the only non-metaphysical answer can be another model, one more level deep. And then you can ask the “why” question again, and look for even deeper model. All. The. Way. Down.
That doesn’t seem to answer the question? You seem to be claiming that because any answer to the question will necessitate the asking of further questions, that means the question itself isn’t worth answering. If so, I think this is a claim that needs defending.
Maybe I misunderstand the question. My answer is that the only answer to any “why” question is constructing yet another model. Which is a very worthwhile undertaking, since the new model will hopefully make new testable predictions, in addition to explaining the known ones.
My actual question was “why are our observations structured rather than unstructured?”, which I don’t think you actually answered; the closest you got was
Again, without a certain regularity in our observations we would not be here talking about it. Or hallucinating talking about it. Or whatever.
which isn’t actually an explanation, so far as I can tell. I’d be more interested in hearing an object-level answer to the question.
why are our observations structured rather than unstructured?
are you asking why they are not random and unpredictable? That’s an observation in itself, as I pointed out… One might use the idea of predictable objective reality to make oneself feel better. It does not do much in terms of predictive power. Or you can think of yourself as a Boltzmann brain hallucinating a reality. Physicists actually talk about those as if they were more than idle musings.
are you asking why they are not random and unpredictable?
Yes, I am. I don’t see why the fact that that’s an “observation in itself” makes it an invalid question to ask. The fact of the matter is, there are many possible observation sequences, and the supermajority of those sequences contain nothing resembling structure or regularity. So the fact that we appear to be recording an observation sequence that is ordered introduces an improbability that needs to be addressed. How do you propose to address this improbability?
My answer is, as before, conditional on our ability to observe anything, the observations are guaranteed to be somewhat predictable. One can imagine completely random sequences of observation, of course, but those models are not self-consistent, as there have to be some regularities for the models to be constructed. In the usual speak those models refer to other potential universes, not to ours.
Hm. Interesting; I hadn’t realized you intended that to be your answer. In that case, however, the question simply gets kicked one level back:
conditional on our ability to observe anything
Why do we have this ability in the first place?
(Also, even granting that our ability to make observations implies some level of predictability—which I’m not fully convinced of—I don’t think it implies the level of predictability we actually observe. For one thing, it doesn’t rule out the possibility of the laws of physics changing every Sunday. I’m curious to know, on your model, why don’t we observe anything like that?)
our ability to make observations implies some level of predictability—which I’m not fully convinced of
Maybe we can focus on this one first, before tackling a harder question of what degree of predictability is observed, what it depends on, and what “the laws of physics changing every Sunday” would actually mean observationally.
Please describe a world in which there is no predictability at all, yet where agents “exist”. How would they survive without being able to find food, interact, or even breathe? Breathing alone means you have a body that can anticipate that breathing keeps it alive.
Please describe a world in which there is no predictability at all, yet where agents “exist”. How would they survive without being able to find food, interact, or even breathe? Breathing alone means you have a body that can anticipate that breathing keeps it alive.
I can write a computer program which trains some kind of learner (perhaps a neural network; I hear those are all the rage these days). I can then hook that program up to a quantum RNG, feeding it input bits that are random in the purest sense of the term. It seems to me that my learner would then exist in a “world” where no predictability exists, where the next input bit has absolutely nothing to do with previous input bits, etc. Perhaps not coincidentally, the learner in question would find that no hypothesis (if we’re dealing with a neural network, “hypothesis” will of course refer to a particular configuration of weights) provides a predictive edge over any other, and hence has no reason to prefer or disprefer any particular hypothesis.
You may protest that this example does not count—that even though the program’s input bits are random, it is nonetheless embedded in hardware whose behavior is lawfully determined—and thus that the program’s very existence is proof of at least some predictability. But what good is this assertion to the learner? Even if it manages to deduce its own existence (which is impossible for at least some types of learners—for example, a simple feed-forward neural net cannot ever learn to reflect on its own existence no matter how long it trains), this does not help it predict the next bit of input. (In fact, if I understood your position correctly, shminux, I suspect you would argue that such a learner would do well not to start making assumptions about its own existence, since such assumptions do not provide predictive value—just as you seem to believe the existence of a “territory” does not provide predictive value.)
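(A minimal version of this thought experiment can even be run. The sketch below is my own illustration, not anything from the discussion: a trivial Markov-style predictor stands in for the learner, and Python's seeded `random` module stands in for the quantum RNG. It is trained on pure noise and, for contrast, on a lawful sequence:)

```python
import random

def markov_accuracy(bits):
    """Predict each bit as the most common successor of the previous bit."""
    counts = {0: [0, 0], 1: [0, 0]}  # counts[prev][next]
    prev, correct = bits[0], 0
    for b in bits[1:]:
        guess = 0 if counts[prev][0] >= counts[prev][1] else 1
        correct += (guess == b)
        counts[prev][b] += 1
        prev = b
    return correct / (len(bits) - 1)

random.seed(0)  # pseudo-random stand-in for a quantum RNG
noise = [random.getrandbits(1) for _ in range(10_000)]
lawful = [i % 2 for i in range(10_000)]  # a trivially "predictable universe"

print(markov_accuracy(noise))   # ~0.5: no hypothesis gains an edge over guessing
print(markov_accuracy(lawful))  # ~1.0: structure, once present, is learnable
```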
But to tie this back to the original topic of conversation: empirically, we are not in the position of the unfortunate learner I just described. We do not appear to be receiving random input data; our observations are highly structured in a way that strongly suggests (to me, at least) that there is something forcing them to be so. Perhaps our input bits come from a lawful external reality; that would certainly qualify as “something forcing them to be [structured]”. This “external reality” hypothesis successfully explains what would otherwise be a gigantic improbability, and I don’t think there are any competing hypotheses at this stage—unless of course you consider “there is no external reality, and our observations are only structured due to a giant cosmic coincidence” to be an alternative hypothesis worth putting forth. (As some of my comments so far might imply, I do not consider this alternative hypothesis very probable.)
I can write a computer program which trains some kind of learner
Uh. To write a program one needs at least a little bit of predictability. So I am assuming the program is external to the unpredictable world you are describing. Is that a fair assumption?
And what about the learner program? Does it exist in that unpredictable world?
You may protest that this example does not count—that even though the program’s input bits are random, it is nonetheless embedded in hardware whose behavior is lawfully determined—and thus that the program’s very existence is proof of at least some predictability.
Exactly. So you are saying that that universe’s predictability only applies to one specific algorithm, the learner program, right? It’s a bit contrived and somewhat solipsistic, but, sure, it’s interesting to explore. Not something I had seriously considered before.
We do not appear to be receiving random input data; our observations are highly structured in a way that strongly suggests (to me, at least) that there is something forcing them to be so.
Yep, it’s a good model at times. But just that, a model. Not all observed inputs fit well into the “objective reality” framework. Consider the occurrences where insisting on objective reality actually leads you away from useful models. E.g. “are numbers real?”
unless of course you consider “there is no external reality, and our observations are only structured due to a giant cosmic coincidence”
No. This sentence already presumes external reality, right there in the words “cosmic coincidence,” so, as far as I can tell, the logic there is circular.
This sentence already presumes external reality, right there in the words “cosmic coincidence,”
I’m not sure what you mean by this. The most straightforward interpretation of your words seems to imply that you think the word “coincidence”—which (in usual usage) refers simply to an improbable occurrence—presumes the existence of an external reality, but I’m not sure why that would be so.
(Unless it’s the word “cosmic” that you object to? If so, that word can be dropped without issue, I think.)
Yes, “cosmic coincidence”. What does it mean? Coincidence, interpreted as a low probability event, presumes a probability distribution over… something, I am not sure what in your case, if not an external reality.
a probability distribution over… something, I am not sure what in your case, if not an external reality.
I confess to being quite confused by this statement. Probability distributions can be constructed without making any reference to an “external reality”; perhaps the purest example would simply be some kind of prior over different input sequences. At this point, I suspect you and I may be taking the phrase “external reality” to mean very different things—so if you don’t mind, could I ask you to rephrase the quoted statement after Tabooing “external reality” and all synonyms?
EDIT: I suppose if I’m going to ask you to Taboo “external reality”, I may as well do the same thing for “cosmic coincidence”, just to try and help bridge the gap more quickly. The original statement (for reference):
There is no external reality, and our observations are only structured due to a giant cosmic coincidence.
And here is the Tabooed version (which is, as expected, much longer):
Although there is a model in our hypothesis space with an excellent compression ratio on our past observations, we should not expect this model to continue performing well on future observations. That is, we should not expect there to be a model in our hypothesis space that outperforms the max-entropy distribution (which assigns equal probability to all possible future observation sequences), and although we currently have a model that appears to be significantly outperforming the max-entropy distribution, this is merely an artifact of our finite dataset, which we may safely expect to disappear shortly.
Taken literally, the “coincidence hypothesis” predicts that our observations ought to dissolve into a mess of random chaos, which as far as I can tell is not happening. To me, this suffices to establish the (probable) existence of some kind of fixed reality.
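(The contrast drawn here, a model with an excellent compression ratio versus the max-entropy distribution, can be illustrated with an off-the-shelf compressor. This is my own sketch using Python's `zlib`, with arbitrary data sizes:)

```python
import random
import zlib

random.seed(0)
structured = bytes(i % 256 for i in range(10_000))           # highly regular "observations"
noise = bytes(random.getrandbits(8) for _ in range(10_000))  # incompressible "observations"

print(len(zlib.compress(structured)))  # a few dozen bytes: the model found the regularity
print(len(zlib.compress(noise)))       # ~10,000 bytes: nothing beats max entropy here
```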
Thank you for rephrasing. Let me try my version. Notice how it doesn’t assume anything about probabilities of coincidences, as I don’t see those contributing to better predictions.
A certain set of past inputs has proven fruitful for constructing models that reasonably accurately predict similar sets of future inputs. Some of these models cover an especially wide range of input sets. This seemingly near-universal applicability of some models makes it tempting to privilege such a set of models over other, more narrowly applicable ones, and to call this set the source of all inputs we can possibly receive, a “reality”.
In other words, sometimes observations can be used to make good predictions, for a time. Then we assume that these predictions have a single source, the external reality. I guess I don’t get your point about needing to regress to unpredictability without postulating that reality thing.
(Okay, I’ve been meaning to get back to you on this for a while, but for some reason haven’t until now.)
It seems, based on what you’re saying, that you’re taking “reality” to mean some preferred set of models. If so, then I think I was correct that you and I were using the same term to refer to different concepts. I still have some questions for you regarding your position on “reality” as you understand the term, but I think it may be better to defer those until after I give a basic rundown of my position.
Essentially, my belief in an external reality, if we phrase it in the same terms we’ve been using (namely, the language of models and predictions), can be summarized as the belief that there is some (reachable) model within our hypothesis space that can perfectly predict further inputs. This can be further repackaged into an empirical prediction: I expect that (barring an existential catastrophe that erases us entirely) there will eventually come a point when we have the “full picture” of physics, such that no further experiments we perform will ever produce a result we find surprising. If we arrive at such a model, I would be comfortable referring to that model as “true”, and the phenomena it describes as “reality”.
Initially, I took you to be asserting the negation of the above statement—namely, that we will never stop being surprised by the universe, and that our models, though they might asymptotically approach a rate of 100% predictive success, will never quite get there. It is this claim that I find implausible, since it seems to imply that there is no model in our hypothesis space capable of predicting further inputs with 100% accuracy—but if that is the case, why do we currently have a model with >99% predictive accuracy? Is the success of this model a mere coincidence? It must be, since (by assumption) there is no model actually capable of describing the universe. This is what I was gesturing at with the “coincidence” hypothesis I kept mentioning.
Now, perhaps you actually do hold the position described in the above paragraph. (If you do, please let me know.) But based on what you wrote, it doesn’t seem necessary for me to assume that you do. Rather, you seem to be saying something along the lines of, “It may be tempting to take our current set of models as describing how reality ultimately is, but in fact we have no way of knowing this for sure, so it’s best not to assume anything.”
If that’s all you’re saying, it doesn’t necessarily conflict with my view (although I’d suggest that “reality doesn’t exist” is a rather poor way to go about expressing this sentiment). Nonetheless, if I’m correct about your position, then I’m curious as to what you think it’s useful for? Presumably it doesn’t help make any predictions (almost by definition), so I assume you’d say it’s useful for dissolving certain kinds of confusion. Any examples, if so?
It seems, based on what you’re saying, that you’re taking “reality” to mean some preferred set of models.
Depending on the meaning of the word preferred. I tend to use “useful” instead.
my belief in an external reality, if we phrase it in the same terms we’ve been using (namely, the language of models and predictions), can be summarized as the belief that there is some (reachable) model within our hypothesis space that can perfectly predict further inputs.
It’s a common belief, but it appears to me quite unfounded, since it hasn’t happened in millennia of trying. So, a direct observation speaks against this model.
I expect that (barring an existential catastrophe that erases us entirely) there will eventually come a point when we have the “full picture” of physics, such that no experiment we perform will produce a result we find surprising.
It’s another common belief, though separate from the belief in reality. It is a belief that this reality is efficiently knowable, a bold prediction that is not supported by evidence and has hints to the contrary from complexity theory.
If we arrive at such a model, I would be comfortable referring to that model as “true”, and the phenomena it describes as “reality”.
Yes, in this highly hypothetical case I would agree.
Initially, I took you to be asserting the negation of the above statement—namely, that we will never stop being surprised by the universe, and that our models, though they might asymptotically approach a rate of 100% predictive success, will never quite get there.
I make no claims one way or the other. We tend to get better at predicting observations in certain limited areas, though it tends to come at a cost. In high-energy physics the progress has slowed to a standstill: no interesting observations have been predicted since the last millennium. General Relativity plus the Standard Model of particle physics have stood unchanged and unchallenged for decades, the magic numbers they require remaining unexplained since the Higgs mass was predicted a long time ago. While this suggests that, yes, we will probably never stop being surprised by the universe (or rather, by the observations), I make no such claims.
It is this claim that I find implausible, since it seems to imply that there is no model in our hypothesis space capable of predicting further inputs with 100% accuracy—but if that is the case, why do we currently have a model with >99% predictive accuracy?
Yes, we do have a good handle on many isolated sets of observations, though what you mean by 99% is not clear to me. Similarly, I don’t know what you mean by 100% accuracy here. I can imagine that in some limited areas 100% accuracy can be achievable, though we often get surprised even there. Say, in math the Hilbert Program had a surprising twist. Feel free to give examples of 100% predictability, and we can discuss them. I find this model (of no universal perfect predictability) very plausible and confirmed by observations. I am still unsure what you mean by coincidence here. The dictionary defines it as “A remarkable concurrence of events or circumstances without apparent causal connection,” and that opens a whole new can of worms about what “apparent” and “causal” mean in the situation we are describing, and we will soon be back to a circular argument, implying some underlying reality to explain why we need to postulate reality.
Now, perhaps you actually do hold the position described in the above paragraph. (If you do, please let me know.) But based on what you wrote, it doesn’t seem necessary for me to assume that you do. Rather, you seem to be saying something along the lines of, “It may be tempting to take our current set of models as describing how reality ultimately is, but in fact we have no way of knowing this for sure, so it’s best not to assume anything.”
I don’t disagree with the quoted part, it’s a decent description.
If that’s all you’re saying, it doesn’t necessarily conflict with my view (although I’d suggest that “reality doesn’t exist” is a rather poor way to go about expressing this sentiment). Nonetheless, if I’m correct about your position, then I’m curious as to what you think it’s useful for? Presumably it doesn’t help make any predictions (almost by definition), so I assume you’d say it’s useful for dissolving certain kinds of confusion. Any examples, if so?
“reality doesn’t exist” was not my original statement; it was “models all the way down”, a succinct way to express the current state of knowledge, where all we get is observations and layers of models based on them predicting future observations. It is useful for avoiding going astray with questions about the existence or non-existence of something, like numbers, the multiverse, or qualia. If you stick to models, these questions are dissolved as meaningless (not useful for predicting future observations), just like the question of counting angels on the head of a pin. Tegmark Level X, the hard problem of consciousness, MWI vs. Copenhagen: none of these are worth arguing over until and unless you suggest something that can be potentially observable.
It’s a common belief, but it appears to me quite unfounded, since it hasn’t happened in millennia of trying. So, a direct observation speaks against this model.
...
It’s another common belief, though separate from the belief in reality. It is a belief that this reality is efficiently knowable, a bold prediction that is not supported by evidence and has hints to the contrary from complexity theory.
...
General Relativity plus the Standard Model of particle physics have stood unchanged and unchallenged for decades, the magic numbers they require remaining unexplained since the Higgs mass was predicted a long time ago. While this suggests that, yes, we will probably never stop being surprised by the universe (or rather, by the observations), I make no such claims.
I think at this stage we have finally hit upon a point of concrete disagreement. If I’m interpreting you correctly, you seem to be suggesting that because humans have not yet converged on a “Theory of Everything” after millennia of trying, this is evidence against the existence of such a theory.
It seems to me, on the other hand, that our theories have steadily improved over those millennia (in terms of objectively verifiable metrics like their ability to predict the results of increasingly esoteric experiments), and that this is evidence in favor of an eventual theory of everything. That we haven’t converged on such a theory yet is simply a consequence, in my view, of the fact that the correct theory is in some sense hard to find. But to postulate that no such theory exists is, I think, not only unsupported by the evidence, but actually contradicted by it—unless you’re interpreting the state of scientific progress quite differently than I am.*
That’s the argument from empirical evidence, which (hopefully) allows for a more productive disagreement than the relatively abstract subject matter we’ve discussed so far. However, I think one of those abstract subjects still deserves some attention—in particular, you expressed further confusion about my use of the word “coincidence”:
I am still unsure what you mean by coincidence here. The dictionary defines it as “A remarkable concurrence of events or circumstances without apparent causal connection,” and that opens a whole new can of worms about what “apparent” and “causal” mean in the situation we are describing, and we will soon be back to a circular argument, implying some underlying reality to explain why we need to postulate reality.
I had previously provided a Tabooed version of my statement, but perhaps even that was insufficiently clear. (If so, I apologize.) This time, instead of attempting to make my statement even more abstract, I’ll try taking a different tack and making things more concrete:
I don’t think that, if our observations really were impossible to model completely accurately, we would be able to achieve the level of predictive success we have. The fact that we have managed to achieve some level of predictive accuracy (not 100%, but some!) strongly suggests to me that our observations are not impossible to model—and I say this for a very simple reason:
How can it be possible to achieve even partial accuracy at predicting something that is purportedly impossible to model? We can’t have done it by actually modeling the thing, of course, because we’re assuming, by hypothesis, that the thing cannot be modeled. So our seeming success at predicting the thing must not actually be due to any kind of successful modeling of said thing. Then how is it that our model is producing seemingly accurate predictions? It seems as though we are in a similar position to a lazy student who, upon being presented with a test they didn’t study for, is forced to guess the right answers—except that in our case, the student somehow gets lucky enough to choose the correct answer every time, despite the fact that they are merely guessing, rather than working out the answer the way they should.
I think that the word “coincidence” is a decent way of describing the student’s situation in this case, even if it doesn’t fully accord with your dictionary’s definition (after all, whoever said dictionary editors have the sole power to determine a word’s usage?), and analogously, our model of the thing must also only be making correct predictions by coincidence, since we’ve ruled out the possibility, a priori, that it might actually be correctly modeling the way the thing works.
I find it implausible that our models are actually behaving this way with respect to the “thing”/the universe, in precisely the same way I would find it implausible that a student who scored 95% on a test had simply guessed on all of the questions. I hope that helps clarify what I meant by “coincidence” in this context.
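(For what it’s worth, the implausibility of the guessing student is easy to quantify. A Python sketch, assuming a hypothetical 100-question true/false test; the numbers are my own:)

```python
from math import comb

n = 100          # true/false questions (hypothetical test)
threshold = 95   # the student's score, as a count of correct answers
# Probability of scoring at least 95/100 by pure guessing (binomial tail, p = 1/2):
p = sum(comb(n, j) for j in range(threshold, n + 1)) / 2 ** n
print(p)  # on the order of 1e-22
```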
*You did say, of course, that you weren’t making any claims or postulates to that effect. But it certainly seems to me that you’re not completely agnostic on the issue—after all, your initial claim was “it’s models all the way down”, and you’ve fairly consistently stuck to defending that claim throughout not just this thread, but your entire tenure on LW. So I think it’s fair to treat you as holding that position, at least for the sake of a discussion like this.
It seems to me, on the other hand, that our theories have steadily improved over those millennia (in terms of objectively verifiable metrics like their ability to predict the results of increasingly esoteric experiments)
Yes, definitely.
and that this is evidence in favor of an eventual theory of everything.
I don’t see why it would be. Just because one is able to march forward doesn’t mean that there is a destination. There are many possible alternatives. One is that we will keep making more accurate models (in the sense of making more detailed confirmed predictions in more areas) without ever ending anywhere. Another is that we will stall in our predictive abilities and stop making measurable progress, getting stuck in a swamp, so to speak. This could happen, for example, if the computational power required to make better predictions grows exponentially with accuracy. Yet another alternative is that the act of making a better model actually creates new observations (in your language, changes the laws of the universe). After all, if you believe that we are agents embedded in the universe, then our actions change the universe, and who is to say that at some point they won’t change even what we think are the fundamental laws. There is an amusing novel about the universe protecting itself from overly inquisitive humans: https://en.wikipedia.org/wiki/Definitely_Maybe_(novel)
How can it be possible to achieve even partial accuracy at predicting something that is purportedly impossible to model?
I don’t believe I have said anything of the sort. Of course we are able to build models. Without predictability, life, let alone consciousness, would be impossible; that was one of my original statements. I don’t know what it is I said that gave you the impression that abandoning the concept of objective reality means we ought to lose predictability in any way.
Again:
But to postulate that no such theory exists is, I think, not only unsupported by the evidence, but actually contradicted by it—unless you’re interpreting the state of scientific progress quite differently than I am.*
I don’t postulate it. You postulate that there is something at the bottom. I’m simply saying that there is no need for this postulate, and, given what we see so far, every prediction of absolute knowledge in a given area has turned out to be wrong. So, odds are, whether or not there is something at the bottom, at this point this postulate is harmful, rather than useful, and wholly unnecessary. Our current experience suggests that it is all models, and if this ever changes, that would be a surprise.
It’s an empirical fact (a meta-observation) that they do. You can postulate that there is a predictable universe that is the source of these observations, but this is a tautology: they are predictable because they originate in a predictable universe.
Right, and I’m asking why this particular meta-observation holds, as opposed to some other meta-observation, such as e.g. the meta-observation that the laws of physics change to something different every Sunday, or perhaps the meta-observation that there exists no regularity in our observations at all.
Again, without a certain regularity in our observations we would not be here talking about it. Or hallucinating talking about it. Or whatever. You can ask the “why” question all you want, but the only non-metaphysical answer can be another model, one more level deep. And then you can ask the “why” question again, and look for even deeper model. All. The. Way. Down.
That doesn’t seem to answer the question? You seem to be claiming that because any answer to the question will necessitate the asking of further questions, that means the question itself isn’t worth answering. If so, I think this is a claim that needs defending.
Maybe I misunderstand the question. My answer is that the only answer to any “why” question is constructing yet another model. Which is a very worthwhile undertaking, since the new model will hopefully make new testable predictions, in addition to explaining the known ones.
My actual question was “why are our observations structured rather than unstructured?”, which I don’t think you actually answered; the closest you got was
which isn’t actually an explanation, so far as I can tell. I’d be more interested in hearing an object-level answer to the question.
I am still not sure what you mean.
are you asking why they are not random and unpredictable? That’s an observation in itself, as I pointed out… One might use the idea of predictable objective reality to make oneself feel better. It does not do much in terms of predictive power. Or you can think of yourself as a Boltzmann brain hallucinating a reality. Physicists actually talk about those as if they were more than idle musings.
Yes, I am. I don’t see why the fact that that’s an “observation in itself” makes it an invalid question to ask. The fact of the matter is, there are many possible observation sequences, and the supermajority of those sequences contain nothing resembling structure or regularity. So the fact that we appear to be recording an observation sequence that is ordered introduces an improbability that needs to be addressed. How do you propose to address this improbability?
My answer is, as before, conditional on our ability to observe anything, the observations are guaranteed to be somewhat predictable. One can imagine completely random sequences of observation, of course, but those models are not self-consistent, as there have to be some regularities for the models to be constructed. In the usual speak those models refer to other potential universes, not to ours.
Hm. Interesting; I hadn’t realized you intended that to be your answer. In that case, however, the question simply gets kicked one level back:
Why do we have this ability in the first place?
(Also, even granting that our ability to make observations implies some level of predictability—which I’m not fully convinced of—I don’t think it implies the level of predictability we actually observe. For one thing, it doesn’t rule out the possibility of the laws of physics changing every Sunday. I’m curious to know, on your model, why don’t we observe anything like that?)
Maybe we can focus on this one first, before tackling a harder question of what degree of predictability is observed, what it depends on, and what “the laws of physics changing every Sunday” would actually mean observationally.
Please describe a world in which there is no predictability at all, yet where agents “exist”. How do they survive without being able to find food, interact, or even breathe? After all, breathing means you have a body that can anticipate that breathing keeps it alive.
I can write a computer program which trains some kind of learner (perhaps a neural network; I hear those are all the rage these days). I can then hook that program up to a quantum RNG, feeding it input bits that are random in the purest sense of the term. It seems to me that my learner would then exist in a “world” where no predictability exists, where the next input bit has absolutely nothing to do with previous input bits, etc. Perhaps not coincidentally, the learner in question would find that no hypothesis (if we’re dealing with a neural network, “hypothesis” will of course refer to a particular configuration of weights) provides a predictive edge over any other, and hence has no reason to prefer or disprefer any particular hypothesis.
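The setup can be sketched concretely. Below, an ordinary seeded pseudorandom source stands in for the quantum RNG, and a simple frequency-table predictor stands in for the neural network (both substitutions are mine, for illustration); the point survives the substitution, since the learner sees only the bit stream:

```python
import random

random.seed(0)  # stand-in for the quantum RNG; any patternless source will do

# Minimal learner: predict the next bit from the majority bit previously
# seen after each length-3 context. A toy stand-in for the neural net.
counts = {}  # context -> (count of ones, total count)
correct = 0
bits = [random.getrandbits(1) for _ in range(100_000)]
for i in range(3, len(bits)):
    ctx = tuple(bits[i - 3:i])
    ones, total = counts.get(ctx, (0, 0))
    guess = 1 if ones * 2 > total else 0
    correct += (guess == bits[i])
    counts[ctx] = (ones + bits[i], total + 1)

accuracy = correct / (len(bits) - 3)
# On truly patternless input no context helps: accuracy hovers near 0.5,
# and no setting of the learner's internal state would do any better.
print(f"accuracy = {accuracy:.3f}")
```

Whatever hypothesis the learner adopts, its expected accuracy is pinned at chance, which is exactly the sense in which its “world” contains no predictability.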
You may protest that this example does not count—that even though the program’s input bits are random, it is nonetheless embedded in hardware whose behavior is lawfully determined—and thus that the program’s very existence is proof of at least some predictability. But what good is this assertion to the learner? Even if it manages to deduce its own existence (which is impossible for at least some types of learners—for example, a simple feed-forward neural net cannot ever learn to reflect on its own existence no matter how long it trains), this does not help it predict the next bit of input. (In fact, if I understood your position correctly, shminux, I suspect you would argue that such a learner would do well not to start making assumptions about its own existence, since such assumptions do not provide predictive value—just as you seem to believe the existence of a “territory” does not provide predictive value.)
But to tie this back to the original topic of conversation: empirically, we are not in the position of the unfortunate learner I just described. We do not appear to be receiving random input data; our observations are highly structured in a way that strongly suggests (to me, at least) that there is something forcing them to be so. Perhaps our input bits come from a lawful external reality; that would certainly qualify as “something forcing them to be [structured]”. This “external reality” hypothesis successfully explains what would otherwise be a gigantic improbability, and I don’t think there are any competing hypotheses at this stage—unless of course you consider “there is no external reality, and our observations are only structured due to a giant cosmic coincidence” to be an alternative hypothesis worth putting forth. (As some of my comments so far might imply, I do not consider this alternative hypothesis very probable.)
Uh. To write a program one needs at least a little bit of predictability. So I am assuming the program is external to the unpredictable world you are describing. Is that a fair assumption?
And what about the learner program? Does it exist in that unpredictable world?
Exactly. So you are saying that that universe’s predictability only applies to one specific algorithm, the learner program, right? It’s a bit contrived and somewhat solipsistic, but, sure, it’s interesting to explore. Not something I had seriously considered before.
Yep, it’s a good model at times. But just that, a model. Not all observed inputs fit well into the “objective reality” framework. Consider the occurrences where insisting on objective reality actually leads you away from useful models. E.g. “are numbers real?”
No. This sentence already presumes external reality, right there in the words “cosmic coincidence,” so, as far as I can tell, the logic there is circular.
I’m not sure what you mean by this. The most straightforward interpretation of your words seems to imply that you think the word “coincidence”—which (in usual usage) refers simply to an improbable occurrence—presumes the existence of an external reality, but I’m not sure why that would be so.
(Unless it’s the word “cosmic” that you object to? If so, that word can be dropped without issue, I think.)
Yes, “cosmic coincidence”. What does it mean? Coincidence, interpreted as a low probability event, presumes a probability distribution over… something, I am not sure what in your case, if not an external reality.
I confess to being quite confused by this statement. Probability distributions can be constructed without making any reference to an “external reality”; perhaps the purest example would simply be some kind of prior over different input sequences. At this point, I suspect you and I may be taking the phrase “external reality” to mean very different things—so if you don’t mind, could I ask you to rephrase the quoted statement after Tabooing “external reality” and all synonyms?
EDIT: I suppose if I’m going to ask you to Taboo “external reality”, I may as well do the same thing for “cosmic coincidence”, just to try and help bridge the gap more quickly. The original statement (for reference):
And here is the Tabooed version (which is, as expected, much longer):
Taken literally, the “coincidence hypothesis” predicts that our observations ought to dissolve into a mess of random chaos, which as far as I can tell is not happening. To me, this suffices to establish the (probable) existence of some kind of fixed reality.
Thank you for rephrasing. Let me try my version. Notice how it doesn’t assume anything about probabilities of coincidences, as I don’t see those contributing to better predictions.
In other words, sometimes observations can be used to make good predictions, for a time. Then we assume that these predictions have a single source, the external reality. I guess I don’t get your point about needing to regress to unpredictability without postulating that reality thing.
(Okay, I’ve been meaning to get back to you on this for a while, but for some reason haven’t until now.)
It seems, based on what you’re saying, that you’re taking “reality” to mean some preferred set of models. If so, then I think I was correct that you and I were using the same term to refer to different concepts. I still have some questions for you regarding your position on “reality” as you understand the term, but I think it may be better to defer those until after I give a basic rundown of my position.
Essentially, my belief in an external reality, if we phrase it in the same terms we’ve been using (namely, the language of models and predictions), can be summarized as the belief that there is some (reachable) model within our hypothesis space that can perfectly predict further inputs. This can be further repackaged into an empirical prediction: I expect that (barring an existential catastrophe that erases us entirely) there will eventually come a point when we have the “full picture” of physics, such that no further experiments we perform will ever produce a result we find surprising. If we arrive at such a model, I would be comfortable referring to that model as “true”, and the phenomena it describes as “reality”.
Initially, I took you to be asserting the negation of the above statement—namely, that we will never stop being surprised by the universe, and that our models, though they might asymptotically approach a rate of 100% predictive success, will never quite get there. It is this claim that I find implausible, since it seems to imply that there is no model in our hypothesis space capable of predicting further inputs with 100% accuracy—but if that is the case, why do we currently have a model with >99% predictive accuracy? Is the success of this model a mere coincidence? It must be, since (by assumption) there is no model actually capable of describing the universe. This is what I was gesturing at with the “coincidence” hypothesis I kept mentioning.
Now, perhaps you actually do hold the position described in the above paragraph. (If you do, please let me know.) But based on what you wrote, it doesn’t seem necessary for me to assume that you do. Rather, you seem to be saying something along the lines of, “It may be tempting to take our current set of models as describing how reality ultimately is, but in fact we have no way of knowing this for sure, so it’s best not to assume anything.”
If that’s all you’re saying, it doesn’t necessarily conflict with my view (although I’d suggest that “reality doesn’t exist” is a rather poor way to go about expressing this sentiment). Nonetheless, if I’m correct about your position, then I’m curious what you think it’s useful for. Presumably it doesn’t help make any predictions (almost by definition), so I assume you’d say it’s useful for dissolving certain kinds of confusion. Any examples, if so?
Depending on the meaning of the word “preferred”. I tend to use “useful” instead.
It’s a common belief, but it appears to me quite unfounded, since it hasn’t happened in millennia of trying. So, a direct observation speaks against this model.
It’s another common belief, though separate from the belief in reality. It is a belief that this reality is efficiently knowable, a bold prediction that is not supported by evidence and has hints to the contrary from complexity theory.
Yes, in this highly hypothetical case I would agree.
I make no claims one way or the other. We tend to get better at predicting observations in certain limited areas, though it tends to come at a cost. In high-energy physics the progress has slowed to a standstill; no interesting observations have been predicted since last millennium. General Relativity plus the Standard Model of particle physics have stood unchanged and unchallenged for decades, the magic numbers they require remaining unexplained since the Higgs mass was predicted long ago. While this suggests that, yes, we will probably never stop being surprised by the ~~universe~~ observations, I make no such claims.
Yes, we do have a good handle on many isolated sets of observations, though what you mean by 99% is not clear to me. Similarly, I don’t know what you mean by 100% accuracy here. I can imagine that in some limited areas 100% accuracy can be achievable, though we often get surprised even there. Say, in math the Hilbert program had a surprising twist. Feel free to give examples of 100% predictability, and we can discuss them. I find this model (of no universal perfect predictability) very plausible and confirmed by observations. I am still unsure what you mean by coincidence here. The dictionary defines it as “A remarkable concurrence of events or circumstances without apparent causal connection,” and that opens a whole new can of worms about what “apparent” and “causal” mean in the situation we are describing, and we will soon be back to a circular argument that implies some underlying reality in order to explain why we need to postulate reality.
I don’t disagree with the quoted part, it’s a decent description.
“reality doesn’t exist” was not my original statement, it was “models all the way down”, a succinct way to express the current state of knowledge, where all we get is observations and layers of models based on them predicting future observations. It is useful for avoiding being led astray by questions about the existence or non-existence of something, like numbers, the multiverse or qualia. If you stick to models, these questions are dissolved as meaningless (not useful for predicting future observations). Just like the question of counting angels on the head of a pin. Tegmark Level X, the hard problem of consciousness, MWI vs Copenhagen, none of these are worth arguing over until and unless you suggest something that can be potentially observable.
...
...
I think at this stage we have finally hit upon a point of concrete disagreement. If I’m interpreting you correctly, you seem to be suggesting that because humans have not yet converged on a “Theory of Everything” after millennia of trying, this is evidence against the existence of such a theory.
It seems to me, on the other hand, that our theories have steadily improved over those millennia (in terms of objectively verifiable metrics like their ability to predict the results of increasingly esoteric experiments), and that this is evidence in favor of an eventual theory of everything. That we haven’t converged on such a theory yet is simply a consequence, in my view, of the fact that the correct theory is in some sense hard to find. But to postulate that no such theory exists is, I think, not only unsupported by the evidence, but actually contradicted by it—unless you’re interpreting the state of scientific progress quite differently than I am.*
That’s the argument from empirical evidence, which (hopefully) allows for a more productive disagreement than the relatively abstract subject matter we’ve discussed so far. However, I think one of those abstract subjects still deserves some attention—in particular, you expressed further confusion about my use of the word “coincidence”:
I had previously provided a Tabooed version of my statement, but perhaps even that was insufficiently clear. (If so, I apologize.) This time, instead of attempting to make my statement even more abstract, I’ll try taking a different tack and making things more concrete:
I don’t think that, if our observations really were impossible to model completely accurately, we would be able to achieve the level of predictive success we have. The fact that we have managed to achieve some level of predictive accuracy (not 100%, but some!) strongly suggests to me that our observations are not impossible to model—and I say this for a very simple reason:
How can it be possible to achieve even partial accuracy at predicting something that is purportedly impossible to model? We can’t have done it by actually modeling the thing, of course, because by hypothesis the thing cannot be modeled. So our seeming success at predicting the thing must not actually be due to any kind of successful modeling of said thing. Then how is it that our model is producing seemingly accurate predictions? It seems as though we are in a similar position to a lazy student who, upon being presented with a test they didn’t study for, is forced to guess the right answers—except that in our case, the student somehow gets lucky enough to choose the correct answer every time, despite the fact that they are merely guessing, rather than working out the answers the way they should.
I think that the word “coincidence” is a decent way of describing the student’s situation in this case, even if it doesn’t fully accord with your dictionary’s definition (after all, who said the dictionary editors have the sole power to determine a word’s usage?). Analogously, our model of the thing must also only be making correct predictions by coincidence, since we’ve ruled out a priori the possibility that it might actually be correctly modeling the way the thing works.
I find it implausible that our models are actually behaving this way with respect to the “thing”/the universe, in precisely the same way I would find it implausible that a student who scored 95% on a test had simply guessed on all of the questions. I hope that helps clarify what I meant by “coincidence” in this context.
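The implausibility of the guessing student is easy to quantify. Taking an illustrative test of 100 four-option questions (the numbers are mine; only the order of magnitude matters), the chance of guessing one’s way to a 95% score is a binomial tail probability:

```python
from math import comb

# P(score >= 95 out of 100) for a student guessing uniformly among
# four options per question: the upper tail of Binomial(100, 0.25).
p = sum(comb(100, k) * 0.25**k * 0.75**(100 - k) for k in range(95, 101))
print(p)  # astronomically small
```

The result is on the order of 10^-50, which is the kind of improbability the “giant cosmic coincidence” hypothesis would have to swallow.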
*You did say, of course, that you weren’t making any claims or postulates to that effect. But it certainly seems to me that you’re not completely agnostic on the issue—after all, your initial claim was “it’s models all the way down”, and you’ve fairly consistently stuck to defending that claim throughout not just this thread, but your entire tenure on LW. So I think it’s fair to treat you as holding that position, at least for the sake of a discussion like this.
Sadly, I don’t think we are converging at all.
Yes, definitely.
I don’t see why it would be. Just because one is able to march forward doesn’t mean that there is a destination. There are many possible alternatives. One is that we will keep making more accurate models (in the sense of making more detailed confirmed predictions in more areas) without ever ending anywhere. Another is that we will stall in our predictive abilities and stop making measurable progress, get stuck in a swamp, so to speak. This could happen, for example, if the computational power required to make better predictions grows exponentially with accuracy. Yet another alternative is that the act of making a better model actually creates new observations (in your language, changes the laws of the universe). After all, if you believe that we are agents embedded in the universe, then our actions change the universe, and who is to say that at some point they won’t change even what we think are the fundamental laws. There is an amusing novel about the universe protecting itself from overly inquisitive humans: https://en.wikipedia.org/wiki/Definitely_Maybe_(novel)
I don’t believe I have said anything of the sort. Of course we are able to build models. Without predictability, life, let alone consciousness, would be impossible, and that was one of my original statements. I don’t know what it is I said that gave you the impression that abandoning the concept of objective reality means we ought to lose predictability in any way.
Again:
I don’t postulate it. You postulate that there is something at the bottom. I’m simply saying that there is no need for this postulate, and, given what we have seen so far, every prediction of absolute knowledge in a given area has turned out to be wrong. So, odds are, whether there is something at the bottom or not, at this point the postulate is harmful rather than useful, and is wholly unnecessary. Our current experience suggests that it is all models, and if this ever changes, that would be a surprise.
That’s all.