a probability distribution over… something, I am not sure what in your case, if not an external reality.
I confess to being quite confused by this statement. Probability distributions can be constructed without making any reference to an “external reality”; perhaps the purest example would simply be some kind of prior over different input sequences. At this point, I suspect you and I may be taking the phrase “external reality” to mean very different things—so if you don’t mind, could I ask you to rephrase the quoted statement after Tabooing “external reality” and all synonyms?
EDIT: I suppose if I’m going to ask you to Taboo “external reality”, I may as well do the same thing for “cosmic coincidence”, just to try and help bridge the gap more quickly. The original statement (for reference):
There is no external reality, and our observations are only structured due to a giant cosmic coincidence.
And here is the Tabooed version (which is, as expected, much longer):
Although there is a model in our hypothesis space with an excellent compression ratio on our past observations, we should not expect this model to continue performing well on future observations. That is, we should not expect there to be a model in our hypothesis space that outperforms the max-entropy distribution (which assigns equal probability to all possible future observation sequences), and although we currently have a model that appears to be significantly outperforming the max-entropy distribution, this is merely an artifact of our finite dataset, which we may safely expect to disappear shortly.
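To make the contrast concrete, here is a minimal sketch comparing a simple learned model against the max-entropy baseline on a binary observation sequence. The particular sequence and the Laplace-smoothed frequency model are my own illustrative assumptions, not anything from the discussion; the point is only that any model assigning non-uniform probabilities successfully "compresses" the data whenever it beats the uniform baseline's log-loss:

```python
import math

# A toy observation history: a biased process that emits '1' most of the time.
observations = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]

def max_entropy_logloss(obs):
    # The max-entropy model assigns probability 0.5 to every bit,
    # regardless of history: exactly 1 bit of log-loss per observation.
    return -sum(math.log2(0.5) for _ in obs)

def learned_logloss(obs):
    # A simple learned model: predict each bit using the empirical
    # frequency of bits seen so far (Laplace-smoothed to avoid zeros).
    total = 0.0
    ones, n = 1, 2  # prior: one imagined '1' and one imagined '0'
    for bit in obs:
        p_one = ones / n
        p = p_one if bit == 1 else 1 - p_one
        total += -math.log2(p)
        ones += bit
        n += 1
    return total

print(max_entropy_logloss(observations))  # 10.0 bits: no compression at all
print(learned_logloss(observations))      # fewer bits: the model compresses
```

The "coincidence hypothesis" in the Tabooed statement amounts to claiming that a gap like this one, observed on past data, should be expected to vanish on future data.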
Taken literally, the “coincidence hypothesis” predicts that our observations ought to dissolve into a mess of random chaos, which as far as I can tell is not happening. To me, this suffices to establish the (probable) existence of some kind of fixed reality.
Thank you for rephrasing. Let me try my version. Notice how it doesn’t assume anything about probabilities of coincidences, as I don’t see those contributing to better predictions.
A certain set of past inputs has proven fruitful for constructing models that reasonably accurately predict similar sets of future inputs. Some of these models cover an especially wide range of input sets. This seemingly near-universal applicability of some models makes it tempting to privilege such a set of models over other, more narrowly applicable ones, and to call this set the source of all inputs we can possibly receive, a “reality”.
In other words, sometimes observations can be used to make good predictions, for a time. Then we assume that these predictions have a single source, the external reality. I guess I don’t get your point about needing to regress to unpredictability without postulating that reality thing.
(Okay, I’ve been meaning to get back to you on this for a while, but for some reason haven’t until now.)
It seems, based on what you’re saying, that you’re taking “reality” to mean some preferred set of models. If so, then I think I was correct that you and I were using the same term to refer to different concepts. I still have some questions for you regarding your position on “reality” as you understand the term, but I think it may be better to defer those until after I give a basic rundown of my position.
Essentially, my belief in an external reality, if we phrase it in the same terms we’ve been using (namely, the language of models and predictions), can be summarized as the belief that there is some (reachable) model within our hypothesis space that can perfectly predict further inputs. This can be further repackaged into an empirical prediction: I expect that (barring an existential catastrophe that erases us entirely) there will eventually come a point when we have the “full picture” of physics, such that no further experiments we perform will ever produce a result we find surprising. If we arrive at such a model, I would be comfortable referring to that model as “true”, and the phenomena it describes as “reality”.
Initially, I took you to be asserting the negation of the above statement—namely, that we will never stop being surprised by the universe, and that our models, though they might asymptotically approach a rate of 100% predictive success, will never quite get there. It is this claim that I find implausible, since it seems to imply that there is no model in our hypothesis space capable of predicting further inputs with 100% accuracy—but if that is the case, why do we currently have a model with >99% predictive accuracy? Is the success of this model a mere coincidence? It must be, since (by assumption) there is no model actually capable of describing the universe. This is what I was gesturing at with the “coincidence” hypothesis I kept mentioning.
Now, perhaps you actually do hold the position described in the above paragraph. (If you do, please let me know.) But based on what you wrote, it doesn’t seem necessary for me to assume that you do. Rather, you seem to be saying something along the lines of, “It may be tempting to take our current set of models as describing how reality ultimately is, but in fact we have no way of knowing this for sure, so it’s best not to assume anything.”
If that’s all you’re saying, it doesn’t necessarily conflict with my view (although I’d suggest that “reality doesn’t exist” is a rather poor way to go about expressing this sentiment). Nonetheless, if I’m correct about your position, then I’m curious as to what you think it’s useful for? Presumably it doesn’t help make any predictions (almost by definition), so I assume you’d say it’s useful for dissolving certain kinds of confusion. Any examples, if so?
It seems, based on what you’re saying, that you’re taking “reality” to mean some preferred set of models.
Depending on the meaning of the word preferred. I tend to use “useful” instead.
my belief in an external reality, if we phrase it in the same terms we’ve been using (namely, the language of models and predictions), can be summarized as the belief that there is some (reachable) model within our hypothesis space that can perfectly predict further inputs.
It’s a common belief, but it appears to me quite unfounded, since it hasn’t happened in millennia of trying. So, a direct observation speaks against this model.
I expect that (barring an existential catastrophe that erases us entirely) there will eventually come a point when we have the “full picture” of physics, such that no experiment we perform will produce a result we find surprising.
It’s another common belief, though separate from the belief in reality. It is a belief that this reality is efficiently knowable, a bold prediction that is not supported by evidence and has hints to the contrary from complexity theory.
If we arrive at such a model, I would be comfortable referring to that model as “true”, and the phenomena it describes as “reality”.
Yes, in this highly hypothetical case I would agree.
Initially, I took you to be asserting the negation of the above statement—namely, that we will never stop being surprised by the universe, and that our models, though they might asymptotically approach a rate of 100% predictive success, will never quite get there.
I make no claims one way or the other. We tend to get better at predicting observations in certain limited areas, though it tends to come at a cost. In high-energy physics, progress has slowed to a standstill; no interesting observations have been successfully predicted since the last millennium. General Relativity plus the Standard Model of particle physics have stood unchanged and unchallenged for decades, the magic numbers they require remaining unexplained since the Higgs mass was predicted long ago. While this suggests that, yes, we will probably never stop being surprised by the universe (or rather, by observations), I make no such claims.
It is this claim that I find implausible, since it seems to imply that there is no model in our hypothesis space capable of predicting further inputs with 100% accuracy—but if that is the case, why do we currently have a model with >99% predictive accuracy?
Yes, we do have a good handle on many isolated sets of observations, though what you mean by 99% is not clear to me. Similarly, I don’t know what you mean by 100% accuracy here. I can imagine that in some limited areas 100% accuracy could be achievable, though we often get surprised even there. Say, in math, the Hilbert program had a surprising twist. Feel free to give examples of 100% predictability, and we can discuss them. I find this model (of no universal perfect predictability) very plausible and confirmed by observations. I am still unsure what you mean by coincidence here. The dictionary defines it as “A remarkable concurrence of events or circumstances without apparent causal connection,” and that opens a whole new can of worms about what “apparent” and “causal” mean in the situation we are describing, and we will soon be back to a circular argument of implying some underlying reality to explain why we need to postulate reality.
Now, perhaps you actually do hold the position described in the above paragraph. (If you do, please let me know.) But based on what you wrote, it doesn’t seem necessary for me to assume that you do. Rather, you seem to be saying something along the lines of, “It may be tempting to take our current set of models as describing how reality ultimately is, but in fact we have no way of knowing this for sure, so it’s best not to assume anything.”
I don’t disagree with the quoted part, it’s a decent description.
If that’s all you’re saying, it doesn’t necessarily conflict with my view (although I’d suggest that “reality doesn’t exist” is a rather poor way to go about expressing this sentiment). Nonetheless, if I’m correct about your position, then I’m curious as to what you think it’s useful for? Presumably it doesn’t help make any predictions (almost by definition), so I assume you’d say it’s useful for dissolving certain kinds of confusion. Any examples, if so?
“reality doesn’t exist” was not my original statement; it was “models all the way down”, a succinct way to express the current state of knowledge, where all we get is observations and layers of models based on them predicting future observations. It is useful for avoiding going astray with questions about the existence or non-existence of something, like numbers, the multiverse, or qualia. If you stick to models, these questions are dissolved as meaningless (not useful for predicting future observations), just like the question of counting angels on the head of a pin. Tegmark Level X, the hard problem of consciousness, MWI vs Copenhagen: none of these are worth arguing over until and unless you suggest something that can be potentially observable.
It’s a common belief, but it appears to me quite unfounded, since it hasn’t happened in millennia of trying. So, a direct observation speaks against this model.
...
It’s another common belief, though separate from the belief in reality. It is a belief that this reality is efficiently knowable, a bold prediction that is not supported by evidence and has hints to the contrary from complexity theory.
...
General Relativity plus the Standard Model of particle physics have stood unchanged and unchallenged for decades, the magic numbers they require remaining unexplained since the Higgs mass was predicted long ago. While this suggests that, yes, we will probably never stop being surprised by the universe (or rather, by observations), I make no such claims.
I think at this stage we have finally hit upon a point of concrete disagreement. If I’m interpreting you correctly, you seem to be suggesting that because humans have not yet converged on a “Theory of Everything” after millennia of trying, this is evidence against the existence of such a theory.
It seems to me, on the other hand, that our theories have steadily improved over those millennia (in terms of objectively verifiable metrics like their ability to predict the results of increasingly esoteric experiments), and that this is evidence in favor of an eventual theory of everything. That we haven’t converged on such a theory yet is simply a consequence, in my view, of the fact that the correct theory is in some sense hard to find. But to postulate that no such theory exists is, I think, not only unsupported by the evidence, but actually contradicted by it—unless you’re interpreting the state of scientific progress quite differently than I am.*
That’s the argument from empirical evidence, which (hopefully) allows for a more productive disagreement than the relatively abstract subject matter we’ve discussed so far. However, I think one of those abstract subjects still deserves some attention—in particular, you expressed further confusion about my use of the word “coincidence”:
I am still unsure what you mean by coincidence here. The dictionary defines it as “A remarkable concurrence of events or circumstances without apparent causal connection,” and that opens a whole new can of worms about what “apparent” and “causal” mean in the situation we are describing, and we will soon be back to a circular argument of implying some underlying reality to explain why we need to postulate reality.
I had previously provided a Tabooed version of my statement, but perhaps even that was insufficiently clear. (If so, I apologize.) This time, instead of attempting to make my statement even more abstract, I’ll try taking a different tack and making things more concrete:
I don’t think that, if our observations really were impossible to model completely accurately, we would be able to achieve the level of predictive success we have. The fact that we have managed to achieve some level of predictive accuracy (not 100%, but some!) strongly suggests to me that our observations are not impossible to model—and I say this for a very simple reason:
How can it be possible to achieve even partial accuracy at predicting something that is purportedly impossible to model? We can’t have done it by actually modeling the thing, of course, because by hypothesis the thing cannot be modeled. So our seeming success at predicting the thing must not actually be due to any kind of successful modeling of said thing. Then how is it that our model is producing seemingly accurate predictions? It seems as though we are in a similar position to a lazy student who, upon being presented with a test they didn’t study for, is forced to guess the right answers—except that in our case, the student somehow gets lucky enough to choose the correct answer every time, despite the fact that they are merely guessing, rather than working out the answers the way they should.
I think that the word “coincidence” is a decent way of describing the student’s situation in this case, even if it doesn’t fully accord with your dictionary’s definition (after all, whoever said the dictionary editors have the sole power to determine a word’s usage?)—and analogously, our model of the thing must also only be making correct predictions by coincidence, since we’ve ruled out the possibility, a priori, that it might actually be correctly modeling the way the thing works.
I find it implausible that our models are actually behaving this way with respect to the “thing”/the universe, in precisely the same way I would find it implausible that a student who scored 95% on a test had simply guessed on all of the questions. I hope that helps clarify what I meant by “coincidence” in this context.
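The implausibility of the guessing student can be made quantitative with a quick back-of-the-envelope calculation. The test size and the four-choice format below are my own illustrative assumptions; the binomial arithmetic is the point:

```python
from math import comb

def p_score_at_least(n_questions, n_correct, p_guess):
    """Probability that pure guessing gets at least n_correct of n_questions
    right, when each guess succeeds independently with probability p_guess."""
    return sum(
        comb(n_questions, k) * p_guess**k * (1 - p_guess)**(n_questions - k)
        for k in range(n_correct, n_questions + 1)
    )

# A 20-question test with four answer choices per question (p_guess = 0.25).
# Scoring 95% means getting at least 19 of the 20 questions right.
print(p_score_at_least(20, 19, 0.25))  # roughly 5.5e-11
```

A probability on the order of one in ten billion is the sense in which calling the student's success "coincidence" is a live hypothesis but an absurdly improbable one.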
*You did say, of course, that you weren’t making any claims or postulates to that effect. But it certainly seems to me that you’re not completely agnostic on the issue—after all, your initial claim was “it’s models all the way down”, and you’ve fairly consistently stuck to defending that claim throughout not just this thread, but your entire tenure on LW. So I think it’s fair to treat you as holding that position, at least for the sake of a discussion like this.
It seems to me, on the other hand, that our theories have steadily improved over those millennia (in terms of objectively verifiable metrics like their ability to predict the results of increasingly esoteric experiments)
Yes, definitely.
and that this is evidence in favor of an eventual theory of everything.
I don’t see why it would be. Just because one is able to march forward doesn’t mean that there is a destination. There are many possible alternatives. One is that we will keep making more accurate models (in the sense of making more detailed confirmed predictions in more areas) without ever ending anywhere. Another is that we will stall in our predictive abilities and stop making measurable progress, getting stuck in a swamp, so to speak. This could happen, for example, if the computational power required to make better predictions grows exponentially with accuracy. Yet another alternative is that the act of making a better model actually creates new observations (in your language, changes the laws of the universe). After all, if you believe that we are agents embedded in the universe, then our actions change the universe, and who is to say that at some point they won’t change even what we think are the fundamental laws? There is an amusing novel about the universe protecting itself from overly inquisitive humans: https://en.wikipedia.org/wiki/Definitely_Maybe_(novel)
How can it be possible to achieve even partial accuracy at predicting something that is purportedly impossible to model?
I don’t believe I have said anything of the sort. Of course we are able to build models. Without predictability, life, let alone consciousness, would be impossible, and that was one of my original statements. I don’t know what it is I said that gave you the impression that abandoning the concept of objective reality means we ought to lose predictability in any way.
Again:
But to postulate that no such theory exists is, I think, not only unsupported by the evidence, but actually contradicted by it—unless you’re interpreting the state of scientific progress quite differently than I am.*
I don’t postulate it. You postulate that there is something at the bottom. I’m simply saying that there is no need for this postulate, and, given what we have seen so far, every prediction of absolute knowledge in a given area has turned out to be wrong. So, odds are, whether or not there is something at the bottom, at this point this postulate is harmful, rather than useful, and is wholly unnecessary. Our current experience suggests that it is all models, and if this ever changes, that would be a surprise.
Sadly, I don’t think we are converging at all.
That’s all.