If you take part in many successive experiments with observer-splitting, you will observe some limiting frequencies.
No. Different instances of the agent that processed different sequences of observations will observe different limiting frequencies. Some will even observe secret messages from Lords of the Matrix encoded in Morse.
And once again, I cannot understand whether your comment expresses some deep idea or whether you’re missing some obvious point. Flipping a coin is also observer-splitting (in a sense), and we do observe something pretty damn similar to “limiting frequencies” instead of the utter chaos that you seem to predict. Yeah, it’s true that different instances of you will see different sequences of heads and tails. But it’s not the whole truth.
An observer-splitting setup that did not give rise to subjective limiting frequencies would be something quite new under the sun. I have no idea if it’s even possible, yet you sound so certain...
Observing certain frequencies is probable and anticipated. Normative probability is given by the prior, and normative anticipation is calculated from the prior, possibly along the lines of what I described here. The resulting probabilities explain the observations we’re likely to see in the trivial sense of being the theories that hold these observations probable. It is an example of a circular justification, an answer to a heuristic “why” question that short-circuits in this particular case, where you ask not about a phenomenon with a non-trivial definition, but about the whole of your experience.
I think you’ll agree that there are other versions of yourself that observed all the chaos allowed by the laws of physics. In what sense are you special compared to them? What is the regularity that wants explaining? You’re much, much more probable, and hence more relevant for decision-making heuristics. You remember expecting normality and not chaos, and remember having that expectation met. That expectation was formed under the same considerations that define the corresponding past experiences as probable, even if that probability is logically non-transparent in mere psychological expectation and becomes apparent mostly in retrospect and on reflection. But there are other instances of yourself out there, unimportant in their measure, that have had some strange experiences not explained by their normative anticipation.
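The claim that chaotic instances are “unimportant in their measure” can be made concrete with a toy calculation (a hypothetical `chaotic_measure` helper; fair, independent coin flips assumed). With few flips, branches that see a badly skewed frequency carry substantial total weight; as the number of flips grows, their total weight shrinks exponentially, which is why almost all of the measure ends up in branches that observe limiting frequencies near 1/2:

```python
from math import comb

# Toy model: an observer splits on each of n fair coin flips, giving 2**n
# branches, each of weight 2**-n. Return the total weight of branches whose
# observed frequency of heads deviates from 1/2 by more than eps.
def chaotic_measure(n, eps=0.1):
    return sum(comb(n, k) / 2**n
               for k in range(n + 1)
               if abs(k / n - 0.5) > eps)

print(chaotic_measure(10))    # 0.34375: with only 10 flips, skew is common
print(chaotic_measure(1000))  # vanishingly small: nearly all measure sees ~50% heads
```

This is just the binomial tail bound in miniature; the branches seeing Morse-coded messages exist, but their combined weight is negligible.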
Your explanation still doesn’t work for me, I’m afraid.
Do you mean “prior” as part of my mind’s software, or “prior” as something ethereal and universal? If the former, how can my tiny brain have beliefs about all elementary particles in the universe, why did evolution build such a thing if robots using ordinary software can survive just fine, and where should I tweak my mind if I want to win the lottery? If the latter, what makes you believe that there is such a prior, and isn’t this “measure” just reality-fluid by another name, which is a well-known antipattern? Or is there some third alternative that I missed?
The disparity between the level of detail in reality/prior and the imprecision and mutability of psychological anticipation was an open problem for the attack on this problem that I made in autumn (and discussed previously here).
This problem is solved by identifying the prior (the notion of reality) not with the explicit data given by psychological anticipation, but with normative anticipation. That is, reality is explained as that which we should expect, where the shouldness of expectation is not a line from the Litany of Tarski, suggesting how one ought to keep an accurate map of reality, but literally an explanation of what reality is.
The multi-level conceptual models that humans build are models of uncertainty, expressing logical uncertainty about the conclusions that should be drawn from past observations. There is only one level of reality, in the same sense in which there is only one mathematical structure behind the many axiomatic definitions that specify it. Reality is, in a sense, what a Bayesian superintelligence would conclude given the knowledge and observations that humans have. But as with morality, we don’t have that definition explicitly anywhere, and can only learn more and more detail; and as with morality, the notion is normative, so you can’t solve any problems by changing the question (“where should I tweak my mind if I want to win the lottery”).
A big question remaining is how we learn from observations, in what sense observations confer knowledge, and what distinguishes such knowledge from other kinds of knowledge. This requires facing some problems that UDT avoided by refusing to treat observations as knowledge.
I don’t understand how this is different from believing in reality-fluid. If it’s the same thing, I cannot accept that. If it’s different, could you explain how?
This is an explanation of reality in terms of the decision-theoretic heuristics we carry in our heads, as a notion similar to morality and platonic truth. This is of course a mere conceptual step; it doesn’t hand you much explanatory power, but I hope it can make reality a bit less mysterious. Like saying that a Boeing 747 is made out of atoms, without pointing out any specific details about its systems.
I don’t understand what exactly you refer to by reality-fluid, in what sense you see an analogy, and what problem that points out. The errors and confusions of evaluating one’s anticipation in practice have little bearing on how anticipation should be evaluated.