Why does anyone care about anthropics? It seems like a mess of tautologies and thought experiments that pays no rent in anticipated experiences.
An important thing to realize is that people working on anthropics are trying to come up with a precise inferential methodology. They’re not trying to draw conclusions about the state of the world, they’re trying to draw conclusions about how one should draw conclusions about the state of the world. Think of it as akin to Bayesianism. If someone read an introduction to Bayesian epistemology, and said “This is just a mess of tautologies (Bayes’ theorem) and thought experiments (Dutch book arguments) that pays no rent in anticipated experience. Why should I care?”, how would you respond? Presumably you’d tell them that they should care because understanding the Bayesian methodology helps people make sounder inferences about the world, even if it doesn’t predict specific experiences. Understanding anthropics does the same thing (except perhaps not as ubiquitously).
So the point of understanding anthropics is not so much to directly predict experiences but to appreciate how exactly one should update on certain pieces of evidence. It’s like understanding any other selection effect—in order to properly interpret the significance of pieces of evidence you collect, you need to have a proper understanding of the tools you use to collect them. To use Eddington’s much-cited example, if your net can’t catch fish smaller than six inches, then the fact that you haven’t caught any such fish doesn’t tell you whether the lake you’re fishing in contains any such fish. Understanding the limitations of your data-gathering mechanism prevents you from making bad updates. And if the particular limitation you’re considering is the fact that observations can only be made in regimes accessible to observers, then you’re engaged in anthropic reasoning.
Paul Dirac came up with a pretty revisionary cosmological theory based on several apparent “large number coincidences”—important large (and some small) numbers in physics that all seem to be approximate integer powers of the Hubble age of the universe. He argued that it is implausible that we just happen to find ourselves at a time when these simple relationships hold, so they must be law-like. Based on this he concluded that certain physical constants aren’t really constant; they change as the universe ages. R. H. Dicke showed (or purported to show) that at least some of these coincidences can be explained when one realizes that observers can only exist during a certain temporal window in the universe’s existence, and that the timing of this window is related to a number of other physical constants (since it depends on facts about the formation and destruction of stars, etc.). If it’s true that observers can only exist in an environment where these large number relationships hold, then it’s a mistake to update our beliefs about natural laws based on these relationships. So that’s an example of how understanding the anthropic selection effect might save us (and not just us, but also superhumans like Dirac) from bad updates.
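For concreteness, here is a rough back-of-the-envelope sketch of one version of the coincidence (my own illustration, not Dirac’s exact formulation, and the constants are only approximate): the ratio of the electric to the gravitational force between a proton and an electron, and the age of the universe measured in atomic light-crossing times, both land in the 10^39 to 10^40 range.

```python
# Rough illustration of one of Dirac's "large number coincidences".
# Constants are approximate; only the orders of magnitude matter here.
k_e = 8.988e9      # Coulomb constant, N m^2 / C^2
e   = 1.602e-19    # elementary charge, C
G   = 6.674e-11    # gravitational constant, N m^2 / kg^2
m_p = 1.673e-27    # proton mass, kg
m_e = 9.109e-31    # electron mass, kg
c   = 2.998e8      # speed of light, m/s
t_universe = 13.8e9 * 3.156e7   # age of the universe in seconds

# Ratio of electric to gravitational attraction between a proton and an electron
force_ratio = (k_e * e**2) / (G * m_p * m_e)

# Age of the universe in units of the light-crossing time of the classical electron radius
r_e = k_e * e**2 / (m_e * c**2)
age_in_atomic_units = t_universe / (r_e / c)

print(f"electric/gravitational force ratio ~ {force_ratio:.1e}")         # ~ 2e39
print(f"age of universe in atomic units    ~ {age_in_atomic_units:.1e}") # ~ 5e40
```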
So much for anthropics in general, but what about the esoteric particulars—SSA, SIA and all that. Well, here’s the basic thought: Dirac’s initial (non-anthropic) move to his new cosmological theory was motivated by the belief that it is extraordinarily unlikely that the large number coincidences are purely due to chance, that we just happen to be around at a time when they hold. This kind of argument has a venerable history in physics (and other sciences, I’m sure) -- if your theory classifies your observed evidence as highly atypical, that’s a significant strike against the theory. Anthropic reasoning like Dicke’s adds a wrinkle—our theory is allowed to classify evidence as atypical, as long as it is not atypical for observers. In other words, even if the theory says phenomenon X occurs very rarely in our universe, an observation of phenomenon X doesn’t count against it, as long as the theory also says (based on good reason, not ad hoc stipulation) that observers can only exist in those few parts of the universe where phenomenon X occurs. Atypicality is allowed as long as it is correlated with the presence of observers.
But only that much atypicality is allowed. If your theory posits significant atypicality that goes beyond what selection effects can explain, then you’re in trouble. This is the insight that SSA, SIA, etc. seek to precisify. They are basically attempts to update the Diracian “no atypicality” strategy to allow for the kind of atypicality that anthropic reasoning explains, but no more atypicality than that. Perhaps they are misguided attempts for various reasons, but the search for a mathematical codification of the “no atypicality” move is important, I think, because the move gets used imprecisely all the time anyway (without explicit invocation, most of the time) and it gets used without regard for important observation selection effects.
I read this as: Rather than judging our theory based on p(X), judge it based on p(X | exists(observers)). Am I interpreting you right?
It’s a bit more complicated than that, I think. We’re usually dealing with a situation where p(X occurs somewhere | T) is high (where T is the theory). However, the probability of X occurring in a particular human-scale space-time region (or wave-function branch or global time-slice or universe or...) given T is very low. This is what I mean by X being rare. An example might be life-supporting planets or (in a multiversal context) fundamental constants apparently fine-tuned for life.
So the naïve view might be that an observation of X disconfirms the theory, based on the Copernican assumption that there is nothing very special about our place in the universe, whereas the theory seems to suggest that our place is special—it’s one of those rare places where we can see X.
But this disconfirmation only works if you assume that the space-time regions (or branches or universes or...) inhabited by observers are uncorrelated with those in which X occurs. If our theory tells us that those regions are highly correlated—if p(X occurs in region Y | T & observers exist in region Y) >> p(X occurs in region Y | T) -- then our observation of X doesn’t run afoul of the Copernican assumption, or at least a reasonably modified version of the Copernican assumption which allows for specialness only in so far as that specialness is required for the existence of observers.
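To put toy numbers on that (my own illustration, nothing to do with Dicke’s actual case): suppose T says only one region in a million contains X, but also that observers can only arise in X-regions. Then the likelihood that matters for our update is the observer-conditioned one, not the raw frequency.

```python
# Toy numbers for the observer-conditioned likelihood (all values invented).
p_X_given_T = 1e-6        # under T, fraction of regions containing X
p_obs_given_X = 1e-3      # under T, chance an X-region actually hosts observers (assumed)
p_obs_given_not_X = 0.0   # under T, observers cannot exist without X

# Naive "Copernican" likelihood of our datum (we see X around here),
# treating our region as if it were a randomly chosen region:
naive = p_X_given_T

# Likelihood after conditioning on the fact that our region contains observers:
p_obs = p_obs_given_X * p_X_given_T + p_obs_given_not_X * (1 - p_X_given_T)
anthropic = (p_obs_given_X * p_X_given_T) / p_obs

print(naive)      # 1e-06 -> looks like strong evidence against T
print(anthropic)  # 1.0   -> seeing X is exactly what T predicts for any observer
```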
If you taboo “anthropics” and replace it with “observation selection effects”, then there are all sorts of practical consequences. See the start of Nick Bostrom’s book for some examples.
The other big reason for caring is the “Doomsday argument” and the fact that all attempts to refute it have so far failed. Almost everyone who’s heard of the argument thinks there’s something trivially wrong with it, but all the obvious objections can be dealt with (see later in Bostrom’s book, for example). Further, alternative approaches to anthropics (such as the “self-indication assumption”), or attempts to completely bypass anthropics (such as “full non-indexical conditioning”), have been developed to avoid the Doomsday conclusion. But very surprisingly, they end up reproducing it. See Katja Grace’s thesis.
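For anyone who hasn’t seen it spelled out, here is a minimal numerical sketch of the SSA-style Doomsday update, with toy numbers of my own choosing (not Bostrom’s or Grace’s): treat your birth rank as a random draw from all humans who will ever live and compare two hypotheses about the total.

```python
# Minimal sketch of the SSA-style Doomsday update (toy numbers, assumed priors).
N_small = 2e11       # "doom soon": about 200 billion humans ever
N_large = 2e14       # "doom late": about 200 trillion humans ever
prior_small = 0.5    # assumed 50/50 prior over the two hypotheses
prior_large = 0.5

rank = 1e11          # roughly your birth rank: ~100 billion humans born so far

# Under SSA you are treated as a random sample from all humans who ever live,
# so the likelihood of having any particular rank (<= N) is 1/N.
like_small = 1 / N_small if rank <= N_small else 0.0
like_large = 1 / N_large if rank <= N_large else 0.0

posterior_small = (like_small * prior_small) / (
    like_small * prior_small + like_large * prior_large)

print(f"P(doom soon | my birth rank) = {posterior_small:.3f}")   # ~0.999
```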
Jaan Tallinn’s attempt: Why Now? A Quest in Metaphysics. The “Doomsday argument” is far from certain.
Given the (observed) information that you are a 21st century human, the argument predicts that there will be a limited number of those. Well, that hardly seems news—our descendants will evolve into something different soon enough. That’s not much of a “Doomsday”.
I described some problems with Tallinn’s attempt here—under that analysis, we ought to find ourselves a fraction of a second pre-singularity, rather than years or decades pre-singularity.
Also, any analysis which predicts we are in a simulation runs into its own version of doomsday: unless there are strictly infinite computational resources, our own simulation is very likely to come to an end before we get to run simulations ourselves. (Think of simulations and sims-within-sims as like a branching tree; in a finite tree, almost all civilizations will be in one of the leaves, since they greatly outnumber the interior nodes.)
We seem pretty damn close to me! A decade or so is not very long.
In a binary tree (for example), the internal nodes and the leaves are roughly equal in number.
Remember that in Tallinn’s analysis, post-singularity civilizations run a colossal number of pre-singularity simulations, with the number growing exponentially up to the singularity (basically they want to explore lots of alternate histories, and these grow exponentially). I suppose Tallinn’s model could be adjusted so that they only explore “branch-points” in their simulations every decade or so, but that is quite arbitrary and implausible. If the simulations branch every year, we should expect to be in the last year; if they branch every second, we should be in the last second.
On your second point, if each post-singularity civilization runs an average of m simulations, then the chance of being in an internal node (a civilization which eventually runs sims) rather than a leaf (a simulation which never gets to run its own sims in turn) is about 1/m. The binary tree corresponds to m=2, but why would a civilization run only 2 sims, when it is capable of running vastly more? In both Tallinn’s and Bostrom’s analysis, m is very much bigger than 2.
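A quick sanity check of that 1/m figure, assuming (for simplicity) a full m-ary tree of simulations d levels deep:

```python
# Fraction of nodes that are internal (civilizations that get to run sims)
# in a full m-ary tree of depth d; leaves are sims that never simulate.
def internal_fraction(m: int, d: int) -> float:
    total = (m ** (d + 1) - 1) // (m - 1)   # total nodes in a full m-ary tree
    leaves = m ** d                         # nodes at the deepest level
    return (total - leaves) / total

print(internal_fraction(2, 20))     # ~0.5,   the binary case: internal roughly equals leaves
print(internal_fraction(1000, 3))   # ~0.001, roughly 1/m once m is large
```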
More likely, there is a range of historical “tipping points” that they might want to explore, perhaps including the invention of language and the origin of humans.
Surely the chance of being in a simulated world depends somewhat on its size, and so does the chance of a sim running simulations of its own: a large world might have a high chance of running simulations, while a small world might have a low chance. Averaging over worlds of such very different sizes seems pretty useless; but even so, any average number of simulations run per world would probably be low, since so many sims would be leaf nodes and so would run no simulations themselves. Leaves might be more numerous, but they will also be smaller, and less likely to contain many observers.
What substrate are they running these simulations on?
I had another look at Tallinn’s presentation, and it seems he is rather vague on this… rather difficult to know what computing designs super-intelligences would come up with! However, presumably they would use quantum computers to maximize the number of simulations they could create, which is how they could get branch-points every simulated second (or even more rapidly). Bostrom’s original simulation argument provides some lower bounds—and references—on what could be done using just classical computation.
Well. The claims that it’s relevant to our current information state have been refuted pretty well.
Citation needed (please link to a refutation).
I’m not aware of any really good treatments. I can link to myself claiming that I’m right, though. :D
I think there may be a selection effect—once the doomsday argument seems not very exciting, you’re less likely to talk about it.
The doomsday argument is itself anthropic thinking of the most useless sort.
Citation needed (please link to a refutation).
I don’t need a refutation. The doomsday argument doesn’t affect anything I can or will do. I simply don’t care about it. It’s like a claim that I will probably be eaten at some point in the next 100 years by a random giant tiger.
Take Bayes’ theorem: P(H|O) = P(O|H) × P(H) / P(O). If H is a hypothesis and O is an observation, P(O|H) means “what is the probability of making that observation if the hypothesis is true?”
If a hypothesis has as a consequence “nobody can observe O” (say, because no humans can exist), then P(O|H) is 0 (actually, it’s about the probability that you didn’t get the consequence right). This means that, once you’ve made the observation, you will probably decide that the hypothesis is unlikely. However, if you don’t notice that consequence, you might decide that P(O|H) is large, and incorrectly assign a high probability to the hypothesis.
For a completely ridiculous example, imagine that there’s a deadly cat-flu epidemic; it gives 90% of cats that catch it a runny nose. Your cat’s nose becomes runny. You might be justified in thinking that your cat likely has cat-flu. However, if you know that in all cases the cat’s owner dies of the flu before the cat shows any symptoms, the conclusion would be the opposite. (Since, if it were the flu, you wouldn’t see the cat’s runny nose, because you’d be dead.) The same evidence, opposite effect.
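Putting made-up numbers on that example, to show the flip explicitly:

```python
# The cat-flu example with invented numbers.
p_flu = 0.01                 # prior probability the cat has cat-flu (made up)
p_runny_given_flu = 0.9      # 90% of flu cats get a runny nose
p_runny_given_no_flu = 0.05  # baseline rate of runny noses without flu (made up)

def posterior(p_observe_given_flu):
    # P(flu | I observe a runny nose), where "observe" requires being alive to look
    num = p_observe_given_flu * p_flu
    return num / (num + p_runny_given_no_flu * (1 - p_flu))

# Ignoring the selection effect: under flu you'd see the runny nose 90% of the time.
print(posterior(p_runny_given_flu))   # ~0.15, flu looks much more likely than before

# With the selection effect: if it were flu, you'd be dead before the symptoms appear,
# so the chance of you observing them is tiny (the small residual is "maybe I got
# the consequence wrong").
print(posterior(0.001))               # ~0.0002, flu is effectively ruled out
```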
Anthropics is kind of the same thing, except you’re mostly guessing about the flu.
The obvious application (to me) is figuring out how to make decisions once mind uploading is possible. This point is made, for example, in Scott Aaronson’s The Ghost in the Quantum Turing Machine. What do you anticipate experiencing if someone uploads your mind while you’re still conscious?
Anthropics also seems to me to be relevant to the question of how to do Bayesian updates using reference classes, a subject I’m still very confused about and which seems pretty fundamental. Sometimes we treat ourselves as randomly sampled from the population of all humans similar to us (e.g. when diagnosing the probability that we have a disease given that we have some symptoms) and sometimes we don’t (e.g. when rejecting the Doomsday argument, if that’s an argument we reject). Which cases are which?
Or even: deciding how much to care about experiencing pain during an operation if I’ll just forget about it afterwards. This has the flavor of an anthropics question to me.
Possible example of an anthropic idea paying rent in anticipated experiences: anthropic shadowing of intermittent observer-killing catastrophes of variable size.
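Here’s a toy simulation of what that shadow looks like (my own construction, parameters invented): big catastrophes wipe out observers, small ones don’t, so the record available to surviving observers systematically under-counts the big ones.

```python
import random

# Toy anthropic-shadow simulation (invented parameters). Each century there is a
# 10% chance of a small, survivable catastrophe and a 2% chance of a big one
# that kills all observers. Only histories with surviving observers get tallied.
random.seed(0)
P_SMALL, P_BIG, CENTURIES, TRIALS = 0.10, 0.02, 50, 100_000

surviving_histories = 0
small_seen = 0
for _ in range(TRIALS):
    small = 0
    big = 0
    for _ in range(CENTURIES):
        if random.random() < P_BIG:
            big += 1
        if random.random() < P_SMALL:
            small += 1
    if big == 0:            # observers are only around to look back if no big one hit
        surviving_histories += 1
        small_seen += small

print("small catastrophes per century, as recorded by survivors:",
      round(small_seen / (surviving_histories * CENTURIES), 3))  # ~0.10, near the true rate
print("big catastrophes per century, as recorded by survivors: 0.0")  # zero, by construction
print("true rate of big catastrophes per century:", P_BIG)            # 0.02, shadowed from view
```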
I’d add that the Doomsday argument in specific seems like it should be demolished by even the slightest evidence as to how long we have left.
There’s a story about anthropic reasoning being used to predict properties of the processes which produce carbon in stars, before these processes were known. (Apparently there’s some debate about whether or not this actually happened.)
My own view is that this is precisely correct and exactly why anthropics is interesting: we really should have a good, clear approach to it, and the fact that we don’t suggests there is still work to be done.
Not sure about anthropics, but we need decision theories that work correctly with copies, because we want to build AIs, and AIs can make copies of themselves.
This question has been bugging me for the last couple of years here. Clearly Eliezer believes in the power of anthropics, otherwise he would not bother with MWI as much, or with some of his other ideas, like the recent writeup about leverage. Some of the reasonably smart people out there discuss SSA and SIA. And the Doomsday argument. And don’t get me started on Boltzmann brains…
My current guess is that in fields where experimental testing is not readily available, people settle for what they can get. Maybe anthropics helps one pick a promising research direction, I suppose. Just trying (unsuccessfully) to steelman the idea.
I care about anthropics because from a few intuitive principles that I find interesting for partially unrelated reasons (mostly having to do with wanting to understand the nature of justification so as to build an AGI that can do the right thing) I conclude that I should expect monads (programs, processes; think algorithmic information theory) with the most decision-theoretic significance (an objective property because of assumed theistic panpsychism; think Neoplatonism or Berkeleyan idealism) to also have the most let’s-call-it-conscious-experience. So I expect to find myself as the most important decision process in the multiverse. Then at various moments the process that is “me” looks around and asks, “do my experiences in fact confirm that I am plausibly the most important agent-thingy in the multiverse?”, and if the answer is no, then I know something is wrong with at least one of my intuitive principles, and if the answer is yes, well then I’m probably psychotically narcissistic and that’s its own set of problems.
It tells you when to expect the end of the world.