The two are incompatible. Anthropic reasoning makes explicit use of first-person experience in the formulation of its questions, e.g. in the Sleeping Beauty problem: “what is the probability that now is the first awakening?” or “is today Monday?” The meaning of “now” and “today” is taken to be apparent; it rests on their immediacy to subjective experience, just as which person “I” am is inherently obvious from first-person experience. Denying first-person experience would make anthropic problems undefined.
Another example is the doomsday argument, which says that my birth rank, or the current generation’s birth rank, is evidence for doom-soon. Without a first-person experience, it would be unclear who “me” or “the current generation” refers to.
The two are unrelated. Illusionism is specifically about consciousness (or rather its absence), while anthropics is about particular types of conditional probabilities and does not require any reference to consciousness or its absence. Denying first-person experience does not make anthropic problems any more undefined than they already are.
One way to understand the anthropic debate is to see the competing positions as different ways of interpreting the indexicals (such as “I”, “now”, “today”, “our generation”, etc.) in probability calculations, and those interpretations are grounded in the first-person perspective. Furthermore, there is the looming question of what should be considered an observer, which lacks any logical indicator unless we bring in the concept of consciousness.
We can easily make the Sleeping Beauty problem more undefined, for example by asking “Is the day Monday?” Before attempting to answer, one would have to ask which day exactly we are talking about. Compare that with “Is today Monday?”: the latter is obviously better defined. Even though “now” and “today” pick out no physical feature, we inherently take the latter question to be clear, because we can imagine being in Beauty’s perspective as she wakes up during the experiment: “today” is the day most closely connected to that first-person experience.
So you’d say that it’s coherent to be an illusionist who rejects the Halfer position in the SBP?
Sure. Also coherent to be an illusionist who accepts the Halfer position in the SBP. It’s an underdetermined problem.
If I program a simulation of the SBP and run it under illusionist principles, aren’t the simulated Halfers going to inevitably win on average? After all, it’s a fair coin.
It depends upon how you score it, which is why both the original problem and various decision-problem variants are underdetermined.
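To make the scoring point concrete, here is a minimal simulation sketch (a toy setup of my own, using a Brier score as an assumed scoring rule; nothing about it is forced by the problem statement) showing that the same fair-coin simulation rewards different fixed answers depending on whether you score once per run or once per awakening:

```python
import random

def score_guess(guess, n_runs=100_000, seed=0):
    """Score a fixed probability guess for "the coin landed heads" under two
    rules: once per run and once per awakening (Brier score, lower is better)."""
    rng = random.Random(seed)
    per_run_total = 0.0
    per_awakening_total = 0.0
    awakenings = 0
    for _ in range(n_runs):
        heads = rng.random() < 0.5           # fair coin
        outcome = 1.0 if heads else 0.0
        wakings = 1 if heads else 2          # heads: Monday only; tails: Monday and Tuesday
        per_run_total += (guess - outcome) ** 2
        per_awakening_total += wakings * (guess - outcome) ** 2
        awakenings += wakings
    return per_run_total / n_runs, per_awakening_total / awakenings

for g in (1/2, 1/3):
    per_run, per_awakening = score_guess(g)
    print(f"guess={g:.3f}  per-run={per_run:.4f}  per-awakening={per_awakening:.4f}")
# Scored once per run, 1/2 minimizes the Brier score; scored once per
# awakening, 1/3 does. The simulation itself doesn't pick the rule.
```

The simulated Halfers “win” under one bookkeeping and the Thirders under the other, which is the sense in which the payout structure, not the physics, settles the answer.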
Can you explain what you mean by “underdetermined” in this context? How is there any ambiguity in resolving the payouts if the game is run as a third person simulation?
they’re perfectly compatible, they don’t even say anything about each other [edit: invalidated]. anthropics is just a question of what systems are likely. illusionism is a claim about whether systems have an ethereal self that they expose themselves to by acting; I am viciously agnostic about anything epiphenomenal like that. I would instead assert that all epiphenomenal confusions seem to me to be the confusion “why does [universe-aka-self] exist”, and then there’s a separate additional question of the surprise any highly efficient chemical processing system has at having information entering it, a rare thing made rarer still by the level of specificity and coherence we meat-piloting, skin-encased neural systems called humans seem to find occurring in our brains.

there’s no need to assert that we are separately existenceful and selfful from the walls, or the chair, or the energy burn in the screen displaying—they are also physical objects. their physical shapes don’t encode as much fact about the world around them, though; our senses are, at present, much better integrators of knowledge. and it is the knowledge that defines our agency as systems that encodes our moral worth. none of this requires separate privileged existence different from the environment around us; it is our access consciousness that makes us special, not our hard consciousness.
Try this for practice: reasoning purely objectively and physically, can you recreate the anthropic paradoxes, such as the Sleeping Beauty Problem?
That means without resorting to any particular first-person perspective, without using words such as “I”, “now”, or “here”, and without putting them in a unique logical position.
That sounds like a plausible theory. But if we reject that there is a separate first-person perspective, doesn’t that entail that we should be Halfers in the SBP? Not saying it’s wrong, but it does seem to me like illusionism/eliminativism has anthropic consequences.
hmm. it seems to me that the sleeping mechanism problem is missing a perspective—there are more types of question you could ask the sleeping mechanism that are of interest. I’d say the measure increased by waking is not able to make predictions about which universe it is; but that, given waking, the mechanism should estimate the average of the two universes’ wake counts, and assume it has 1.5 wakings (0.5 × 1 + 0.5 × 2) of causal impact on the environment around the awoken mechanism. In other words, it seems to me that the decision-relevant anthropic question is how many places a symmetric process exists; when inferring the properties of the universe around you, it is invalid to update about likely causal processes based on the fact that you exist; but on finding out you exist, you can update about where your actions are likely to impact, a different measure that does not allow making inferences about, e.g., universal constants.
if, for example, the sleeping beauty problem is run ten times, and each time the being wakes the waking is written to a log, then after the experiment there will be on average 1.5x as many log entries as there are samples. but the agent should still predict 50%, because the predictive accuracy score is a question of whether the bet the agent makes can be beaten by other knowledge. when the mechanism wakes, it should know it has more action weight in one world than the other, but that doesn’t allow it to update about what bet most accurately predicts the most recent sample. two thirds of the mechanism’s actions occur in one world, one third in the other, but the mechanism can’t use that knowledge to infer about the past.
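a toy tally of that bookkeeping (my own sketch; it just counts expected log entries over ten runs):

```python
import random

def tally_logs(runs_per_experiment=10, n_experiments=100_000, seed=1):
    """count how many wake-log entries a ten-run experiment produces on
    average, and what fraction of those entries sit in two-waking worlds."""
    rng = random.Random(seed)
    total_logs = 0
    two_waking_logs = 0
    for _ in range(n_experiments):
        for _ in range(runs_per_experiment):
            if rng.random() < 0.5:   # heads: one waking logged
                total_logs += 1
            else:                    # tails: two wakings logged
                total_logs += 2
                two_waking_logs += 2
    avg_logs = total_logs / n_experiments
    return avg_logs, two_waking_logs / total_logs

avg_logs, frac_two_waking = tally_logs()
print(f"average log entries per 10-sample experiment: {avg_logs:.2f}")  # ~15, i.e. 1.5x the samples
print(f"fraction of log entries in two-waking worlds: {frac_two_waking:.3f}")  # ~2/3
```

the coin in each sample is still 50/50; the extra log entries measure where the mechanism's actions land, not what the coin did.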
I get the sense that I might be missing something here. the thirder position makes intuitive sense on some level. but my intuition is that it’s conflating things. I’ve encountered the sleeping beauty problem before and something about it unsettles me—it feels like a confused question, and I might be wrong about this attempted deconfusion.
but this explanation matches my intuition that simulating a billion more copies of myself would be great, but not make me more likely to have existed.