Why do we spend so much time thinking about how to reason about problems in which
a) you know what’s going on while you’re not conscious, and
b) you take at face value information fed to you by a hostile entity?
Because it’s much simpler that way, and you need to be able to handle trivial cases before you can deal with more complicated ones.
Besides, what is hostile about making a million copies of you? I’d take getting knocked out for that, as long as the copies don’t all end up with brain damage from it.
Okay, fair point. It is indeed important to start from simple cases. I guess I didn’t say what I really meant there.
My real concern is this: posters are trying to work out the limits of, e.g., anthropic reasoning. Anthropic reasoning takes the form of, “I observe that I exist. Therefore, it follows that...”
But then to attack that problem, they posit scenarios of a completely different form: “I have been fed solid evidence from elsewhere that {x, y, and z} and then placed in {specific scenario}. Then I observe E. What should I infer?”
That does not generalize to anthropic reasoning: it’s just reasoning from arbitrarily selected premises.
I figured that wasn’t your real objection, but I guessed wrong about what it was.
I figured you were going for something like “you need to include sufficient information so that we know we’re not positing an impossible world”, which is a fair point, since, for example, at first glance Newcomb’s problem appears to violate causality.
Are you suggesting that we deal with more general problems where we know even less, or are you just saying that these problems aren’t even related to anthropic reasoning?
are you just saying that these problems aren’t even related to anthropic reasoning?

This. This is what I’m saying.
These posts I’m referring to start out with “Assume you’re in a situation where [...]. And you know that that’s the situation. Then what can you infer from evidence E?”
But once you do that, there’s nothing anthropic about it: it’s just an ordinary logical puzzle, unrelated to reasoning about what you can know from your own existence in this universe.
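To make that concrete, here’s a minimal sketch (Python, with made-up priors and likelihoods) of what these puzzles reduce to once the setup is taken as given: ordinary Bayesian conditioning on stipulated premises, with nothing indexical anywhere in it.

```python
# A minimal sketch of the "fed premises, then observe E" form.
# The numbers are invented for illustration; the point is only that
# the update is ordinary conditioning, not anything anthropic.

# Stipulated premises: two hypotheses with given prior probabilities.
prior = {"H1": 0.5, "H2": 0.5}

# Stipulated likelihoods: how probable the observation E is under each.
likelihood_of_E = {"H1": 0.9, "H2": 0.1}

# Observe E; update by Bayes' rule.
unnormalized = {h: prior[h] * likelihood_of_E[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior)  # {'H1': 0.9, 'H2': 0.1} -- inference from arbitrary givens
```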
Do you consider the original presumptuous philosopher problem to involve anthropic reasoning? What has to be left undefined for reasoning to count as anthropic?
Anthropic reasoning is any reasoning based on the fact that you (believe you) exist, and on the conditions necessary for you to reach that state, including suppositions about what those conditions are. It can be supplemented by observations of the world as it is.
Most of the problems here that purport to use anthropic reasoning, including the original presumptuous philosopher problem, are just reasoning from arbitrary givens, which doesn’t even generalize to anthropic reasoning. Each time, someone is able to point out a problem isomorphic to the one given but lacking any characteristically anthropic component to the reasoning.
Anthropic reasoning is simply not the same as “hey, what if someone did this to you, where these things had this frequency, what would you conclude upon seeing this?” That’s just a normal inference problem.
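For contrast, here’s a sketch of the update in the original presumptuous philosopher problem, assuming the philosopher reasons via something like the Self-Indication Assumption (weighting hypotheses by how many observers they contain). The observer counts are made up, but their trillion-to-one ratio matches Bostrom’s version. The structure is the same as the sketch above, except that the likelihood term now comes from nothing but the reasoner’s own existence; whether that step is legitimate is exactly what’s in dispute.

```python
# A sketch of the presumptuous philosopher's update, assuming an
# SIA-style rule: weight each cosmological theory by the number of
# observers it predicts, then renormalize. Counts are stylized; only
# the trillion-fold ratio matters.

prior = {"T1": 0.5, "T2": 0.5}        # equally supported by the physics
observers = {"T1": 1e12, "T2": 1e24}  # T2 predicts a trillion times more

# The only "evidence" is "I exist": P(T | I exist) is taken to be
# proportional to prior * number of observers under T.
unnormalized = {t: prior[t] * observers[t] for t in prior}
total = sum(unnormalized.values())
posterior = {t: p / total for t, p in unnormalized.items()}

print(posterior)  # T2 comes out about a trillion times more probable
```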
Just to show that I’m being reasonable, here is what I would consider a real case of anthropic reasoning.
“I notice that I exist. The noticer seems to be the same as that which exists. So whatever computational process generates my observations must either permit self-reflection, or else the thing I notice existing isn’t really the same thing that is having these thoughts.”
To me, that just indicates that anthropic reasoning is valid, or at least that what we’re calling anthropic reasoning is valid.
Well, that just means that you’re doing ordinary reasoning, of which anthropic reasoning is a subset. It does not follow that this (and topics like it) is anthropic reasoning. And no, you don’t get to define words however you like: the term “anthropic reasoning” is supposed to carve out a natural category in conceptspace, yet when you use it to mean “any reasoning from arbitrary premises”, you’re making the term less helpful.
the term “anthropic reasoning” is supposed to carve out a natural category in conceptspace

If it doesn’t carve out such a category, maybe that’s because it’s a malformed concept, not because we’re using it wrong. Off the top of my head, I see no reason why the existence of the observer should be a special data point that needs to be fed into the data processing system in a special way.
Strangely enough, that’s actually pretty close to what I believe—see my comment here.
So, despite all this arguing, we seem to have almost the same view!
Still, even granting that it’s a malformed concept, you need to stay as faithful as possible to what it purports to mean, or at least note that your example can be converted into a clearly non-anthropic one without loss of generality.
Fair enough!
That does not generalize to anthropic reasoning: it’s just reasoning from arbitrarily selected premises.

Which is interesting enough, so long as I only have to write trivial replies and not waste time writing up the trivial scenarios! (You make a good point.)