I don’t actually see the argument here, just assertions. The assertions are: there’s no reason why you are who you are; and, you shouldn’t regard yourself as a typical conscious being.
Well, you are who you are, because of the causes that made you; and you’re either a typical conscious being or an atypical conscious being. I don’t see the problem.
Do you believe you are a typical conscious being or an atypical conscious being? And does that belief follow from an argument or an assertion?
Let’s try this question out on some other examples of conscious beings first.
Walking this morning, I noticed a small bird on the ground that hopped a few times like a kangaroo before it took off.
I just searched Google News for the words “indian farmer”. This was the first article. I ask you to consider the person at the top and center of the picture, standing thigh-deep in water.
OK, I’ve singled out two quasi-arbitrary examples of specific conscious beings: the bird I saw this morning; the person in the Bloomberg news photo.
We can ask about each of them in turn: is this a typical or an atypical conscious being?
The way we answer the question will depend on a lot of things, such as which beings we think are conscious. We might decide that they are typical in some respects and atypical in others. We might even go meta and ask: is the mix of typicality and atypicality itself typical or atypical?
My point is, these are questions that can be posed and tentatively answered. Is there some reason we can’t ask the same questions about ourselves?
Consciousness is a first-person property: e.g., I know that I am conscious, but I inherently can’t know that you are. To ask whether something is conscious is to ask whether you can think from that thing’s perspective. So there is no typical or atypical conscious being: from my perspective I am “the” conscious being, and if I reason from something else’s perspective, then that thing is “the” conscious being instead.
Our usual habit of considering ourselves a typical conscious being arises because we are more used to thinking from the perspectives of things similar to us: e.g., we are more apt to think from the perspective of another person than of a cat, and from the perspective of a cat than of a chair. In other words, we tend to ascribe consciousness to things more like ourselves, rather than the other way around (that we are typical in some sense).
The part where I know I’m conscious but can’t know you are is an assertion. It is not based on reasoning or logic; it is simply because it feels so. The rest are arguments that depend on said assertion.
I thought the reply was addressed to me. Nonetheless, it’s a good opportunity to delineate and inspect my own argument, so I’m leaving the comment here.
“Well, you are who you are, because of the causes that made you; and you’re either a typical conscious being or an atypical conscious being.”

The causes that made you didn’t randomly select your soul from among all possible souls in a specific reference class. But that is the fundamental assumption of anthropic reasoning.
By definition of probability, we can consider ourselves a random member of some reference class. (Otherwise, we couldn’t make probabilistic predictions about ourselves.) The question is picking the right reference class.
Definitions are part of a map, and maps can be inapplicable to the territory.
That’s true, but the definition of probability isn’t inapplicable to everything. From that, together with the fact that we can make probabilistic predictions about ourselves, it follows that we are a random member of at least one reference class, which means that our soul has been selected at random from all possible souls in a specific reference class (if that’s what you meant).
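As a minimal sketch of that mechanism (the class names and frequencies below are invented purely for illustration): treating myself as a uniform random member of a class is exactly what licenses a probabilistic prediction about myself, and the prediction changes with the class, which is why picking the right one matters.

```python
# Sketch: self-prediction via reference classes.
# All class names and frequencies here are made up for illustration.
# If I am a uniform random member of class C, then for any property P:
#   P(I have P) = fraction of C's members that have P.

freq_of_P = {
    "all humans ever born": 0.30,
    "humans alive today":   0.55,
    "readers of this post": 0.90,
}

for cls, f in freq_of_P.items():
    # Same "me", same property; a different reference class gives a different prediction.
    print(f"P(I have P | uniform member of '{cls}') = {f:.0%}")
```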
In anthropic questions, probability predictions about ourselves (self-locating probabilities) lead to paradoxes. At the same time, they have no operational value, e.g., for decision-making. In a practical sense, we really shouldn’t make such probabilistic predictions. In this post I’m trying to explain the theoretical reason against them.
Not the Doomsday Argument, but self-locating probabilities can certainly be useful in decision-making, as Caspar Oesterheld and I argue, for example, here: http://www.cs.cmu.edu/~conitzer/FOCALAAAI23.pdf and we show it can be done consistently in various ways here: https://www.andrew.cmu.edu/user/coesterh/DeSeVsExAnte.pdf
Let’s take the AI driving problem in your paper as an example. The better strategy is the one that gives the better overall reward across all drivers. Whether the rewards of a bad driver’s two instances should count cumulatively or just once is what divides halfers and thirders. Once that is settled, the optimal decision can be calculated from the relative fractions of good/bad drivers/instances (a toy calculation below illustrates this). It doesn’t involve taking the AI’s perspective in a particular instance and deciding the best decision for that particular instance, which would require self-locating probability. The “right decision” is justified by averaging over all drivers/instances, which does not depend on the particularity of self and now.
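To make that concrete, here is a toy version in Python (the payoffs, the 50/50 good/bad split, and the action names are invented for illustration; the paper’s exact setup may differ). It scores a committed policy over all drivers under both accounting rules, and the optimal action flips depending on whether a bad driver’s two instances count once or cumulatively:

```python
# Toy version of the AI driving problem (all numbers invented).
# Good drivers face the decision once; bad drivers face two
# indistinguishable instances, so the AI commits to one action for all.

P_GOOD = 0.5  # assumed fraction of good drivers

# Per-instance reward: rewards[action] = (good driver, bad driver)
rewards = {"cautious": (1.0, 0.0), "assertive": (0.0, 0.6)}

def value(action, cumulative):
    """Expected reward of committing to `action` over all drivers.

    cumulative=True  (thirder-style): both of a bad driver's instances add up.
    cumulative=False (halfer-style):  a bad driver's reward counts once.
    """
    good, bad = rewards[action]
    bad_weight = 2 if cumulative else 1
    return P_GOOD * good + (1 - P_GOOD) * bad * bad_weight

for cumulative in (False, True):
    label = "cumulative (thirder)" if cumulative else "count once (halfer)"
    best = max(rewards, key=lambda a: value(a, cumulative))
    print(f"{label}: best action = {best}")
    # count once (halfer):  cautious  (0.50 vs 0.30)
    # cumulative (thirder): assertive (0.60 vs 0.50)
```

Note that the evaluation only ever averages over drivers/instances; nothing in it asks which instance the AI is in right now.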
Self-locating probability would be useful for decision-making if the decision were evaluated by its effect on the self, rather than by the collective effect on a reference class. But no rational strategy exists for that goal.
I found two statements in the article that I think are well-defined enough and bear directly on your argument:
1. “The birth rank discussion isn’t about if I am born slightly earlier or later.”
How do you know? I think it’s exactly about that. I have an x% probability of being born within the first x% of all humans; a quick simulation after point 2 illustrates this. (Assuming all humans are the correct reference class; if they’re not, the problem isn’t in considering ourselves a random person from a reference class, but in choosing the wrong reference class.)
2. “Nobody can be born more than a few months away from their actual birthday.”
When reasoning probabilistically, we can imagine other possible worlds. We’re not talking about something being the case and not being the case at the same time; we imagine other possible worlds (created by the same sampling process that created ours) and compare them to ours. In some of those possible worlds, we were born sooner or later.
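A quick Monte Carlo check of the claim in point 1 (a sketch; the population size and percentile below are arbitrary): if my birth rank is a uniform random draw among m humans, I land in the first x% of them with probability x%.

```python
import random

m = 100_000        # total number of humans (arbitrary)
x = 0.20           # "the first 20%"
trials = 1_000_000

# Rank is uniform over 0..m-1; count how often it falls in the first x% of ranks.
hits = sum(random.randrange(m) < x * m for _ in range(trials))
print(f"P(rank within first {x:.0%}) ≈ {hits / trials:.3f}")  # ≈ 0.200
```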
If you are born a month earlier as a preemie instead of full-term, it can quite convincingly be said that you are still the same person. But if you were born a year earlier, would you still be the same person you are now? There would obviously be substantial physical differences: a different sperm and egg, maybe a different sex. If you were among the first few human beings born, there would be few similarities between the physical person that is you in that case and the physical person you are now. So the birth rank discussion is not about whether the physical person you regard as yourself is born slightly earlier or later, but about which one, among all the people in human history, is you; i.e., from which of those persons’ perspectives do you experience the world?
The anthropic problem is not about possible worlds but about centered worlds. Different events in anthropic problems can correspond to the exact same possible world while differing in which perspective you experience it from. This circles back to point 1 and the decoupling between the first-person “I” and the particular physical person.
That’s seemingly quite a convincing reason why you can’t be born too early. But what occurs to me now is that the problem can be about where you are, temporally, in relation to other people. (So you were still born on the same day, but depending on the total size m of the civilization, the probability that fewer than n people precede you is (n/m) · 100%.)
Depending on how “anthropic problem” is defined, that could potentially be true either for all anthropic problems or only for some.