It is possible to create a situation similar to the DA and check whether using the DA gives a correct prediction about probabilities in a situation less extreme than human extinction.
The experiment is the following. Imagine that I don’t know the length of a year in months, but I can ask just one person for their month of birth. In my case it is September, the 9th month. Assuming that the birth month is uniformly distributed over the full length of the year, I can conclude with 50 per cent confidence that the year is at most 18 months long, which is not far from 12, and is not 0.01 or 10,000,000. It is as if I am using the mediocrity principle, without any appeal to my utilitarian position, to predict an actual value.
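A small Monte Carlo sketch of this estimate (an editorial illustration, not part of the comment): it assumes the true year length is 12 months and that the one respondent’s birth month is uniform over 1..12, and checks how often the “doubled month” upper bound covers the truth.

```python
import random

# Check the "year length <= 2 * reported birth month" rule of thumb.
# Assumptions (mine, for illustration): true year length is 12 months,
# and the single reported birth month is uniform over 1..12.

TRUE_YEAR_LENGTH = 12
TRIALS = 100_000

covered = 0
for _ in range(TRIALS):
    month = random.randint(1, TRUE_YEAR_LENGTH)  # the one reported birth month
    upper_bound = 2 * month                      # DA-style 50% upper bound
    if TRUE_YEAR_LENGTH <= upper_bound:
        covered += 1

print(f"2*month covers the true year length in {covered / TRIALS:.1%} of trials")
# Prints roughly 58%: a bit above one half, because months are discrete.
```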
Yes. When all the priors are fixed, and what you’re updating on is known to you, then the style of reasoning behind the DA works (see some of Nick Bostrom’s examples).
However, in ADT, I reject the very notion that anthropic probabilities make sense. Questions like “which of these almost identical agents is actually you” do not make sense in terms of probability. Despite that, ADT (simplified anthropic CDT) can still let you make good decisions.
Since we are in the real world, there is a possibility that a copy of me exists, e.g. as a Boltzmann brain, or in a copy of the simulation I’m in.
Does your refusal to assign probabilities to these situations infect everyday life? Doesn’t betting on a coin flip require conditioning on whether I’m a Boltzmann brain, or am in a simulation that replaces coins with potatoes if I flip them? You seem to be giving up on probabilities altogether.
Suppose that there is a 50% chance of there being a Boltzmann brain copy of you—that’s fine, that is a respectable probability. What ADT ignores are questions like “am I the Boltzmann brain or the real me on Earth?” The answer to that is “yes. You are both. And you currently control the actions of both. It is not meaningful to ask ‘which’ one you are.”
Give me a preference and a decision, and that I can answer, though. So the answer to “what is the probability of being which one” is “what do you need to know this for?”
I agree with this: “yes. You are both. And you currently control the actions of both. It is not meaningful to ask ‘which’ one you are.”
But I have the following problem: what if the best course of action for me depends on whether I am a Boltzmann brain or a real person? It looks like I still have to update according to which group is larger: real me or Boltzmann brain me.
It also looks like we use “all decision computation processes like mine” as something like what I earlier called a “natural reference class”. And in the case of the DA, it is all beings who think about the DA.
I’ll deal with the non-selfish case, which is much easier.
In that case, Earth you and Boltzmann brain you have the same objectives. And most of the time, these objectives make “Boltzmann brain you” irrelevant, as their actions have no consequences (one exception could be “ensure everyone has a life that is on average happy”, in which case Earth you should try to always be happy, for the sake of the Boltzmann brain yous). So most of the time, you can just ignore Boltzmann brains in ADT.
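A toy numerical sketch of that point (an editorial illustration with made-up payoffs, not part of the comment): when the Boltzmann copy’s action has no effect on the shared utility, the ranking of the Earth copy’s actions is unchanged by whether the copy exists, so the copy can be ignored for the decision.

```python
# Non-selfish ADT calculation with hypothetical payoffs: the Earth copy's
# action changes the world, the Boltzmann copy's action has no consequences.

P_COPY = 0.5  # hypothetical non-anthropic chance that a Boltzmann copy exists

EARTH_PAYOFF = {"act": 10.0, "wait": 0.0}  # made-up utilities of the Earth copy's choice
COPY_PAYOFF = 0.0                          # the copy's action changes nothing

def expected_utility(action: str) -> float:
    without_copy = EARTH_PAYOFF[action]
    with_copy = EARTH_PAYOFF[action] + COPY_PAYOFF
    return (1 - P_COPY) * without_copy + P_COPY * with_copy

for action in ("act", "wait"):
    print(action, expected_utility(action))
# The ordering of "act" vs "wait" does not depend on P_COPY, so the
# Boltzmann copy drops out of this decision entirely.
```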
Yes, that is a natural reference class in ADT (note that it’s a reference class of agent-moments making decisions, not of agents in general; it’s possible that someone else is in your reference class for one decision, but not for another).
But “all beings who think about DA” is not a natural reference class, as you can see when you start questioning it (“To what extent do they think about the DA? Under what name? Does it matter what conclusions they draw?...”).
That’s not quite what I was talking about, but I managed to resolve my question to my own satisfaction anyhow. The problem of conditionalization can be worked around fairly easily.
Actually, the probability that you should assign to there being a copy of you is not defined under your system—otherwise you’d be able to conceive of a solution to the Sleeping Beauty problem. The entire schtick is that Sleeping Beauty is not merely ignorant about whether another copy of her exists, but that it is supposedly a bad question.
Hm, okay, I think this might cause trouble in a different way than I was originally thinking of. Because all sorts of things are possibilities, and it’s not obvious to me how ADT is able to treat reasonable anthropic possibilities differently from astronomically unlikely ones, if it throws out any measure of unlikeliness. You might try to resolve this by putting in some “outside perspective” probabilities, e.g. that an outside observer in our universe would see me as normal most of the time and as a Boltzmann brain less of the time, but this requires making drastic assumptions about what the “outside observer” is actually outside, observing. If I really were a Boltzmann brain in a thermal universe, an outside observer would think I was more likely to be a Boltzmann brain. So postulating an outside perspective is just an awkward way of sneaking in probabilities gained in a different way.
This seems to leave the option of really treating all apparent possibilities similarly. But then the benefit of good actions in the real world gets drowned out by all the noise from all the unlikely possibilities—after all, for every action, one can construct a possibility where it’s both good and bad. If there’s no way to break ties between possibilities, no ties get broken.
Non-anthropic (“outside observer”) probabilities are well defined in the Sleeping Beauty problem—the probability of heads/tails is exactly 1⁄2 (most of the time, you can think of these as the SSA probabilities over universes—the only difference being in universes where you don’t exist at all). You can use a universal prior or whatever you prefer; the “outside observer” doesn’t need to observe anything or be present in any way.
I note that you need these initial probabilities in order for SSA or SIA to make any sense at all (pre-updating on your existence), so I have no qualms claiming them for ADT as well.
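A concrete sketch of how that fixed 1⁄2 combines with a utility function in ADT (an editorial illustration using a standard toy bet, not taken from this comment): at each awakening Beauty may accept “win $1 if tails, lose $x if heads”, and her single decision controls every awakening.

```python
# Sleeping Beauty in ADT style: the only probability used is the
# non-anthropic P(heads) = 1/2; the choice of utility aggregation does the rest.

P_HEADS = 0.5

def expected_value(x, aggregate):
    heads_world = aggregate([-x])        # one awakening if heads: lose x
    tails_world = aggregate([1.0, 1.0])  # two awakenings if tails: win 1 at each
    return P_HEADS * heads_world + (1 - P_HEADS) * tails_world

def total(winnings):      # add winnings across awakenings ("total utilitarian")
    return sum(winnings)

def average(winnings):    # average winnings within a world ("average utilitarian")
    return sum(winnings) / len(winnings)

for x in (0.9, 1.5, 2.5):
    print(x, round(expected_value(x, total), 3), round(expected_value(x, average), 3))
# Totals: the bet is worth taking iff x < 2 (thirder-style betting).
# Averages: worth taking iff x < 1 (halfer-style betting).
# The same outside-observer 1/2 enters both calculations.
```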
And what if the universe is probably different for the two possible copies of you, as in the case of the Boltzmann brain? Presumably you have to take some weighted average of the “non-anthropic probabilities” produced by the two different universes.
Re: note. This use of SSA and SIA can also be wrong. If there is a correct method for assigning subjective probabilities to what S.B. will see when she looks outside, it should not be an additional thing on top of predicting the world; it should be a natural part of the process by which S.B. predicts the world.
EDIT: Okay, getting a better understanding of what you mean now. So you’d probably just say that the weight on the different universes should be exactly this non-anthropic probability, assigned by some universal prior or however one assigns probability to universes. My problem with this is that when assigning probabilities in a principled, subjective way—i.e. trying to figure out what your information about the world really implies, rather than starting by assuming some model of the world, there is not necessarily an easily-identifiable thing that is the non-anthropic probability of a boltzmann brain copy of me existing, and this needs to be cleared up in a way that isn’t just about assuming a model of the world. If anthropic reasoning is, as I said above, not some add-on to the process of assigning probabilities, but a part of it, then it makes less sense to think something like “just assign probabilities, but don’t do that last anthropic step.”
But I suspect this problem actually can be resolved. Maybe by interpreting the non-anthropic number as something like the probability that the universe is a certain way (i.e. assuming some sort of physicalist prior), conditional only on there being at least one copy of me, and then assuming that this resolves all anthropic problems?
That answer might be fine for copies, but not for situations where copies are involved in no way, like the Doomsday Argument. It is nonsense to say that you are both early and late in the series of human beings.
Copies are involved in the DA. To use anthropics, you have to “update on your position in your reference class” (or some similar construction). At that very moment, just before you update, you can be any person at all—if not, you can’t update. You can be anyone equally.
(Of course, nobody really “updates” that way, because people first realise who they are, and only long after that learn about the DA. But if SSA people are allowed to “update” like that, I’m allowed to look at the hypothetical before such an update.)
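For readers who want the mechanics of the update being referred to, here is the standard SSA-style calculation in miniature (an editorial illustration with round, hypothetical numbers, not part of this comment): treat your birth rank as uniform over everyone who will ever live, then apply Bayes over hypotheses about that total.

```python
# Doomsday-style update: the likelihood of your particular birth rank,
# given N total humans ever, is taken to be 1/N (for ranks up to N).

hypotheses = {"doom_soon": 2e11, "doom_late": 2e14}  # total humans ever (hypothetical figures)
prior = {"doom_soon": 0.5, "doom_late": 0.5}
my_rank = 1e11  # roughly the number of humans born so far

unnormalised = {h: prior[h] / n for h, n in hypotheses.items() if my_rank <= n}
norm = sum(unnormalised.values())
posterior = {h: p / norm for h, p in unnormalised.items()}
print(posterior)  # about 0.999 for doom_soon and 0.001 for doom_late
```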
“I’m allowed to look at the hypothetical before such an update”
In the Doomsday Argument as I understand it, you are allowed to do that. Nothing about our present knowledge gives any strong suggestion that the human race will last millions of years.
As I said about using many lists, it is obvious that Doomsday style reasoning will in general be roughly correct. Arguments to the contrary are just wishful thinking.
DA-style reasoning in non-anthropic situations is fine. I reject the notion that anthropic probabilities are meaningful. The fact that SIA doesn’t have a DA, and is in most ways a better probability theory than SSA, is enough to indicate (ha!) that something odd is going on.
We’ve had this discussion before. I see no reason to think anthropic probabilities are meaningless, and I see every reason to think DA-style reasoning will generally work in anthropic situations just as well as in other situations.
SIA has its own DA via the Fermi paradox, as Katja Grace showed: https://meteuphoric.wordpress.com/2010/03/23/sia-doomsday-the-filter-is-ahead/
I also don’t see that you actually reject the probabilities, as I still have to behave as if they were true. (However, I understand the similar logic in the voting example: I have to go and vote for my candidate, and should ignore the consideration that my personal vote is very unlikely to change the result of the election.)
Something like this example may help: I don’t believe that the world will end soon, but I have to invest more in x-risk prevention after I have learned about the DA (given that I am an average utilitarian). I think some more concrete example would be useful for understanding here.
I looked at the SIA DA in my previous post on the DA, and I feel I got that one right:
http://lesswrong.com/lw/mqg/doomsday_argument_for_anthropic_decision_theory/