Both SSA and SIA depend on priors, so you can’t argue for them on maximum-entropy grounds. If the coin is biased, they will assign different probabilities (so SSA + a biased coin can have the same probabilities as SIA + an unbiased coin, and vice versa).
I definitely agree that SSA + belief in a biased coin can have the same probabilities as SIA + belief in an unbiased coin. (I’m just calling them beliefs to reinforce that the thing that affects the probability directly is the belief, not the coin itself.) But I think you’re also making an implied argument here; check whether I’ve got it right.
The implied argument would go something like: “because the bias of the coin is a prior, you can’t say what the probabilities will be just from the information, because you can always change the prior.”
The short answer is that the probabilities I calculated are simply for agents who “assume SSA” and “assume SIA” and have no other information.
The long answer is to explain how this interacts with priors. By the way, have you re-read the first three chapters of Jaynes recently? I have done so several times, and found it helpful.
Prior probabilities still reflect a state of information. Specifically, they reflect one’s aptly named prior information. Then you learn something new, and you update, and now your probabilities are posterior probabilities and reflect your posterior information. Agents with different priors have different states of prior information.
Perhaps there was an implied argument that there’s some problem with the fact that two states with different information (SSA + biased and SIA + unbiased) give the same probabilities for events relevant to the problem? Well, there’s no problem. If we conserve information there must be differences somewhere, but they don’t have to be in the probabilities used in decision-making.
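To make this concrete, here is a minimal sketch (my own toy rendering, assuming the standard Sleeping Beauty setup: one awakening on Heads, two on Tails) of how an agent who assumes SSA and believes the coin lands Heads with probability 1/3 ends up with the same in-problem credences as an agent who assumes SIA and believes the coin is fair:

```python
# Toy Sleeping Beauty: one awakening on Heads (Monday), two on Tails (Monday, Tuesday).
# 'p' is the agent's prior probability that the coin lands Heads.

def sia_credences(p):
    # SIA: weight each (world, awakening) pair by the prior of that world,
    # then renormalize over all awakenings.
    w = {("Heads", "Mon"): p, ("Tails", "Mon"): 1 - p, ("Tails", "Tue"): 1 - p}
    z = sum(w.values())
    return {k: v / z for k, v in w.items()}

def ssa_credences(p):
    # SSA: keep each world's prior, then split it evenly among that world's awakenings.
    return {("Heads", "Mon"): p, ("Tails", "Mon"): (1 - p) / 2, ("Tails", "Tue"): (1 - p) / 2}

print(sia_credences(1 / 2))  # SIA + unbiased coin: each pair gets ~1/3
print(ssa_credences(1 / 3))  # SSA + coin believed to land Heads 1/3 of the time: same numbers
```

Same in-problem numbers, different prior information, which is the point above.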
A few tweaks:
Predictably, I’d prefer descriptions in terms of probability theory to mechanistic descriptions of how to get the results.
I have to disagree with your conversation, however. Both SIA and SSA consider all statements of type “I exist in universe X and am the person in location Y” to be mutually exclusive and exhaustive. It’s just that SIA stratifies by location only (and then deduces the probability of a universe by combining different locations in the same universe), while SSA first stratifies by universe and then by location.
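To put toy numbers on that stratification (a minimal sketch; universe X1 has one location, X2 has two, and each universe gets prior 1/2):

```python
# Each universe maps to (prior probability, list of locations at which
# "I exist in universe X and am the person in location Y" could hold).
worlds = {"X1": (0.5, ["Y1"]), "X2": (0.5, ["Y1", "Y2"])}

def sia(worlds):
    # SIA: stratify by location directly; each (universe, location) pair gets
    # weight proportional to the universe's prior, then renormalize.
    w = {(u, loc): p for u, (p, locs) in worlds.items() for loc in locs}
    z = sum(w.values())
    return {k: v / z for k, v in w.items()}

def ssa(worlds):
    # SSA: first stratify by universe (keep its prior), then split that prior
    # evenly among the universe's locations.
    return {(u, loc): p / len(locs) for u, (p, locs) in worlds.items() for loc in locs}

# The probability of a universe is recovered by summing over its locations.
print(sia(worlds))  # ('X1','Y1'), ('X2','Y1'), ('X2','Y2') each get ~1/3
print(ssa(worlds))  # ('X1','Y1') gets 1/2; ('X2','Y1') and ('X2','Y2') get 1/4 each
```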
Whoops. Good point, I got SSA quite wrong. Hm. That’s troubling. I think I made this mistake way back in the ambitious yet confused post I mentioned, and have been lugging it around ever since.
Consider an analogous game where a coin is flipped. If heads, I get a white marble. If tails, somehow (so that this ‘somehow’ has a label, let’s call it ‘luck’) I get either a white marble or a black marble. This is SSA with different labels. How does one get the probabilities from a specification like the one I gave for SIA in the Sleeping Beauty problem?
I think it’s a causal condition, possibly because of something equivalent to “the coin flip does not affect what day it is.” And I’m bad at doing this translation.
But I need to think a lot, so I’ll get back to you later.
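In the meantime, here is the marble game under the ordinary product rule, just so the numbers are on the table (a toy sketch; if I’m reading my own analogy right, the white marble plays the role of the Monday awakening):

```python
# Marble game: flip a fair coin. Heads -> white marble.
# Tails -> 'luck' (a fair auxiliary variable) picks white or black.
p_heads = 0.5
p_white_given_heads = 1.0
p_white_given_tails = 0.5  # via 'luck'

# Product rule: P(white) = P(white|heads)P(heads) + P(white|tails)P(tails)
p_white = p_white_given_heads * p_heads + p_white_given_tails * (1 - p_heads)

# Bayes: P(heads | white) = P(white | heads) P(heads) / P(white)
p_heads_given_white = p_white_given_heads * p_heads / p_white
print(p_heads_given_white)  # 0.666..., i.e. 2/3
```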
Since the decision is the same, this means that all the powerful arguments for using probability
And I’m still not seeing what either assumption gives you, if your decision is already determined (by UDT, for instance) in a way that makes the assumption irrelevant.
Just not a fan of Cox’s theorem, eh?
Very much a fan. Anything that’s probability-like needs to be an actual probability. I’m disputing whether anthropic probabilities are meaningful at all.
And I’m still not seeing what either assumption gives you, if your decision is already determined
I’ll delay talking about the point of all of this until later.
whether anthropic probabilities are meaningful at all.
Probabilities are a function that represents what we know about events (where “events” is a technical term meaning things we don’t control, in the context of Cox’s theorem—for different formulations of probability this can take on somewhat different meanings). This is “what they mean.”
As I said to lackofcheese:
Probabilities have a foundation independent of decision theory, as encoding beliefs about events. They’re what you really do expect to see when you look outside.
This is an important note about the absent-minded driver problem and its relatives, one that can get lost if one gets too comfortable with the effectiveness of UDT. The agent’s probabilities are still accurate, and still correspond to the frequency with which they see things (truly!), but they’re no longer related to decision-making in quite the same way.
“The use” is then to predict, as accurately as ever, what you’ll see when you look outside yourself.
If you accept that the events you’re trying to predict are meaningful (e.g. “whether it’s Monday or Tuesday when you look outside”), and you know Cox’s theorem, then P(Monday) is meaningful, because it encodes your information about a meaningful event.
In the Sleeping Beauty problem, the answer still happens to be straightforward in terms of logical probabilities, but step one is definitely agreeing that this is not a meaningless statement.
(side note: If all your information is meaningless, that’s no problem—then it’s just like not knowing anything and it gets P=0.5)
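For instance, under the usual setup with a fair coin (a toy sketch, not an argument for either assumption), P(Monday) comes out to a definite number whichever state of information you bring to it:

```python
# Credences over (coin, day) pairs in Sleeping Beauty with a fair coin.
sia = {("H", "Mon"): 1 / 3, ("T", "Mon"): 1 / 3, ("T", "Tue"): 1 / 3}
ssa = {("H", "Mon"): 1 / 2, ("T", "Mon"): 1 / 4, ("T", "Tue"): 1 / 4}

def p_monday(credences):
    # P(Monday) is the total credence on Monday awakenings.
    return sum(v for (coin, day), v in credences.items() if day == "Mon")

print(p_monday(sia))  # 2/3
print(p_monday(ssa))  # 3/4
```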
Probabilities are a function that represents what we know about events
As I said to lackofcheese:
If we create 10 identical copies of me and expose 9 of them to one stimulus and 1 to another, what is my subjective anticipation of seeing one stimulus over the other? 10% is one obvious answer, but I might take a view of personal identity that fails to distinguish between identical copies of me, in which case 50% is correct. What if identical copies will be recombined later? Eliezer had a thought experiment where agents were two-dimensional, and could get glued to or separated from each other, and wondered whether this made any difference. I do too. And I’m also very confused about quantum measure, for similar reasons.
In general, the question “how many copies are there” may not be answerable in certain weird situations (or can be answered only arbitrarily).
EDIT: with copying and merging and similar, you get odd scenarios like “the probability of seeing something is x, the probability of remembering seeing it is y, the probability of remembering remembering it is z, and x, y, and z are all different.” Objectively it’s clear what’s going on, but in terms of “subjective anticipation”, it’s not clear at all.
Or put more simply: there are two identical copies of you. They will be merged soon. Do you currently have a 50% chance of dying soon?
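Toy numbers for the 9-versus-1 copying case quoted above (a sketch; which of the two counts is “right” is exactly the unclear part):

```python
# 10 copies; 9 are shown stimulus A, 1 is shown stimulus B.
copies = ["A"] * 9 + ["B"]

# Counting each copy separately: anticipate A with probability 9/10.
p_a_per_copy = copies.count("A") / len(copies)

# Refusing to distinguish identical copies: only the two distinct
# experience-classes {A, B} count, so A gets probability 1/2.
p_a_per_class = 1 / len(set(copies))

print(p_a_per_copy, p_a_per_class)  # 0.9 0.5
```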
In general, the question “how many copies are there” may not be answerable in certain weird situations (or can be answered only arbitrarily).
I agree with this. In probability terms, this is saying that P(there are 9 copies of me) is not necessarily meaningful because the event is not necessarily well defined.
My first response is (and was) that the event “the internet says it’s Monday” seems a lot better-defined than “there are 9 of me,” and should therefore still have a meaningful probability, even in anthropic situations. But an example may be necessary here.
I think you’d agree that a good example of “certain weird situations” is the divisible brain. Suppose we ran a mind on transistors and wires of macroscopic size. That is, we could make them half as big and they’d still run the same program. Then one can imagine splitting this mind down the middle into two half-sized copies. If this single amount of material counts as two people when split, does it also count as two people when it’s together?
Whether it does or doesn’t is, to some extent, mere semantics. If we set up a Sleeping Beauty problem except that there’s the same amount of total width on both sides, it then becomes semantics whether there is equal anthropic probability on both sides, or unequal. So the “anthropic probabilities are meaningless” argument is looking pretty good. And if it’s okay to define amount of personhood based on thickness, why not define it however you like and make probability pointless?
But I don’t think it’s quite as bad as all that, because of the restriction that your definition of personhood is part of how you view the world, not a free parameter. You don’t try to change your mind about the gravitational constant so that you can jump higher. So agents can have this highly arbitrary factor in what they expect to see, but still behave somewhat reasonably. (Of course, any time an agent has some arbitrary-seeming information, I’d like to ask “how do you know what you think you know?” Exploring the possibilities better in this case would be a bit of a rabbit hole, though.)
Then, if I’m pretending to be Stuart Armstrong, I note that in the aforementioned equal-total-width Sleeping Beauty problem there’s an equivalence between, e.g., agents who think that anthropic probability is proportional to total width but have the same payoffs in both worlds (“width-selfish agents”), and agents who ignore anthropic probability but weight the payoffs to the agents in each world by that world’s share of the total width (“width-average-utilitarian outside-perspective [UDT] predictors”).
Sure, these two different agents have different information/probabilities and different internal experience, but to the extent that we only care about the actions in this game, they’re the same.
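Here is a toy rendering of that equivalence (my own sketch, with made-up payoffs; “width” is the total thickness of mind-stuff on each side of the coin, equal on both sides as stipulated above):

```python
# Two worlds (coin outcomes), equal prior and equal total width.
prior = {"Heads": 0.5, "Tails": 0.5}
width = {"Heads": 2.0, "Tails": 2.0}
payoff = {  # per-person payoff of each action in each world
    "act1": {"Heads": 3.0, "Tails": 1.0},
    "act2": {"Heads": 1.0, "Tails": 2.0},
}

def width_selfish_eu(action):
    # Width-selfish agent: anthropic probability proportional to prior * width,
    # same personal payoff in either world.
    z = sum(prior[w] * width[w] for w in prior)
    return sum((prior[w] * width[w] / z) * payoff[action][w] for w in prior)

def width_weighted_udt_eu(action):
    # Outside-perspective predictor: no anthropic update (bare prior), but each
    # world's payoff is weighted by that world's total width.
    return sum(prior[w] * width[w] * payoff[action][w] for w in prior)

# The two scores differ only by a positive constant factor,
# so the two agents rank actions identically.
for action in payoff:
    print(action, width_selfish_eu(action), width_weighted_udt_eu(action))
```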
Even if an agent starts as multiple identical copies that then diverge into non-identical versions, a selfish agent will want to self-modify to be an average utilitarian between the non-identical versions. But this is a bit different from the typical usage of “average utilitarianism” in population ethics. A population-ethics average utilitarian would feed one of their copies to hungry alligators if it paid off for the other copies. But a reflectively-selfish average utilitarian would expect some chance of being the one fed to the alligators, and wouldn’t like that plan at all.
Actually, I think the cause of this departure from average utilitarianism over copies is the starting state. When you start out already defined as one of multiple copies, as in the divisible-brain case, the UDT agent that a naively selfish agent would want to self-modify into no longer looks just like an average utilitarian.
So that’s one caveat about this equivalence—that it might not apply to all problems, and to get these other problems right, the proper thing to do is to go back and derive the best strategy in terms of selfish preferences.
Which is sort of the general closing thought I have: your arguments make a lot more sense to me than they did before, but as long as you have some preferences that are indexically selfish, there will be cases where you need to do anthropic reasoning just to go from the selfish preferences to the “outside perspective” payoffs that generate the same behavior. And it doesn’t particularly matter if you have some contrived state of information that tells you you’re one person on Mondays and ten people on Tuesdays.
Man, I haven’t had a journey like this since DWFTTW. I was so sure that thing couldn’t be going downwind faster than the wind.
P.S. Just so I have this written down somewhere: the causal buzzword important for an abstract description of the marble game is “factorizable probability distribution.” I may check out a causality textbook and try to figure out the application of this with less handwaving, then write a post on it.
Hi, “factorization” is just taking a thing and expressing it as a product of simpler things. For example, a composite integer is a product of powers of primes.
In probability theory, we get a simple factorization via the chain rule of probability. If we have independence, some things drop out, but factorization is basically intellectually content-free. Of course, I also think Bayes rule is an intellectually content-free consequence of the chain rule of probability. And of course this may be hindsight bias operating...
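For what it’s worth, here is the triviality in numbers (a minimal sketch; the final check is true by construction, which is rather the point):

```python
# Build a joint distribution over (a, b) from a prior and a conditional; numbers arbitrary.
p_a = {0: 0.3, 1: 0.7}                   # P(a)
p_b_given_a = {0: {0: 0.9, 1: 0.1},      # P(b | a)
               1: {0: 0.4, 1: 0.6}}

# Chain rule: P(a, b) = P(a) * P(b | a)
p_ab = {(a, b): p_a[a] * p_b_given_a[a][b] for a in p_a for b in (0, 1)}

# Marginal: P(b) = sum over a of P(a, b)
p_b = {b: sum(p_ab[(a, b)] for a in p_a) for b in (0, 1)}

# Bayes' rule, P(a | b) = P(b | a) P(a) / P(b), is the same product rearranged.
for a in p_a:
    for b in (0, 1):
        assert abs(p_ab[(a, b)] / p_b[b] - p_b_given_a[a][b] * p_a[a] / p_b[b]) < 1e-12
print("Bayes' rule recovered from the chain-rule factorization")
```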
You are welcome to message or email me if you want to talk about it more.
That would be interesting.
You definitely don’t have a 50% chance of dying in the sense of “experiencing dying”. In the sense of “ceasing to exist” I guess you could argue for it, but I think that it’s much more reasonable to say that both past selves continue to exist as a single future self.
Regardless, this stuff may be confusing, but it’s entirely conceivable that with the correct theory of personal identity we would have a single correct answer to each of these questions.
Conceivable. But it doesn’t seem to me that such a theory is necessary, as its role seems merely to be to state probabilities that don’t influence actions.