Updating, part 1: When can you change your mind? The binary model
I was recently disturbed by my perception that, despite years of studying and debating probability problems, the LessWrong community as a whole has not markedly improved its ability to get the right answer on them.
I had expected that people would read posts and comments by other people, and take special note of comments by people who had a prior history of being right, and thereby improve their own accuracy.
But can that possibly work? How can someone who isn’t already highly accurate identify other people who are highly accurate?
Aumann’s agreement theorem (allegedly) says that Bayesians with the same priors agree. But it doesn’t say that doing so helps. Under what circumstances does revising your opinions, by updating in response to people you consider reliable, actually improve your accuracy?
To find out, I built a model of updating in response to the opinions of others. It did, eventually, show that Bayesians improve their collective opinions by updating in response to the opinions of other Bayesians. But this turns out not to depend on them satisfying the conditions of Aumann’s theorem, or on doing Bayesian updating. It depends only on a very simple condition, established at the start of the simulation. Can you guess what it is?
I’ll write another post describing and explaining the results if this post receives a karma score over 10.
That’s getting a bit ahead of ourselves, though. This post models only non-Bayesians, and the results are very different.
Here’s the model:
There are G people in a group such as LessWrong.
There are N problems being discussed simultaneously.
Problems are binary problems, with an answer of either 1 or 0.
Each person’s opinion on each problem is always known to all people.
Each person i has an accuracy: their probability p_i of getting any arbitrary problem correct on the first guess.
g_ivt is what person i believes at time t is the answer to problem v (1 or 0).
p_ij expresses person i’s estimate of the probability that an arbitrary belief of person j is correct.
Without loss of generality, assume the correct answer to every problem is 1.
Algorithm:
# Loop over T timesteps
For t = 0 to T-1 {
# Loop over G people
For i = 0 to G-1 {
# Loop over N problems
For v = 0 to N-1 {
If (t == 0)
# Special initialization for the first timestep
If (random in [0..1] < p_i) g_ivt := 1; Else g_ivt := 0
Else {
# Product over all j of the probability that the answer to v is 1 given j’s answer and estimated accuracy
m1 := ∏_j [ p_ij·g_jv(t-1) + (1-p_ij)·(1-g_jv(t-1)) ]
# Product over all j of the probability that the answer to v is 0 given j’s answer and estimated accuracy
m0 := ∏_j [ p_ij·(1-g_jv(t-1)) + (1-p_ij)·g_jv(t-1) ]
p1 := m1 / (m0 + m1) # Normalize
If (p1 > .5) g_ivt := 1; Else g_ivt := 0
}
}
# Loop over all G people j (including i itself)
For j = 0 to G-1
# Compute person i’s estimate of person j’s accuracy
p_ij := { Σ_{s in [0..t]} Σ_{v in [s..N]} [ g_ivt·g_jvs + (1-g_ivt)·(1-g_jvs) ] } / N
}
}
p1 is the probability that agent i assigns to problem v having the answer 1. Each term p_ij·g_jv(t-1) + (1-p_ij)·(1-g_jv(t-1)) is the probability of problem v having answer 1 computed using agent j’s beliefs, by adding either the probability that j is correct (if j believes it has answer 1), or the probability that j is wrong (if j believes it has answer 0). Agent i assumes that everyone’s opinions are independent, and multiplies all these probabilities together. The result, m1, is very small when there are very many agents (m1 is on the order of .5^G), so it is normalized by computing a similar product m0 for the probability that v has answer 0, and setting p1 = m1 / (m0 + m1).
The sum of sums used to compute p_ij (i’s opinion of j’s accuracy) computes the fraction of problems, summed over all previous time periods, on which person j has agreed with person i’s current opinions. It sums over previous time periods because otherwise, p_ii = 1. By summing over previous times, if person i ever changes its mind, that will decrease p_ii. (The inner sum starts from s instead of 0 to accommodate an addition to the model that I’ll make later, in which the true answer to problem t is revealed at the end of time t. Problems whose answer is public knowledge should not be considered in the sum after the time they became public knowledge.)
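For concreteness, here is the update step as a minimal Python sketch (not the Perl script I actually ran, which is mentioned below; the function and variable names are just illustrative):

def aggregate_belief(est_acc_row, prev_beliefs):
    # One agent's new 0/1 opinion on one problem, given everyone's previous
    # opinions (prev_beliefs[j] = g_jv(t-1)) and the agent's accuracy
    # estimates (est_acc_row[j] = p_ij).
    m1 = m0 = 1.0
    for p_ij, g in zip(est_acc_row, prev_beliefs):
        m1 *= p_ij * g + (1 - p_ij) * (1 - g)    # probability the answer is 1, per agent j
        m0 *= p_ij * (1 - g) + (1 - p_ij) * g    # probability the answer is 0, per agent j
    p1 = m1 / (m0 + m1)                          # normalize
    return 1 if p1 > 0.5 else 0

def estimate_accuracy(my_current, their_history):
    # p_ij: fraction of person j's past opinions (their_history[s][v] = g_jvs,
    # inner index starting at v = s) that agree with person i's current
    # opinions (my_current[v] = g_ivt). Assumes N > t so the sum is nonempty.
    agree = total = 0
    for s, row in enumerate(their_history):
        for v in range(s, len(my_current)):
            agree += my_current[v] * row[v] + (1 - my_current[v]) * (1 - row[v])
            total += 1
    # The pseudocode divides by N; dividing by the number of terms actually
    # summed keeps p_ij in [0, 1] (see a comment below on this point).
    return agree / total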
Now, what distribution should we use for the p_i?
There is an infinite supply of problems. Many are so simple that everyone gets them right; many are so hard or incomprehensible that everyone performs randomly on them; and there are many, such as the Monty Hall problem, that most people get wrong because of systematic bias in our thinking. The population’s average performance p_ave over all possible problems thus falls somewhere within [0 .. 1].
I chose to model person accuracy instead of problem difficulty. I say “instead of”, because you can use either person accuracy or problem difficulty to set pave. Since a critical part of what we’re modeling is person i’s estimate of person j’s accuracy, person j should actually have an accuracy. I didn’t model problem difficulty partly because I assume we only talk about problems of a particular level of difficulty; partly because a person in this model can’t distinguish between “Most people disagree with me on this problem; therefore it is difficult” and “Most people disagree with me on this problem; therefore I was wrong about this problem”.
Because I assume we talk mainly about high-entropy problems, I set p_ave = .5. I do this by drawing p_i from a normal distribution with a mean of .5, truncated at .05 and .95. (I used a standard deviation of .15; this isn’t important.)
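For example, a simple rejection-sampling sketch of that draw (again just illustrative Python, not the script itself):

import random

def draw_accuracy(mean=0.5, sd=0.15, lo=0.05, hi=0.95):
    # Draw p_i from a normal(mean, sd) truncated to [lo, hi], by rejection.
    while True:
        x = random.gauss(mean, sd)
        if lo <= x <= hi:
            return x

accuracies = [draw_accuracy() for _ in range(100)]  # one p_i per person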
Because this distribution of p_i is symmetric around .5, there is no way to know whether you’re living in the world where the right answer is always 1, or where the right answer is always 0. This means there’s no way, under this model, for a person to know whether they’re a crackpot (usually wrong) or a genius (usually right).
Note that these agents don’t satisfy the preconditions for Aumann agreement, because they produce 0/1 decisions instead of probabilities, and because some agents are biased to perform worse than random. It’s worth studying non-Bayesian agents before moving on to a model satisfying the preconditions for the theorem, if only because there are so many of them in the real world.
An important property of this model is that, if person i is highly accurate, and knows it, p_ii will approach 1, greatly reducing the chance that person i will change their mind about any problem. Thus, the more accurate a person becomes, the less able they are to change their minds when they are wrong—and this is not an error. It’s a natural limit on the speed at which one can converge on truth.
An obvious problem is that at t=0, person i will see that it always agrees with itself, and set p_ii = 1. By induction, no one will ever change their mind. (I consider this evidence for the model, rather than against it.)
The question of how people ever change their mind is key to this whole study. I use one of these two additions to the model to let people change their mind:
At the end of each timestep t, the answer to problem number t becomes mutual knowledge to the entire group. (This solves the crackpot/genius problem.)
Each person has a maximum allowable p_ij (including p_ii).
This model is difficult to solve analytically, so I wrote a Perl script to simulate it.
What do you think will happen when I run the program, or its variants?
What other variants would you like to see tested?
Is there a fundamental problem with the model?
What matters isn’t so much finding the right answer, as having the right approach.
At least as far as I’m concerned, that’s the main reason to spend much time here. I don’t care whether the answer to Sleeping Beauty is 1⁄2 or 1⁄3, that’s a mere curio.
I care about the general process whereby you can take a vague verbal description like that, map it into a formal expression that preserves the properties that matter, and use that form to check my intuitions. That’s of rather more value, since I might learn how my intuitions could mislead me in situations where that matters.
The purpose of this post is to ask how you intend that to improve your accuracy. You plan to check your calculations against your intuitions. But the disagreements we have on Sleeping Beauty are disagreements of intuitions, that cause people to perform different calculations. There’s no way comparing your intuitions to your calculations can make progress in that situation.
Well, I plan to improve my accuracy by learning to perform, not just any old calculation that happens to be suggested by my intuitions, but the calculations which reflect the structure of the situation. Some of our intuitions about what calculations are appropriate could well be wrong.
The calculations are secondary; as I sometimes tell my kids, the nice thing about math is that you’re guaranteed to get a correct answer by performing the operations mechanically, as long as you’ve posed the question properly to start with. How to pose the question properly in the language of math is what I’d like to learn more of.
Someone may have gotten the right answer to Sleeping Beauty by following a flawed argument: I want to be able to check their calculations, and be able to find the answer myself in similar but different problems.
Is there any real-group analog to the answer to problem t becoming mutual knowledge to the entire group? I can’t think of a single disagreement here EVER to which the answer has been revealed. Further, I don’t expect much revelation until Omega actually shows up.
Drawing Two Aces might count.
A bunch of people got the wrong answer, and it was presumed to be against your naive intuitions if you don’t know how to do the math. But any doubters understood the right answer once it was pointed out.
Thanks for recollecting that. That was a case where someone wrote a program to compute the answer, which could be taken as definitive.
I just counted up the first answers people gave, and their initial answers were 29 to 3 in favor of the correct answer. So there wasn’t much disagreement to begin with.
I don’t think that qualified. There was no revelation, just an agreement on process and on result. That was not a question analogous to PhilGoetz’s model, where some agents had more accurate estimates, and you use the result to determine how accurate they might be on other topics.
I can’t think of a single disagreement here to which the answer has been revealed, either. But—spoiler alert—having the answers to numerous problems revealed to at least some of the agents is the only factor I’ve found that can get the simulated agents to improve their beliefs.
It’s difficult to apply the simulation results to people, who can, in theory, be convinced of something by following a logical argument. The reasons why I think we can model that with a simple per-person accuracy level might need a post of their own.
Oops—that statement was based on a bug in my program.
The usual situation does involve agents changing their answers as time passes differentially towards “true”—your model is extremely simplified, but [edit: may be] accurate enough for the purpose.
The Sleeping Beauty problem and the other “paradoxes” of probability are problems that have been selected (in the evolutionary sense) because they contain psychological features that cause people’s reasoning to go wrong. People come up with puzzles and problems all the time, but the ones that gain prominence and endure are the ones that are discussed over and over again without resolution: Sleeping Beauty, Newcomb’s Box, the two-envelope problem.
So I think there’s something valuable to be learned from the fact that these problems are hard. Here are my own guesses about what makes the Sleeping Beauty problem so hard.
First, there’s ambiguity in the problem statement. It usually asks about your “credence”. What’s that? Well, if you’re a Bayesian reasoner, then “credence” probably means something like “subjective probability (of a hypothesis H given data D), defined by p(H|D) = p(D|H) p(H) / p(D)”. But some other reasoners take “credence” to mean something like “expected proportion of observations consistent with data D in which the hypothesis H was confirmed”.
In most problems these definitions give the same answer, so there’s normally no need to worry about the exact definition. But the Sleeping Beauty problem pushes a wedge between them: the Bayesians should answer ½ and the others ⅓. This can lead to endless argument between the factions if the underlying difference in definitions goes unnoticed.
Second, there’s a psychological feature that makes some Bayesian reasoners doubt their own calculation. (You can try saying “shut up and calculate” to these baffled reasoners but while that might get them the right answer, it won’t help them resolve their bafflement.) The problem somehow persuades some people to imagine themselves as an instance of Sleeping Beauty selected uniformly from the three instances {(heads,Monday), (tails,Monday), (tails,Tuesday)}. This appears to be a natural assumption that some reasoners are prepared to make, even though there’s no justification for it in the problem description.
Maybe it’s the principle of indifference gone wrong: the three instances are indistinguishable (to you) but that doesn’t mean the one you are experiencing was drawn from a uniform distribution.
Most of what you said here has already been said, and rebutted, in the comments on the Sleeping Beauties post, and in the followup post by Jonathan Lee. It would be polite, and helpful, to address those rebuttals. Simply restating arguments, without acknowledging counterarguments, could be a big part of why we don’t seem to be getting anywhere.
I did check both threads, and as far as I could see, nobody was making exactly this point. I’m sorry that I missed the comment in question: the threads were very long. If you can point me at it, and the rebuttal, then I can try to address it (or admit I’m wrong).
(Even if I’m wrong about why the problem is hard, I think the rest of my comment stands: it’s a problem that’s been selected for discussion because it’s hard, so it might be productive to try to understand why it’s hard. Just as it helps to understand our biases, it helps to understand our errors.)
Bayesians should not answer ½. Nobody should answer ½: that’s the wrong answer.
If your interpretation of the word “credence” leads you to answer ½, you are fighting with the rest of the community over the definition of the concept of subjective probability.
How is this a constructive comment? You’re just stating your position again. We all already know your position. I can just as easily say:
If the entire scientific establishment is using subjective probability in a different way, by all means, show us! But don’t keep asserting it like it has been established. That isn’t productive.
The point of the comment was to express disapproval of the idea that scientists had multiple different conceptions of subjective probability—and that the Bayesian approach gave a different answer to other ones—and to highlight exactly where I differed from garethrees—mostly for his benefit.
There is at least a minority that believes the term “subjective probability” isn’t meaningful.
I only scanned that—and I don’t immediately see the relationship to your comment—but it seems as though it would be a large digression of dubious relevance.
Or whether or not it is meaningful, it is certainly fraught with all the associated confusion of personal identity, the arrow of time and information. I don’t think anyone can claim to understand it well enough to assert that those of us who see the Sleeping Beauty problem entailing a different payoff scheme are obviously and demonstrably wrong. We know how to answer related decision problems but no one here has established the right or the best way to assign the payoff scheme to credence. And people seem too frustrated by the fact that anyone could disagree with them to actually consider the pros and cons of using other payoff schemes.
Does “the community” mean some scientific community outside of LessWrong? Because LW seems split on the issue.
Well, yes, sure. “That’s just peanuts to space”.
That’s interesting. But then you have to either abandon Bayes’ Law, or else adopt very bizarre interpretations of p(D|H), p(H) and p(D) in order to make it come out. Both of these seem like very heavy prices to pay. I’d rather admit that my intuition was wrong.
Is the motivating intuition behind your comment the idea that your subjective probability should be the same as the odds you’d take in a (fair) bet?
Subjective probabilities are traditionally analyzed in terms of betting behavior. Bets that are used for elucidating subjective probabilities are constructed using “scoring rules”. It’s a standard way of revealing such probabilities.
I am not sure what you mean by “abandoning Bayes’ Law”, or using “bizarre” interpretations of probability. In this case, the relevant data includes the design of the experiment—and that is not trivial to update on, so there is scope for making mistakes. Before questioning the integrity of your tools, is it possible that a mistake was made during their application?
Bayes’ Law says, p(H|D) = p(D|H) p(H) / p(D) where H is the hypothesis of interest and D is the observed data. In the Sleeping Beauty problem H is “the coin lands heads” and D is “Sleeping Beauty is awake”. p(H) = ½, and p(D|H) = p(D) = 1. So if your intuition tells you that p(H|D) = ⅓, then you have to either abandon Bayes’ Law, or else change one or more of the values of p(D|H), p(H) and p(D) in order to make it come out.
(We can come back to the intuition about bets once we’ve dealt with this point.)
Hold on—p(D|H) and P(D) are not point values but probability distributions, since there is yet another variable, namely what day it is.
The other variable has already been marginalized out.
So long as it is not Saturday. And the idea that p(H) = ½ comes from Saturday.
But marginalizing over the day doesn’t work out to P(D)=1 since on some days Beauty is left asleep, depending on how the coin comes up.
Here is (for a three-day variant) the full joint probability distribution, showing values which are in accordance with Bayes’ Law but where P(D) and P(D|H) are not the above. We can’t “change the values” willy-nilly, they fall out of formalizing the problem.
Frustratingly, I can’t seem to get people to take much interest in that table, even though it seems to solve the freaking problem. It’s possible that I’ve made a mistake somewhere, in which case I’d love to see it pointed out.
I was just talking about the notation “p(D|H)” (and “p(D)”), given that D has been defined as the observed data. Then any extra variables have to have been marginalized out, or the expression would be p(D, day | H). I didn’t mean to assert anything about the correctness of the particular number ascribed to p(D|H).
I did look at the table, but I missed the other sheets, so I didn’t understand what you were arguing.
It seems to say that p(heads|woken) = 0.25. A whole new answer :-(
That’s in the three-day variant; it also has a sheet with the original.
It has three sheets. The respective conclusions are: p(heads|woken) = 0.25, p(heads|woken) = 0.33 and p(heads|woken) = 0.50. One wonders what you are trying to say.
That 1⁄3 is correct in the original, that 1⁄2 comes from allocating zero probability mass to “not woken up”, and the three-day version shows why that is wrong.
I don’t see how that analysis is useful. Beauty is awake at the start and the end of the experiment, and she updates accordingly, depending on whether she believes she is “inside” the experiment or not. So, having D mean: “Sleeping Beauty is awake” does not seem very useful. Beauty’s “data” should also include her knowledge of the experimental setup, her knowledge of the identity of the subject, and whether she is facing an interviewer with amnesia. These things vary over time—and so they can’t usefully be treated as a single probability.
You should be careful when plugging values into Bayes’ theorem in an attempt to solve this problem. It contains an amnesia-inducing drug. When Beauty updates, you had better make sure to un-update her again afterwards in the correct manner.
D is the observation that Sleeping Beauty makes in the problem, something like “I’m awake, it’s during the experiment, I don’t know what day it is, and I can’t remember being awoken before”. p(D) is the prior probability of making this observation during the experiment. p(D|H) is the likelihood of making this observation if the coin lands heads.
As I said, if your intuition tells you that p(H|D) = ⅓, then something else has to change to make the calculation work. Either you abandon or modify Bayes’ Law (in this case, at least) or you need to disagree with me on one or more of p(D), p(D|H), and p(H).
As I said, be careful about using Bayes’ theorem in the case where the agent’s mind is being meddled with by amnesia-inducing drugs. If Beauty had not had her mind addled by drugs, your formula would work—and p(H|D) would be equal to 1⁄2 on her first awakening. As it is, Beauty has lost some information that pertains to the answer she gives to the problem—namely the knowledge of whether she has been woken up before already—or not. Her uncertainty about this matter is the cause of the problem with plugging numbers into Bayes’ theorem.
The theorem models her update on new information—but does not model the drug-induced deletion from her mind of information that pertains to the answer she gives to the problem.
If she knew it was Monday, p(H|D) would be about 1⁄2. If she knew it was Tuesday, p(H|D) would be about 0. Since she is uncertain, the value lies between these extremes.
Is over-reliance on Bayes’ theorem—without considering its failure to model the problem’s drug-induced amnesia—a cause of people thinking the answer to the problem is 1⁄2, I wonder?
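To make the arithmetic behind that last step explicit (this is my gloss, not a neutral derivation), decompose over the day:
p(H|D) = p(H|D, Monday)·p(Monday|D) + p(H|D, Tuesday)·p(Tuesday|D)
Filling in the per-awakening weights a thirder assumes (two of the three possible awakenings fall on Monday) gives (1⁄2)(2⁄3) + (0)(1⁄3) = 1⁄3. A halfer fills in different numbers for both factors, (2⁄3)(3⁄4) + (0)(1⁄4) = 1⁄2, so the decomposition by itself settles nothing; the disagreement is entirely about the weights.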
If I understand rightly, you’re happy with my values for p(H), p(D) and p(D|H), but you’re not happy with the result. So you’re claiming that a Bayesian reasoner has to abandon Bayes’ Law in order to get the right answer to this problem. (Which is what I pointed out above.)
Is your argument the same as the one made by Bradley Monton? In his paper Sleeping Beauty and the forgetful Bayesian, Monton argues convincingly that a Bayesian reasoner needs to update upon forgetting, but he doesn’t give a rule explaining how to do it.
Naively, I can imagine doing this by putting the reasoner back in the situation before they learned the information they forgot, and then updating forwards again, but omitting the forgotten information. (Monton gives an example on pp. 51–52 where this works.) But I can’t see how to make this work in the Sleeping Beauty case: how do I put Sleeping Beauty back in the state before she learned what day it is?
So I think the onus remains with you to explain the rules for Bayesian forgetting, and how they lead to the answer ⅓ in this case. (If you can do this convincingly, then we can explain the hardness of the Sleeping Beauty problem by pointing out how little-known the rules for Bayesian forgetting are.)
Well, there is not anything wrong with Bayes’ Law. It doesn’t model forgetting—but it doesn’t pretend to. I would not say you have to “abandon” Bayes’ Law to solve the problem. It is just that the problem includes a process (namely: forgetting) that Bayes’ Law makes no attempt to model in the first place. Bayes’ Law works just fine for elements of the problem involving updating based on evidence. What you have to do is not abuse Bayes’ Law—by using it in circumstances for which it was never intended and is not appropriate.
Your opinion that I am under some kind of obligation to provide a lecture on the little-known topic of Bayesian forgetting has been duly noted. Fortunately, people don’t need to know or understand the Bayesian rules of forgetting in order to successfully solve this problem—but it would certainly help if they avoid applying the Bayes update rule while completely ignoring the whole issue of the effect of drug-induced amnesia—much as Bradley Monton explains.
You’re not obliged to give a lecture. A reference would be ideal.
Appealing to “forgetting” only gives an argument that our reasoning methods are incomplete: it doesn’t argue against ½ or in favour of ⅓. We need to see the rules and the calculation to decide if it settles the matter.
To reiterate, people do not need to know or understand the Bayesian rules of forgetting in order to successfully solve this problem. Nobody used this approach to solving the problem—as far as I am aware—but the vast majority obtained the correct answer nonetheless. Correct reasoning is given on http://en.wikipedia.org/wiki/Sleeping_Beauty_problem—and in dozens of prior comments on the subject.
The Wikipedia page explains how a frequentist can get the answer ⅓, but it doesn’t explain how a Bayesian can get that answer. That’s what’s missing.
I’m still hoping for a reference for “the Bayesian rules of forgetting”. If these rules exist, then we can check to see if they give the answer ⅓ in the Sleeping Beauty case. That would go a long way to convincing a naive Bayesian.
I do not think it is missing—since a Bayesian can ask themselves at what odds they would accept a bet on the coin coming up heads—just as easily as any other agent can.
What is missing is an account involving Bayesian forgetting. It’s missing because that is a way of solving the problem which makes little practical sense.
Now, it might be an interesting exercise to explore the rules of Bayesian forgetting—but I don’t think it can be claimed that that is needed to solve this problem—even from a Bayesian perspective. Bayesians have more tools available to them than just Bayes’ Law.
FWIW, Bayesian forgetting looks somewhat manageable. Bayes’ Law is a reversible calculation—so you can just un-apply it.
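Concretely, as a sketch of what “un-applying” would mean (not a worked-out theory of Bayesian forgetting): if the update was p(H|D) = p(D|H) p(H) / p(D), then dividing back out recovers the prior,
p(H) = p(H|D) p(D) / p(D|H)
so an agent that still remembers p(D) and p(D|H) can reverse a single update exactly.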
Okay—WRT “credence”, you have a good point; it’s a vague word. But, p(H|D) and “expected proportion of observations consistent with data D in which the hypothesis H was confirmed” give the same results. (Frequentists are allowed to use the p(H|D) notation, too.) There isn’t a difference between Bayesians and other reasoners; there’s a difference between what evidence one believes is being conditioned on. You’re correct that your actual claim isn’t addressed by comments in those posts; but your claim depends on beliefs that are argued for and against in the comments.
That’s the correct interpretation, where “correct” means “what the original author intended”. Under the alternate interpretation, you will find yourself wondering why the author wrote all this stuff about Sleeping Beauty falling asleep, and forgetting what happened before, because it has no effect on the answer. This proves that the author didn’t have that interpretation.
The clearest explanation yet posted is actually included in the beginning of the Sleeping Beauty post.
Agreed.
I’d be interested in your opinion on this where I’ve formalized the SB problem as a joint probability distribution, with as precise a mathematical justification as I could muster as described here.
It seems that SB even generates confusion as to where the ambiguity comes from in the first place. :)
I believe I’ve proven that the thirders are objectively right (and everyone else wrong).
I would like you to publish any results you may generate with your script, and promise to upvote them even if the results do not prove anything, as long as they are presented roughly as clearly as this post is.
So… why does this post have such a low rating? Comments? I find it bewildering. If you’re interested in LessWrong, you should be interested in finding out under what conditions people become less wrong.
Posts with a lot of math require me to set aside larger chunks of time to consume them. I do want to examine this but that won’t be possible until later this week, which means I don’t vote on it until then.
Thanks—good to know.
You haven’t shown that your experiment will do so. Nor have you shown that your experiment models the situation well.
What would it take to show that? It seems to me that isn’t a thing that I could “show”, even in theory, since I’ve found no existing empirical data on Aumann-agreement-type experiments in humans. If you know one, I’d appreciate a comment describing it.
I believe that one of the purposes of LessWrong is to help us gain an understanding of important epistemic issues. Proposing a new way to study the issue and potentially gain insight is therefore important.
I think that your standard implies that LessWrong is like a peer-reviewed journal: A place for people to present completed research programs; not a place for people to cooperate to find answers to difficult problems.
As I’ve said before, it’s not good to apply standards that penalize rigor. If the act of putting equations into a post means that each equation needs to be empirically validated in order to get an upvote, pretty soon nobody is going to put equations into their posts.
I’m perfectly happy to come back and vote this up after I am satisfied that it is good, and I haven’t and won’t vote it down. I think it’s a good idea to seek public comment, but the voting is supposed to indicate posts which are excellent for public consumption—this isn’t, unless it’s the technical first half of a pair of such posts. I want to know that the formalization parallels the reality, and it’s not clear that it does before it is run.
So, you don’t want to vote until you see the results; and I don’t want to waste an entire day writing up the results if few people are interested. Is there a general solution to this general problem?
(The “Part 1” in the title was supposed to indicate that it is the first part of a multi-part post.)
If you are confident in the practical value of your results, I would recommend posting. Otherwise I can’t help you.
I held off on rating the post because I just skimmed it, saw most of it was describing an algorithm/model, and decided I didn’t have time to check your working. I might not be representative: I don’t rate most posts; I’ve rated just 6 top-level posts so far this May.
Hmm—I wish I could see whether I have few upvotes, or numerous upvotes and downvotes. They’d have very different implications for what I should do differently.
I’m rather tired of the Sleeping Beauty debate and so didn’t read it. If others have had the same reaction this might explain the low score.
Thanks for answering. This isn’t a continuation of the sleeping beauty debate. Despite what you see in the comment section, which has been hijacked by sleeping beauty.
One thing I think is missing from your model is correlation between different answers, and I think that this is actually essential to the phenomenon: ignoring it makes it look like people are failing to come to agreement at all, when what’s actually happening is that they’re aligning into various ideological groups.
That is, there’s a big difference between a group of 100 people with independent answers on 10 binary questions (random fair coinflips), and two groups of 50 who disagree on each of the 10 binary questions. I think that if you compared LW newcomers with veterans, you’d find that the newcomers more resemble the first case, and veterans more the second. This would suggest that peoples’ answers are becoming more internally coherent, at least.
In particular, I expect that on this subject the veterans split roughly as follows:
Those who subscribe to Bostrom’s SIA and are Thirders (1/3 to 1⁄2 of the LW vets)
Those who subscribe to Bostrom’s SSA and are Halfers (less than 1⁄4)
Those who reject Bostromian anthropic probabilities entirely (less than 1⁄4)
One can easily predict the responses of the first two groups on subsequent questions.
I don’t build a model by looking at the observed results of a phenomenon, and building in a special component to produce each observed result. You wouldn’t learn anything from your models if you did that; they would produce what you built them to produce. I build a model by enumerating the inputs, modeling each input, and seeing how much of the observed results the output matches.
When I run the simulation, people do in fact align into different groups. So far, always 2 groups. But the alignment process doesn’t give either group better overall accuracy. This shows that you don’t need any internal coherence or problem understanding for people to align into groups. Attributing accuracy to people who tend to agree with you, and inaccuracy to those who disagree with you, produces saddle-point dynamics. Once the initial random distribution gets off the saddle point, the groups on the opposite sides each rapidly converge to their own attractor.
What’s especially interesting is that this way of judging people’s accuracy doesn’t just cause different groups to converge to different points; it causes the groups to disagree with each other on every point. There isn’t one “right” group and one “wrong” group; there are two groups that are right about different things. Their agreement within a group on some topics indirectly causes them to take the opposite opinion on any topic on which other groups have strong opinions. In other words: My enemy’s belief P is evidence against P.
(Sleeping Beauty isn’t the subject of this post.)
OK, I see what you’re doing now. It’s an interesting model, though one feature jumps out at me now:
Although this phenomenon is a well-known fallacy among human beings, it doesn’t seem like it should be the rational behavior— and then I noticed that the probabilities p_i can be less than 1⁄2 in your model, and that some of your agents are in fact reliably anti-correct. This seems like a probable cause of a binary group split, if I’m understanding correctly.
What’s the result if you make the probabilities (and accordingly, people’s estimates of the probabilities) range from 1⁄2 to 1 instead of from 0 to 1?
Then everybody converges onto agreeing on the correct answer for every question. And you just answered the question as to why Bayesians should agree to agree: Because Bayesians can’t perform worse than random on average, their accuracies range from 1⁄2 to 1, and are not biased on any problem (unless the evidence is biased, in which case you’re screwed anyway). Averaging their opinions together will thus get the right answer to every (answerable) question. Congratulations! You win 1 Internet!
(The reason for choosing 0 to 1 is explained in the post.)
The behavior in my model is rational if the results indicate that it gets the right answer. So far, it looks like it doesn’t.
You could probably get the same answer by having some problems, rather than agents, usually be answered wrong. An abundance of wrong answers makes the agents split. The agents don’t split into the correct agents and the incorrect agents, at least not for the conditions I’ve tested. There doubtless are settings that would get them to do that.
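Here’s a single-round toy check of the claim that agents with accuracies in [1⁄2, 1], pooled with the correct weights, almost always recover the right answer (this is illustrative Python, not the full multi-round simulation):

import random

def trial(G=25):
    acc = [random.uniform(0.5, 1.0) for _ in range(G)]        # p_j in [1/2, 1]
    guesses = [1 if random.random() < a else 0 for a in acc]  # first-round guesses; truth is 1
    m1 = m0 = 1.0
    for p, g in zip(acc, guesses):
        m1 *= p * g + (1 - p) * (1 - g)
        m0 *= p * (1 - g) + (1 - p) * g
    return 1 if m1 / (m0 + m1) > 0.5 else 0

print(sum(trial() for _ in range(1000)) / 1000.0)  # comes out very close to 1.0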
Does the 2-group split stay even if you continue the simulation until all answers have been revealed?
If you increase the standard deviation of p[i] so there are more very right and very wrong guessers, do they tend to split more into right and wrong groups? I expect they would.
Good question—no; revelation of answers eventually causes convergence into 1 group.
It makes the splitting happen faster.
It also didn’t get a lot of on-topic comments. Possibly because guessing the answers to your questions seems the wrong way to answer them—the correct way being to put it to the test with the program, which means rewriting it (wasteful) or waiting for you to post it.
Are you planning on posting the perl script? I’m a bit tempted to just translate what you’ve got in the post into python, but realistically I probably won’t get around to it anytime soon.
I think there’s a way to upload it to LessWrong and post a link to it. But I don’t know how. My email is at gmail.
Summarizing the results in the same post would result in a gigantic post that people wouldn’t want to read.
The code could be cleaner. Couldn’t
g_ivt·g_jvs + (1-g_ivt)·(1-g_jvs)
be
not(xor(g_ivt, g_jvs))
or
same(g_ivt, g_jvs)?
It would clean up the code a lot, and make it less of a hassle to read. I’d also prefer higher order functions to for loops, but that may just be me.
The code is written that way to accommodate the continuous case. I think people who aren’t C or assembly programmers will find the not(xor) more confusing; and people who are programmers will find the second unfamiliar.
I’m mainly saying the code is a bit opaque at the moment.
If you want to keep the continuous case, fine.
As long as you defined the same or similar function somewhere else, programmers would be fine.
Commenting the code would help people get to grips with it, if you don’t want to change it.
Good idea. Comments it is.
Re: “I had expected that people would read posts and comments by other people, and take special note of comments by people who had a prior history of being right, and thereby improve their own accuracy.”
FWIW, I think that was how I originally approached the problem. Rather than trying to solve it directly, I first looked at your response, and Robin Hanson’s response. After a little reflection, I concluded that you agreed with each other—and that you had both focused on the key issues—and got the answer right.
At that time, most of the rest of the thread was people saying the problem was ambiguous and needed a bet to clear it up—and a fair bit of confusion—with very little defense of the standard answer.
This is a really interesting topic, there are heaps of things I want to say about it. I was initially waiting to see what your results were first, to avoid spoilers with my guesses, but that’s no way to have a conversation.
First—I think there’s an error in the program: When you compute p[i][j] you take a sum then divide by N, but it looks like you should divide by the number of guesses you are adding, which can be more than N since it includes multiple rounds of guesses.
My (inconsistent) thoughts about how the model would behave:
They’d quickly learn the ratio of correct initial guesses everyone had, and make near-perfect use of that information. But they don’t distinguish between the initial guesses and later updates, so that’s not right.
Even the bad guessers will get most of their updated estimates right by the end, so their opinions will be assumed to correlate with the truth. If you then went back and posed everyone a new question, all the bad guessers could significantly mislead everyone. That’s not the procedure in your code, but you could try it.
At the start of the simulation, all the guessers are simply seeing who else agrees with them. The good guessers might be converging to a correct consensus, while the bad guessers could converge to the opposite. But as the simulation progressed and the answers were revealed, the bad guessers would lose confidence in their whole subgroup, including themselves, and follow the good guessing group.
Ideas for variants:
Make the initial guess accuracy depend on both guesser accuracy and problem difficulty/deceptiveness. I proposed a formula for this in my previous comment. In this case, the best way to update from the initial guesses would seem to be to follow the average opinion of a few of the best guessers and maybe the reverse of the worst few guessers, but I’m not sure how it would play out in the simulation where you don’t know who they are, and you have to update on each other’s updated guesses.
Make the initial guess accuracy depend on both the skill of the guesser and the difficulty of the question, but vary what weight is given to skill—some questions can be just as hard for skilled guessers as for everyone else. In this case, a way to update from an initial guess would be to look at enough of the best guessers that you’re confident which way they guess on average (you’d need to sample more if they are near 50%).
Repeat the exercise—after the first set of N answers are revealed, continue with N more questions. This time the guessers start with data about each other’s accuracy. Then after they are done, N more, etc.
Instead of everyone getting the same number of updates, let some update more often.
Instead of updating everyone and revealing one answer each round, randomly pick between updating a random person and randomly revealing a correct answer just to one person, which they will be certain of for the rest of the game. You could give different people different chances of updating from group opinions, and of getting the correct answer revealed. Since people don’t know who’s had what answers revealed they don’t stop counting them when evaluating each other’s accuracy.
The Sleeping Beauty Challenge
Maybe I’m naive, but I actually think that we can come close to consensus on the solution to this problem. This is a community of high IQ, aspiring rationalists.
I think it would be a good exercise to use what we know about rationality, evidence, biases, etc. and work this out.
I propose the following:
I will write up my best arguments in favor of the 1⁄2 solution. I’ll keep it shorter than my original post.
Someone representing the thirders will write up their best arguments in favor of the 1⁄3 solution
Before reading the others’ arguments, we will assume that they are right, and that reading it will only confirm our beliefs (this is hard to do, but I find that this approach can be helpful)
We cannot respond for at least 24 hours. (this will give us time to digest the arguments, without just reacting immediately)
We will then check to see if there is agreement
If we still disagree, we can have some discussion (say, via email) to see if progress can be made
We will post our original two arguments and conclusion here (maybe in a new post)?
What do you think?
I tried to set this up in such a way to reduce some of the known biases that prevent agreement. Am I missing something?
Possible pitfall: if we come to an agreement, people who disagree with our conclusion might say it’s because one of us was a poor representative of their viewpoint. However, I think we’d still move a step towards consensus.
What say you?
Unlike Jack, I’m pessimistic about your proposal. I’ve already changed my mind not once but twice.
The interesting aspect is that this doesn’t feel like I’m vacillating. I have gone from relying on a vague and unreliable intuition in favor of 1⁄3 qualified with “it depends”, to being moderately certain that 1⁄2 was unambiguously correct, to having worked out how I was allocating all of the probability mass in the original problem and getting back 1⁄3 as the answer that I cannot help but think is correct. That, plus the meta-observation that no-one, including people I’ve asked directly (including yourself), has a rebuttal to my construction of the table, is leaving me with a higher degree of confidence than I previously had in 1⁄3.
It now feels as if I’m justified to ignore pretty much any argument which is “merely” a verbal appeal to one intuition or the other. Either my formalization corresponds to the problem as verbally stated or it doesn’t; either my math is correct or it isn’t. “Here I stand, I can no other”—at least until someone shows me my mistake.
So I think I figured this whole thing out. Are people familiar with the type-token distinction and resulting ambiguities? If I have five copies of the book Catcher in the Rye and you ask me how many books I have there is an ambiguity. I could say one or five. One refers to the type, “Catcher in the Rye is a coming of age novel” is a sentence about the type. Five refers to the number of tokens, “I tossed Catcher in the Rye onto the bookshelf” is a sentence about the token. The distinction is ubiquitous and leads to occasional confusion, enough that the subject is at the top of my Less Wrong to-do list. The type token distinction becomes an issue whenever we introduce identical copies and the distinction dominates my views on personal identity.
In the Sleeping Beauty case, the amnesia means the experience of waking up on Monday and the experience of waking up on Tuesday, while token-distinct, are type-identical. If we decide the right thing to update on isn’t the token experience but the type experience: well the calculations are really easy. The type experience “waking up” has P=1 for heads and tails. So the prior never changes. I think there are some really good reasons for worrying about types rather than tokens in this context but won’t go into them until I make sure the above makes sense to someone.
How are you accounting for the fact that—on awakening—beauty has lost information that she previously had—namely that she no longer knows which day of the week it is?
Maybe it’s just because I haven’t thought about this in a couple of weeks but you’re going to have to clarify this. When does beauty know which day of the week it is?
Before consuming the memory-loss drugs she knows her own temporal history. After consuming the drugs, she doesn’t. She is more uncertain—because her memory has been meddled with, and important information has been deleted from it.
Information wasn’t deleted. Conditions changed and she didn’t receive enough information about the change. There is a type (with a single token) that is Beauty before the experiment and that type includes a property ‘knows what day of the week it is’, then the experiment begins and the day changes. During the experiment there is another type which is also Beauty, this type has two tokens. This type only has enough information to narrow down the date to one of two days. But she still knows what day of the week it was when the experiment began, it’s just your usual indexical shift (instead of knowing the date now she knows the date then but it is the same thing).
Her memories were DELETED. That’s the whole point of the amnesia-inducing drug.
Amnesia = memory LOSS: http://dictionary.reference.com/browse/Amnesia
Oh sure, the information contained in the memory of waking up is lost (though that information didn’t contain what day of the week it was and you said “namely that she no longer knows which day of the week it is”). I still have zero idea of what you’re trying to ask me.
If she had not ever been given the drug she would be likely to know which day of the week it was. She would know how many times she had been woken up, interviewed, etc. It is because all such information has been chemically deleted from her mind that she has the increased uncertainty that she does.
I might have some issues with that characterization but they aren’t worth going into since I still don’t know what this has to do with my discussion of the type-token ambiguity.
It is what was missing from this analysis:
“The type experience “waking up” has P=1 for heads and tails. So the prior never changes.”
Your priors are a function of your existing knowledge. If that knowledge is deleted, your priors may change.
K.
Yes, counterfactually if she hadn’t been given the drug on the second awakening she would have knowledge of the day. But she was given the drug. This meant a loss of the information and knowledge of the memory of the first awakening. But it doesn’t mean a loss of the knowledge of what day it is, she obviously never had that. It is because all her new experiences keep getting deleted that she is incapable of updating her priors (which were set prior to the beginning of the experiment). In type-theoretic terms:
If the drugs had not been administered she would not have had type experience “waking up” a second time. She would have had type experience “waking up with the memory of waking up yesterday”. If she had had that type experience then she would know what day it is.
Beauty probably knew what day it was before the experiment started. People often do know what day of the week it is.
You don’t seem to respond to my: “Your priors are a function of your existing knowledge. If that knowledge is deleted, your priors may change.”
In this case, that is exactly what happens. Had Beauty not been given the drug, her estimates of p(heads) would be: 0.5 on Monday and 0.0 on Tuesday. Since her knowledge of what day it is has been eliminated by a memory-erasing drug, her probability estimate is intermediate between those figures—reflecting her new uncertainty in the face of the chemical deletion of relevant evidence.
Yes. And throughout the experiment she knows what day it was before the experiment started. What she doesn’t know is the new day. This is the second or third time I’ve said this. What don’t you understand about an indexical shift?
The knowledge that Beauty has before the experiment is not deleted. Beauty has a single anticipated experience going into the experiment. That anticipated experience occurs. There is no new information to update on.
You don’t seem to be following what I’m saying at all.
What you said was: “it doesn’t mean a loss of the knowledge of what day it is, she obviously never had that”. Except that she did have that—before the experiment started. Maybe you meant something different—but what readers have to go on is what you say.
Beauty’s memories are deleted. The opinions of an agent can change if they gain information—or if they lose information. Beauty loses information about whether or not she has had a previous awakening and interrogation. She knew that at the start of the experiment, but not during it—so she has lost information that she previously had—it has been deleted by the amnesia-inducing drug. That’s relevant information—and it explains why her priors change.
I’m going to try this one more time.
On Sunday, before the experiment begins Beauty makes observation O1(a). She knows that O1 was made on a Sunday. She says to herself “I know what day it is now” (an indexical statement pointing to O1(a)). She also predicts the coin will flip heads with P=0.5 and predicts the next experience she has after going to sleep will be O2. Then she wakes up and makes observation O2(a). It is Monday but she doesn’t know this because it could just as easily be Tuesday since her memory of waking up on Monday will be erased. “I know what day it is now” is now false, not because knowledge was deleted but because of the indexical shift of ‘now’ which no longer refers to O1(a) but to O2(a). She still knows what day it was at O1(a), that knowledge has not been lost. Then she goes back to sleep and her memory of O2(a) is erased. But O2(a) includes no knowledge of what day it is (though, combined with other information, Beauty could have inferred what day it was, she never had that information). Beauty wakes up on Tuesday and has observation O2(b). This observation is type-identical to O2(a) and exactly what she anticipated experiencing. If her memory had not been erased she would have had observation O3—waking up along with the memory of having woken up the previous day. This would not have been an experience Beauty would have predicted with P=1 and therefore would require her to update her belief P(heads) from 0.5 to 0 as she would know it was Tuesday. But she doesn’t get to do that; she just has a token of experience O2. She still knows what day it was at O1(a), no knowledge has been lost. And she still doesn’t know what day it is ‘now’.
[For those following this, note that spatio-temporality is strictly a property of tokens (though we have a linguistic convention of letting types inherit the properties of tokens, like “the red-breasted woodpecker can be found in North America”… what that really means is that tokens of the type ‘red-breasted woodpecker’ can be found in North America). This, admittedly, might lead to confusing results that need clarification and I’m still working on that.]
I’ve been following, but I’m still nonplussed as to your use of the type-token distinction in this context. The comment of mine which was the parent for your type-token observation had a specific request: show me the specific mistake in my math, rather than appeal to a verbal presentation of a non-formal, intuitive explanation.
Take a bag with 1 red marble and 9 green marbles. There is a type “green marble” and it has 9 tokens. The experiences of drawing any particular green marble, while token-distinct, are type-identical. It seems that what matters when we compute our credence for the proposition “the next marble I draw will be green” is the tokens, not the types. When you formalize the bag problem accordingly, probability theory gives you answers that seem quite robust from a math point of view.
If you start out ignorant of how many marbles the bag has of each color, you can ask questions like “given that I just took two green marbles in a row, what is my credence in the proposition ‘the next marble I draw will be green’”. You can compute things like the expected number of green marbles left in the bag. In the bag problem, IOW, we are quantifying our uncertainty over tokens, while taking types to be a fixed feature of the situation. (Which of course is only a convention of this kind of exercise: with precise enough instruments we could distinguish all ten individual marbles.)
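For instance, here is the kind of computation I mean, as a small Python sketch, assuming (purely for concreteness) a uniform prior over how many of the ten marbles are green:

def posterior_after_two_greens(total=10):
    # Uniform prior over k = number of green marbles; likelihood of drawing
    # two greens without replacement is k(k-1) / (total(total-1)).
    prior = {k: 1.0 / (total + 1) for k in range(total + 1)}
    like = {k: (k / total) * ((k - 1) / (total - 1)) if k >= 2 else 0.0
            for k in range(total + 1)}
    z = sum(prior[k] * like[k] for k in prior)
    return {k: prior[k] * like[k] / z for k in prior}

post = posterior_after_two_greens()
expected_greens_left = sum(k * p for k, p in post.items()) - 2   # greens still in the bag
p_next_green = sum(p * (k - 2) / 8.0 for k, p in post.items())   # 8 marbles remain after two draws
print(expected_greens_left, p_next_green)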
Statements like “information is gained” or “information is lost” are vague and imprecise, with the consequence that a motivated interpretation of the problem statement will support whichever statement we happen to favor. The point of formalizing probability is precisely that we get to replace such vague statements with precisely quantifiable formalizations, which leave no wiggle room for interpretation.
If you have a formalism which shows, in that manner, why the answer to the Sleeping Beauty question is 1⁄2, I would love to see it: I have no attachment any longer to “my opinion” on the topic.
My questions to you, then, are: a) given your reasons for “worrying about types rather than tokens” in this situation, how do you formally quantify your uncertainty over various propositions, as I do in the spreadsheet I’ve linked to earlier? b) what justifies “worrying about types rather than tokens” in this situation, where every other discussion of probability “worries about tokens” in the sense I’ve outlined above in reference to the bag of marbles? c) how do you apply the type-token distinction in other problems, say, in the case of the Tuesday Boy?
My point was that I didn’t think anything was wrong with your math. If you count tokens the answer you get is 1⁄3. If you count types the answer you get is 1⁄2 (did you need more math for that?). Similarly, you can design payouts where the right choice is 1⁄3 and payouts where the right choice is 1⁄2.
This was a helpful comment for me. What we’re dealing with is actually a special case of the type-token ambiguity: the tokens are actually indistinguishable. Say I flip a coin. If tails, I put six red marbles in a bag which already contains three red marbles; if heads, I do nothing to the bag with three red marbles. I draw a marble and tell Beauty “red”. And then I ask Beauty her credence for the coin landing heads. I think that is basically isomorphic to the Sleeping Beauty problem. In the original she is woken up twice if heads, but that’s just like having more red marbles to choose from; the experiences are indistinguishable just like the marbles.
I don’t really think they are. That’s my major problem with the 1⁄3 answer. No one has ever shown me the unexpected experience Beauty must have to update from 0.5. But if you feel that way I’ll try other methods.
Off hand there is no reason to worry about types, as the possible answers to the questions “Do you have exactly two children?” and “Is one of them a boy born on a Tuesday?” are all distinguishable. But I haven’t thought really hard about that problem, maybe there is something I’m missing. My approach does suggest a reason for why the Self-Indication Assumption is wrong: the necessary features of an observer are indistinguishable. So it returns 0.5 for the Presumptuous Philosopher problem.
I’ll come back with an answer to (a). Bug me about it if I don’t. There is admittedly a problem which I haven’t worked out: I’m not sure how to relate the experience-type to the day of the week (time is a property of tokens). Basically, the type by itself doesn’t seem to tell us anything about the day (just like picking the red marble doesn’t tell us whether or not it was added after the coin flip). And maybe that’s a reason to reject my approach. I don’t know.
“No knowledge has been lost”?!?
Memories are knowledge—they are knowledge about past perceptions. They have been lost—because they have been chemically deleted by the amnesia-inducing drug. If they had not been lost, Beauty’s probability estimates would be very different at each interview—so evidently the lost information was important in influencing Beauty’s probability estimates.
That should be all you need to know to establish that the deletion of Beauty’s memories changes her priors, and thereby alters her subjective probability estimates. Beauty awakens, not knowing if she has previously been interviewed—because of her memory loss. She knew whether she had previously been interviewed at the start of the experiment—she hadn’t. So: that illustrates which memories have been deleted, and why her uncertainty has increased.
Yes. The memories have been lost (and the knowledge that accompanies them). The knowledge of what day of the week it is has not been lost because she never had this… as I’ve said four times. I’m just going to keep referring you back to my previous comments because I’ve addressed all this already.
You seem to have got stuck on this “day of the week” business :-(
The point is that Beauty has lost knowledge that she once had, and that is why her priors change. Thinking of that knowledge as “what day of the week it currently is” seems to me like a fine way of describing what information Beauty loses. However, it clearly bugs you, so try thinking about the lost knowledge another way: Beauty starts off knowing, with a high degree of certainty, whether or not she has previously been interviewed, but then she loses this information as the experiment progresses, and that is why her priors change.
This example, like the last one, is indexed to a specific time. You don’t lose knowledge about conditions at t1 just because it is now t2 and the conditions are different.
Beauty loses information about whether she has previously attended interviews because her memories of them are chemically deleted by an amnesia-inducing drug—not because it is later on.
Makes sense to me.
Cool. Now, I haven’t quite thought through all this, so it’ll be a little vague. It isn’t anywhere close to being an analytic, formalized argument. I’m just going to dump a bunch of examples that invite intuitions. Basically the notion is: all information is type, not token. Consider, to begin with, the Catcher in the Rye example. The sentence about the type was about the information contained in the book. This isn’t a coincidence. The most abundant source of types in the history of the world is pure information: not just every piece of text ever written, but every single computer program or file is a type (with its backups and copies as tokens). Our entire information-theoretic understanding of the universe involves this notion of writing the universe like a computer program (with the possibility of running multiple simulations); k-complexity is a fact about types, not tokens (of course this is confusing, since when we think of tokens we often attribute to them the features of their type, but the difference is there). Persons are types (at least in part; I think our concept of personhood confuses types and tokens). That’s why most people here think they could survive by being uploaded. When Dennett switches between his two brains, it seems like there is only one person because there is only one person-type, though two person-tokens. I forget who it was, but someone here has argued, in regard to decision theory, that when we act we should take into account all the simulations of us that may some day be run, and act for them as well. This is merely decision theory representing the fact that what matters about persons is the type.
So if agents are types, and in particular if information is types… well then experience-types are what we update on; they’re the ones that contain information. There is no information in tokens beyond their type. Right? Of course, this is just an intuition that needs to be formalized. But is the intuition clear?
I’m sorry this isn’t better formulated. The complexity justifies a top level post which I don’t have time for until next week.
Entertainingly, I feel justified in ignoring your argument and most of the others for the same reason you feel justified in ignoring other arguments.
I got into a discussion about the SB problem a month ago after Mallah mentioned it as related to the red door/blue doors problem. After a while I realized I could get either of 1⁄2 or 1⁄3 as an answer, despite my original intuition saying 1⁄2.
I confirmed both 1⁄2 and 1⁄3 were defensible by writing a computer program to count relative frequencies two different ways. Once I did that, I decided not to take seriously any claims that the answer had to be one or the other, since how could a simple argument overrule the result of both my simple arithmetic and a computer simulation?
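Here is a minimal sketch of that kind of counting program (not the code I actually ran; the names and structure are only illustrative), with heads meaning one waking and tails meaning two:

import random

def simulate(n_trials=100000):
    # Count heads two ways: per coin flip and per awakening
    heads_flips = 0
    heads_wakings = 0
    total_wakings = 0
    for _ in range(n_trials):
        heads = random.random() < 0.5
        wakings = 1 if heads else 2   # tails: Beauty is woken twice
        if heads:
            heads_flips += 1
            heads_wakings += wakings
        total_wakings += wakings
    return heads_flips / n_trials, heads_wakings / total_wakings

per_flip, per_waking = simulate()
print(per_flip)     # ~1/2: heads per coin flip
print(per_waking)   # ~1/3: heads per awakening

Both ratios come out of the same simulated facts; they just count different things.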
I was thinking about that earlier.
A higher level of understanding of an initially mysterious question should translate into knowing why people may disagree, and still insist on answers that you yourself have discarded. You explain away their disagreement as an inferential distance.
Neither of the answers you have arrived at is correct, from my perspective, and I can explain why. So I feel justified in ignoring your argument for ignoring my argument. :)
That a simulation program should compute 1⁄2 for “how many times on average the coin comes up heads per time it is flipped” is simply P(x) in my formalization. It’s a correct but entirely uninteresting answer to something other than the problem’s question.
That your program should compute 1⁄3 for “how many times on average the coin comes up heads per time Beauty is awoken” is also a correct answer, to a slightly more subtly mistaken question. If you look at the “Halfer variant” page of my spreadsheet, you will see a probability distribution that also corresponds to the same “facts” that yield the 1⁄3 answer, and yet applying the laws of probability to that distribution gives Beauty a credence of 1⁄2. The question your program computes an answer to is not the question “what is the marginal probability of x=Heads, conditioning on z=Woken”.
Whereas, from the tables representing the joint probability distribution, I think I now ought to be able to write a program which can recover either answer: the Thirder answer by inputting the “right” model or the Halfer answer by inputting the “wrong” model. In the Halfer model, we basically have to fail to sample on Heads/Tuesday. Commenting out one code line might be enough.
ETA: maybe not as simple as that, now that I have a first cut of the program written; we’d need to count awakenings on Monday twice, which makes no sense at all. It does look as if our programs are in fact computing the same thing to get 1⁄3.
Which specific formulation of the Sleeping Beauty problem did you use to work things out? Maybe we’re referring to descriptions of the problem that use different wording; I’ve yet to read a description that’s convinced me that 1⁄2 is an answer to the wrong question. Consider, for example, how the wiki describes it.
Personally, I believe that using the word ‘subjective’ doesn’t add anything here (it just sounds like a cue to think Bayesian-ishly to me, which doesn’t change the actual answer). So I read the question as asking for the probability of the coin landing tails given the experiment’s setup. As it’s asking for a probability, I see it as wholly legitimate to answer it along the lines of ‘how many times on average the coin comes up heads per X,’ where X is one of the two choices you mentioned.
If you ignore the specification that it is Beauty’s subjective probability under discussion, the problem becomes ill-defined—and multiple answers become defensible—depending on whose perspective we take.
The word ‘subjective’ before the word ‘probability’ is empty verbiage to me, so (as I see it) it doesn’t matter whether you or I have subjectivity in mind. The problem’s ill-defined either way; ‘the specification that it is Beauty’s subjective probability’ makes no difference to me.
The perspective makes a difference:
“In other words, only in a third of the cases would heads precede her awakening. So the right answer for her to give is 1⁄3. This is the correct answer from Beauty’s perspective. Yet to the experimenter the correct probability is 1⁄2.”
http://en.wikipedia.org/wiki/Sleeping_Beauty_problem
I think it’s not the change in perspective or subjective identity making a difference, but instead it’s a change in precisely which probability is being asked about. The Wikipedia page unhelpfully conflates the two changes.
It says that the experimenter must see a probability of 1⁄2 and Beauty must see a probability of 1⁄3, but that just ain’t so; there is nothing stopping Beauty from caring about the proportion of coin flips that turn out to be heads (which is 1⁄2), and there is nothing stopping the experimenter from caring about the proportion of wakings for which the coin is heads (which is 1⁄3). You can change which probability you care about without changing your subjective identity and vice versa.
Let’s say I’m Sleeping Beauty. I would interpret the question as being about my estimate of a probability (‘credence’) associated with a coin-flipping process. Having interpreted the question as being about that process, I would answer 1⁄2 - who I am would have nothing to do with the question’s correct answer, since who I am has no effect on the simple process of flipping a fair coin and I am given no new information after the coin flip about the coin’s state.
In the original problem post, Beauty is asked a specific question, though—namely:
“What is your credence now for the proposition that our coin landed heads?”
That’s fairly clearly the PROBABILITY NOW of the coin having landed heads—and not the PROPORTION that turn out AT SOME POINT IN THE FUTURE to have landed heads.
Perspective can make a difference—because different observers have different levels of knowledge about the situation. In this case, Beauty doesn’t know whether it is Tuesday or not—but she does know that if she is being asked on Tuesday, then the coin came down tails—and p(heads) is about 0.
It’s not specific enough. It only asks for Beauty’s credence of a coin landing heads—it doesn’t tell her to choose between the credence of a coin landing heads given that it is flipped and the credence of a coin landing heads given a single waking. The fact that it’s Beauty being asked does not, in and of itself, mean the question must be asking the latter probability. It is wholly reasonable for Beauty to interpret the question as being about a coin-flipping process for which the associated probability is 1⁄2.
The addition of the word ‘now’ doesn’t magically ban you from considering a probability as a limiting relative frequency.
Agree.
It’s not clear to me how this conditional can be informative from Beauty’s perspective, as she doesn’t know whether it’s Tuesday or not. The only new knowledge she gets is that she’s woken up; but she has an equal probability (i.e. 1) of getting evidence of waking up if the coin’s heads or if the coin’s tails. So Beauty has no more knowledge than she did on Sunday.
She has LESS knowledge than she had on Sunday in one critical area, because now she doesn’t know what day of the week it is. She may not have learned much, but she has definitely forgotten something, and forgetting things changes your estimates of their likelihood just as much as learning about them does.
That’s true.
I’m not as sure about this. It’s not clear to me how it changes the likelihoods if I sketch Beauty’s situation at time 1 and time 2 as
A coin will be flipped and I will be woken up on Monday, and perhaps Tuesday. It is Sunday.
I have been woken up, so a coin has been flipped. It is Monday or Tuesday but I do not know which.
as opposed to just
A coin will be flipped and I will be woken up on Monday, and perhaps Tuesday.
I have been woken up, so a coin has been flipped. It is Monday or Tuesday but I do not know which.
(Edit to clarify—the 2nd pair of statements is meant to represent roughly how I was thinking about the setup when writing my earlier comment. That is, it’s evident that I didn’t account for Beauty forgetting what day of the week it is in the way timtyler expected, but at the same time I don’t believe that made any material difference.)
I read it as “What is your credence”, which is supposed to be synonymous with “subjective probability”, which—and this is significant—I take to entail that Beauty must condition on having been woken (because she conditions on every piece of information known to her).
In other words, I take the question to be precisely “What is the probability you assign to the coin having come up heads, taking into account your uncertainty as to what day it is.”
Ahhhh, I think I understand a bit better now. Am I right in thinking that your objection is not that you disapprove of relative frequency arguments in themselves, but that you believe the wrong relative frequency/frequencies is/are being used?
Right up until your reply prompted me to write a program to check your argument, I wasn’t thinking in terms of relative frequencies at all, but in terms of probability distributions.
I haven’t learned the rules for relative frequencies yet (by which I mean thing like “(don’t) include counts of variables that have a correlation of 1 in your denominator”), so I really have no idea.
Here is my program, which by the way agrees with neq1's comment here, insofar as the “magic trick” which will recover 1⁄2 as the answer consists of commenting out the TTW line.
However, this seems perfectly nonsensical when transposed to my spreadsheet: zeroing out the TTW cell means I end up with a total probability mass of less than 1. So, I can’t accept at the moment that neq1's suggestion accords with the laws of probability; I’d need to learn what changes to make to my table and why I should make them.
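For concreteness, here is a sketch of the joint distribution as I understand the thirder layout of such a table (the cell labels, like TTW for Tails/Tuesday/Woken, are just my shorthand): conditioning on being woken gives 1⁄3, while simply zeroing the TTW cell leaves a total mass of 3⁄4 rather than a renormalized distribution.

# Thirder layout: each coin outcome has probability 1/2, split evenly over Monday/Tuesday
joint = {
    ("H", "Mon", "Woken"): 0.25,
    ("H", "Tue", "Not woken"): 0.25,
    ("T", "Mon", "Woken"): 0.25,
    ("T", "Tue", "Woken"): 0.25,   # the TTW cell
}

woken = {k: p for k, p in joint.items() if k[2] == "Woken"}
print(sum(p for k, p in woken.items() if k[0] == "H") / sum(woken.values()))  # 1/3

# Zeroing TTW without renormalizing leaves mass 0.75, not 1
print(sum(p for k, p in joint.items() if k != ("T", "Tue", "Woken")))         # 0.75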
Replying again since I’ve now looked at the spreadsheet.
Using my intuition (which says the answer is 1⁄2), I would expect P(Heads, Tuesday, Not woken) + P(Tails, Tuesday, Not woken) > 0, since I know it’s possible for Beauty to not be woken on Tuesday. But the ‘halfer “variant”’ sheet says P(H, T, N) + P(T, T, N) = 0 + 0 = 0, so that sheet’s way of getting 1⁄2 must differ from how my intuition works.
(ETA—Unless I’m misunderstanding the spreadsheet, which is always possible.)
Yeah, that “Halfer variant” was my best attempt at making sense of the 1⁄2 answer, but it’s not very convincing even to me anymore.
That program is simple enough that you can easily compute expectations of your 8 counts analytically.
Your program looks good here, your code looks a lot like mine, and I ran it and got ~1/2 for P(H) and ~1/3 for F(H|W). I’ll try and compare to your spreadsheet.
Well, perhaps because relative frequencies aren’t always probabilities?
Of course. But if I simulate the experiment more and more times, the relative frequencies converge on the probabilities.
Even in the limit not all relative frequencies are probabilities. In fact, I’m quite sure that in the limit ntails/wakings is not a probability. That’s because you don’t have independent samples of wakings.
But if there is a probability to be found (and I think there is) the corresponding relative frequency converges on it almost surely in the limit.
I don’t understand.
I tried to explain it here: http://lesswrong.com/lw/28u/conditioning_on_observers/1zy8
Basically, the two wakings on tails should be thought of as one waking. You’re just counting the same thing twice. When you include counts of variables that have a correlation of 1 in your denominator, it’s not clear what you are getting back. The thirders are using a relative frequency that doesn’t converge to a probability.
This is true if we want the ratio of tails to wakings. However...
Despite the perfect correlation between some of the variables, one can still get a probability back out—but it won’t be the probability one expects.
Maybe one day I decide I want to know the probability that a randomly selected household on my street has a TV. I print up a bunch of surveys and put them in people’s mailboxes. However, it turns out that because I am very absent-minded (and unlucky), I accidentally put two surveys in the mailboxes of people with a TV, and only one in the mailboxes of people without TVs. My neighbors, because they enjoy filling out surveys so much, dutifully fill out every survey and send them all back to me. Now the proportion of surveys that say ‘yes, I have a TV’ is not the probability I expected (the probability of a household having a TV) - but it is nonetheless a probability, just a different one (the probability of any given survey saying, ‘I have a TV’).
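With hypothetical numbers (mine, not part of the example above), the two probabilities come apart like this:

# Suppose 10 households, 5 with a TV; TV households each got two surveys by mistake
tv, no_tv = 5, 5
yes_surveys = 2 * tv
no_surveys = 1 * no_tv
print(tv / (tv + no_tv))                          # 0.5:  P(random household has a TV)
print(yes_surveys / (yes_surveys + no_surveys))   # ~0.67: P(random returned survey says "I have a TV")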
That’s a good example. There is a big difference, though (it’s subtle). With Sleeping Beauty, the question is about her probability at a waking. At a waking, there are no duplicate surveys. The duplicates occur at the end.
That is a difference, but it seems independent from the point I intended the example to make. Namely, that a relative frequency can still represent a probability even if its denominator includes duplicates—it will just be a different probability (hence why one can get 1⁄3 instead of 1⁄2 for SB).
Ok, yes, sometimes relative frequencies with duplicates can be probabilities, I agree.
Morendil,
This is strange. It sounds like you have been making progress towards settling on an answer, after discussion with others. That would suggest to me that discussion can move us towards consensus.
I like your approach a lot. It’s the first time I’ve seen the thirder argument defended with actual probability statements. Personally, I think there shouldn’t be any probability mass on ‘not woken’, but that is something worth thinking about and discussing.
One thing that I think is odd: thirders know she has nothing to update on when she is woken, because they admit she will give the same answer regardless of whether it’s heads or tails. If she really had new information that was correlated with the outcome, her credence would move towards heads when it’s heads, and towards tails when it’s tails.
Consider my cancer intuition pump example. Everyone starts out thinking there is a 50% chance they have cancer. Once woken, regardless of whether they have cancer or not, they all shift to 90%. Did they really learn anything about their disease state by being woken? If they had, those with cancer would have shifted their credence up a bit, and those without would have shifted down. That’s what updating is.
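Here is a quick sketch of the counting behind that 90% figure, on my reading of the setup (cancer patients are woken nine times, everyone else once; the code is only illustrative):

import random

def cancer_waking_frequency(n_patients=100000):
    # Fraction of awakenings at which the awakened patient actually has cancer
    cancer_wakings = 0
    total_wakings = 0
    for _ in range(n_patients):
        has_cancer = random.random() < 0.5   # everyone starts at 50%
        wakings = 9 if has_cancer else 1
        if has_cancer:
            cancer_wakings += wakings
        total_wakings += wakings
    return cancer_wakings / total_wakings

print(cancer_waking_frequency())   # ~0.9

The per-waking frequency is about 0.9 for everyone, whether or not they actually have cancer, which is the sense in which being woken tells them nothing about their own disease state.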
In your example the experimenter has learned whether you have cancer. And she reflects that knowledge in the structure of the experiment: you are woken up 9 times if you have the disease.
Set aside the amnesia effects of the drug for a moment, and consider the experimental setup as a contorted way of imparting the information to the patient. Then you’d agree that with full memory, the patient would have something to update on? As soon as the second day. So there is, normally, an information flow in this setup.
What the amnesia does is selectively impair the patient’s ability to condition on available information. It does that in a way which is clearly pathological, and results in the counter-intuitive reply to the question “conditioning on a) your having woken up and b) your inability to tell what day it is, what is your credence?” We have no everyday intuitions about the inferential consequences of amnesia.
Knowing about the amnesia, we can argue that Beauty “shouldn’t” condition on being woken up. But if she does, she’ll get that strange result. If she does have cancer, she is more likely to be woken up multiple times than once, and being woken up at all does have some evidential weight.
All this, though, being merely verbal aids as I try to wrap my head around the consequences of the math. And therefore to be taken more circumspectly than the math itself.
If she does condition on being woken up, I think she still gets 1⁄2. I hate to keep repeating arguments, but what she knows when she is woken up is that she has been woken up at least once. If you just apply Bayes rule, you get 1⁄2.
If conditioning causes her to change her probability, it should do so in such a way that makes her more accurate. But as we see in the cancer problem, people with cancer give the same answer as people without.
Yes, but then we wouldn’t be talking about her credence on an awakening. We’d be talking about her credence on first waking and second waking. We’d treat them separately. With amnesia, 2 wakings are the same as 1. It’s really just one experience.
Apply it to what terms?
I’m not sure what more I can say without starting to repeat myself, too. All I can say at this point, having formalized my reasoning as both a Python program and an analytical table giving out the full joint distribution, is “Where did I make a mistake?”
Where’s the bug in the Python code? How do I change my joint distribution?
I like the halfer variant version of your table. I still need to think about your distributions more, though. I’m not sure it makes sense to have a variable ‘woken that day’ for this problem.
Congratulations on getting to that point, I figure.
I think this kind of proposal isn’t going to work unless people understand why they disagree.
This is good.
I think it would also help if we did something to counter how attached people seem to have gotten to these positions. I’ll throw in 20 karma to anyone who changes their mind, who else will?
I’d like a variant where there is both person accuracy p[i] and problem easiness E[j], and the odds of person i getting the correct answer initially on problem j are p[i] E[j] : (1-p[i])(1-E[j])
Ideally the updating procedure for this variant wouldn’t treat everyone’s opinions as independent, but it would also be interesting to see what happens when it mistakenly does treat them as independent.
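A minimal sketch of how the first-timestep guess might look under this variant (the function and parameter names are only illustrative):

import random

def initial_guess(p_i, e_v):
    # Odds of a correct first guess are p_i*e_v : (1-p_i)*(1-e_v), so
    # convert the odds to a probability and sample a belief (the correct answer is 1)
    p_correct = p_i * e_v / (p_i * e_v + (1 - p_i) * (1 - e_v))
    return 1 if random.random() < p_correct else 0

print(initial_guess(0.8, 0.3))   # e.g. an accurate person (p=0.8) on a hard problem (E=0.3)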
The poll on the subject:
http://lesswrong.com/lw/28u/conditioning_on_observers/1ztb
...currently has 75% saying 1⁄3 and 25% saying 1⁄2. (12:4)
Collective intelligence in action?
Update 2010-06-06: the raw vote figures are now 14:3.
It’s 13:5 to make up for a case of manipulation and to include my vote.
In modeling Bayesians (not described here), I have the problem that saying “I assign this problem probability .5 of being true” really means “I have no information about this problem.”
My original model treated that p=.5 as an estimate, so that a bunch of Bayesians who all assign p=.5 to a problem end up respecting each other more, instead of ignoring their own opinions due to having no information about it themselves.
I’m reformulating it to weigh opinions according to the amount of information they claim to have. But what’s the right way to do that?
Use a log-based unit, like bits or decibels.
Yes; but then how to work that into the scheme to produce a probability?
I deleted the original comment because I realized that the equations given already give zero weight to an agent who assigns a problem a belief value of .5. That’s because it just multiplies both m0 and m1 by .5.
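A quick sketch of that cancellation, using probabilistic beliefs g in place of the binary ones (the helper and the numbers are only illustrative): an agent at g = .5 contributes a factor of .5 to both products, which the normalization then cancels.

def p1(beliefs, accuracies):
    # Normalized probability that the answer is 1, treating opinions as independent
    m1 = m0 = 1.0
    for g, p in zip(beliefs, accuracies):
        m1 *= p * g + (1 - p) * (1 - g)
        m0 *= p * (1 - g) + (1 - p) * g
    return m1 / (m0 + m1)

print(p1([0.9, 0.8], [0.7, 0.6]))            # two informative agents
print(p1([0.9, 0.8, 0.5], [0.7, 0.6, 0.9]))  # adding a g = .5 agent changes nothing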
I do wonder though if you should have some way of distinguishing someone who assigns a probability of .5 for complete ignorance, versus one who assigns a probability of .5 due to massive amounts of relevant evidence that just happens to balance out. But then, you’ll observe the ignorant fellow updating significantly more than the well-informed fellow on a piece of evidence, and can use that to determine the strength of their convictions.
I’ve thought about that. You could use a possible-worlds model, where the ignorant person allows all worlds, and the other person has a restricted set of possible worlds within which p is still .5. If updating then means restricting possible worlds, it should work out right in both cases.
This suggestion contains a classic bootstrapping problem. If only I knew I was good at statistics, then I’d be confident of analysing this problem which tells me whether or not I’m good at statistics. But since I’m not sure, I’m not sure whether this will give me the right answer.
I think I’ll stick to counting.
Comment moved.
I didn’t mean you should move your original comment. That was fine where it was. (Asking people to state their conclusion on the Sleeping Beauty problem, and their reasons.)
I think it would be most organized if their responses were daughters to my comment, so all of the conclusions could be found grouped in one location.
Just curious why the responses are female.
The default gender is (usually) male, so I like to play with this by choosing the female gender whenever I have a free choice.
Nevertheless, branches and sub-divisions of any type are typically feminine—always sisters or daughters. Perhaps the reason for this is that the sisters and daughters inherit the ability of their mothers to again divide/branch/etc and this is considered a female trait.
...I found this answer on yahoo.
Interesting and thanks. I haven’t noticed this before: for whatever reason, I’ve only seen nodes in a tree structure referred to as “parent” and “child.”
In semantics we called them daughters. Shrug.
Males divide and inherit equally well :)
I always assumed that the predominantly male engineers behind terms like motherboard / daughterboard were simply lonely.
...but, preferably, in the Sleeping Beauty post?
I’ve already stated my position there, probably too many times.
Nevertheless, it was your position I couldn’t determine (for the amount of resources I was willing to invest).
I’m interested in hearing others responses to these questions:
As for this one:
As you know, that depends on what we want to use the model for. It ignores all sorts of structure in the real world, but that could end up being a feature rather than a bug.
I want to use it to try to get a grip on what conditions must be satisfied in order that people can expect to improve their accuracy on the problems discussed on LessWrong by participating in LessWrong; and whether accuracy can go to 1, or approaches a limit.
That reminds me; I need to add a sentence about the confidence effect.