Manfred, thanks for this post, and for the clarifications below.
I wonder how your approach works if the coin is potentially biased, but the bias is unknown? Let’s say it has probability p of Tails, in the relative-frequency sense that p is the long-run frequency of Tails over many tosses. (This also means that over multiple repetitions a fraction 2p / (1 + p) of Beauty awakenings are after Tails, and a fraction 1 / (1 + p) of Beauty awakenings are on Mondays.)
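To make the relative-frequency reading concrete, here is a minimal simulation sketch (the helper name and the example bias of 0.3 are just illustrative choices of mine) checking those two fractions:

```python
import random

def awakening_fractions(p, runs=200_000):
    """Simulate the experiment many times with a coin of bias p towards Tails.
    Tails gives two awakenings (Monday and Tuesday); Heads gives one (Monday)."""
    tails_awakenings = monday_awakenings = total_awakenings = 0
    for _ in range(runs):
        if random.random() < p:          # Tails
            total_awakenings += 2
            tails_awakenings += 2
            monday_awakenings += 1
        else:                            # Heads
            total_awakenings += 1
            monday_awakenings += 1
    return tails_awakenings / total_awakenings, monday_awakenings / total_awakenings

p = 0.3                                  # arbitrary example bias
frac_tails, frac_monday = awakening_fractions(p)
print(frac_tails, 2 * p / (1 + p))       # both roughly 0.46
print(frac_monday, 1 / (1 + p))          # both roughly 0.77
```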
Beauty has to estimate the parameter p before betting, which means in Bayesian terms she has to construct a subjective distribution over possible values of p.
1. Before going to sleep, what should her distribution look like? One application of the indifference principle is that she has no idea about p except that it is somewhere between 0 and 1, so her subjective distribution of p should be uniform on [0, 1].
2. When she wakes up, should she adjust her distribution of p at all, or is it still the same as at step 1?
3. Suppose she’s told that it is Monday before betting. Should she update her distribution towards lower values of p, because these would give her higher likelihood of finding out it’s Monday?
4. If the answer to 3 is “yes” then won’t that have implications for the Doomsday Argument as well? (Consider the trillion Beauty limit, where there will be a trillion awakenings if the coin fell Tails. In that case, the fraction of awakenings which are “first” awakenings—on the Monday right after the coin-toss—is about 1/(1 + 10^12 × p). Now suppose that Beauty has just discovered she’s in the first awakening… doesn’t that force a big shift in her distribution towards p close to zero?)
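For what it’s worth, here is a rough closed-form sketch of the shift question 4 is pointing at, assuming purely for illustration that a uniform prior over p is updated only in proportion to the first-awakening fraction 1/(1 + N × p); whether that is the right update is exactly what I’m asking:

```python
import math

N = 10**12   # awakenings after Tails in the trillion-Beauty variant

# Uniform prior over the bias p, updated (for illustration only) in proportion to
# the first-awakening fraction 1 / (1 + N*p).  Both integrals over [0, 1] have
# closed forms, so no numerical grid is needed:
#   normalizer        Z = integral of dp / (1 + N p)   = ln(1 + N) / N
#   unnormalized mean M = integral of p dp / (1 + N p) = (1 - ln(1 + N)/N) / N
Z = math.log1p(N) / N
M = (1 - math.log1p(N) / N) / N
print(M / Z)   # posterior mean of p: roughly 0.036, versus 0.5 under the uniform prior
```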
The way I formulated the problem, this is how it is already :) If you wanted a “known fair” coin, you’d need some information like “I watched this coin come up infinity times and it had a heads:tails ratio of 1:1.” Instead, all Beauty gets is the information “the coin has two mutually exclusive and exhaustive sides.”
This is slightly unrealistic, because in reality coins are known to be pretty fair (if the flipper cooperates) from things like physics and the physiology of flipping. But I think a known fair coin would make the problem more confusing, because it would make it more intuitive to pretend that the probability is a property of the coin, which would give you the wrong answer.
Anyhow, you’ve got it pretty much right. Uniform distribution, updated by P(result | coin’s bias), can give you a picture of a biased coin, unlike if the coin was known fair. However, if “result” is that you’re the first awakening, the update is proportional to P(Monday | coin’s bias), since being the first awakening is equivalent to saying you woke up on Monday. But notice that you always wake up on Monday, so it’s a constant, so it doesn’t change the average bias of the coin.
This is interesting, and I’d like to understand exactly how the updating goes at each step. I’m not totally sure myself, which is why I’m asking the question about what your approach implies.
Remember Beauty now has to update on two things: the bias of the coin (the fraction p of times it would fall Tails in many throws) and whether it actually fell Tails in the particular throw. So she has to maintain a subjective distribution over the pair of parameters (p, Heads|Tails).
Step 1: Assuming an “ignorant” prior (no information about p except that it is between 0 and 1) she has a distribution P[p = r & Tails] = r, P[p = r & Heads] = 1 - r for all values of r between 0 and 1. This gives P[Tails] = 1⁄2 by integration.
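A quick numeric sanity check of that prior (just discretizing r on a grid; the grid size is arbitrary):

```python
import numpy as np

r = np.linspace(0, 1, 100_001)        # grid of possible biases

joint_tails = r                       # density of (p = r, Tails)
joint_heads = 1 - r                   # density of (p = r, Heads)

print(np.trapz(joint_tails, r))                 # P[Tails] ~ 0.5
print(np.trapz(joint_tails + joint_heads, r))   # total probability ~ 1.0
```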
Step 2: On awakening, does she update her distribution of p, or of the probability of Tails given that p=r? Or does she do both?
It seems paradoxical that the mere fact of waking up would cause her to update either of these. But she has to update something to allow her to now set P[Tails] = 2⁄3. I’m not sure exactly how she should do it, so your views on that would be helpful.
One approach is to use relative frequency again. Assume the experiment is now run multiple times, but with different coins each time, and the coins are chosen from a huge pile of coins having all biases between zero and one in “equal numbers”. (I’m not sure this makes sense, partly because p is a continuous variable, and we’ll need to approximate it by a discrete variable to get the pile to have equal numbers; but mainly because the whole approach seems contrived. However, I will close my eyes and calculate!)
The fraction of awakenings after throwing a coin with bias p becomes proportional to 1 + p. So after normalization, the distribution of p on awakening should shift to (2/3)(1 + p). Then, given that a coin with bias p is thrown, the fraction of awakenings after Tails is 2p / (1 + p), so the joint distribution after awakening is P[p = r & Tails] = (4/3)r, and P[p = r & Heads] = (2/3)(1 - r), which when integrating again gives P[Tails] = 2⁄3.
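The same kind of check for the awakening update, assuming the relative-frequency weighting above, in which each bias r gets weight 1 + r, split 2r : (1 - r) between Tails- and Heads-awakenings:

```python
import numpy as np

r = np.linspace(0, 1, 100_001)

# Step 2: weight each bias r by its expected number of awakenings, 1 + r, then renormalize.
p_marginal = (1 + r) / np.trapz(1 + r, r)        # = (2/3)(1 + r)
joint_tails = p_marginal * (2 * r / (1 + r))     # = (4/3) r
joint_heads = p_marginal * ((1 - r) / (1 + r))   # = (2/3)(1 - r)

print(np.trapz(joint_tails, r))    # P[Tails] on awakening ~ 2/3
print(np.trapz(joint_heads, r))    # P[Heads] on awakening ~ 1/3
```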
Step 3: When Beauty learns it is Monday, what happens then? Well, her evidence (call it “E”) is that “I have been told that it is Monday today” (or “This awakening of Beauty is on Monday” if you want to ignore the possible complication of untruthful reports). Notice the indexical terms.
Continuing with the relative frequency approach (shut up and calculate again!) Beauty should set P[E|p = r] = 1/(1+r) since if a coin with bias r is thrown repeatedly, that becomes the fraction of all Beauty awakenings which will learn that “today is Monday”. So the evidence E should indeed shift Beauty’s distribution on p towards lower values of p (since they assign higher probability to the evidence E). However, all the shift is doing here is to reverse the previous upward shift at Step 2.
More formally, we have P[E & p = r] proportional to 1/(1 + r) × (1 + r) and the factors cancel out, so that P[E & p = r] is a constant in r. Hence P[p = r | E] is also a constant in r, and we are back to the uniform distribution over p. Filling in the distribution in the other variable, we get P[Tails | E & p = r] = r. Again look at relative frequencies: if a coin with bias r is thrown repeatedly, then among the Monday-woken Beauties, a fraction r of them will be woken after Tails. So we are back to the original joint distribution P[p = r & Tails] = r, P[p = r & Heads] = 1 - r, and again P[Tails] = 1⁄2 by integration.
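And the Monday update checked the same way: the 1/(1 + r) likelihood cancels the 1 + r weighting from Step 2, restoring the uniform distribution:

```python
import numpy as np

r = np.linspace(0, 1, 100_001)

awake_density = (2 / 3) * (1 + r)        # Step 2 marginal over the bias r
monday_lik = 1 / (1 + r)                 # P[E | p = r]: fraction of awakenings on Monday
posterior = awake_density * monday_lik
posterior /= np.trapz(posterior, r)      # normalize: back to the uniform density over r

# Among Monday awakenings with bias r, a fraction r follow Tails.
print(np.trapz(posterior * r, r))        # P[Tails | E] ~ 1/2
print(posterior.min(), posterior.max())  # both ~ 1.0: uniform again
```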
After all that work, the effect of Step 2 is very like applying an SIA shift (Bias to Tails is deemed more likely, because that results in more Beautiful experiences) and the effect of Step 3 is then like applying an SSA shift (Heads-bias is more likely, because that makes it more probable that a randomly-selected Beautiful experience is a Monday-experience). The results cancel out. Churning through the trillion-Beauty case will give the same effect, but with bigger shifts in each direction; however they still cancel out.
The application to the Doomsday Argument is that (as is usual given the application of SIA and SSA together) there is no net shift towards “Doom” (low probability of expanding, colonizing the Galaxy with a trillion trillion people and so on). This is how I think it should go.
However, as I noted in my previous comments, there is still a “Presumptuous Philosopher” effect when Beauty wakes up, and it is really hard to justify this if the relative frequencies of different coin weights don’t actually exist. You could consider for instance that Beauty has different physical theories about p: one of those theories implies that p = 1⁄2 while another implies that p = 9⁄10. (This sounds pretty implausible for a coin, but if the coin-flip is replaced by some poorly-understood randomization source like a decaying Higgs Boson, then this seems more plausible). Also, for the sake of argument, both theories imply infinite multiverses, so that there are just as many Beautiful awakenings—infinitely many—in each case.
How can Beauty justify believing the second theory more, simply because she has just woken up, when she didn’t believe it before going to sleep? That does sound really Presumptuous!
A final point is that SIA tends to cause problems when there is a possibility of an infinite multiverse, and—as I’ve posted elsewhere—it doesn’t actually counter SSA in those cases, so we are still left with the Doomsday Argument. It’s a bit like refusing to shift towards “Tails” at Step 2 (there will be infinitely many Beauty awakenings for any value of p, so why shift? SIA doesn’t tell us to), but then shifting to “Heads” after Step 3 (if there is a coin bias towards Heads then most of the Beauty-awakenings are on Monday, so SSA cares, and let’s shift). In the trillion-Beauty case, there’s a very big “Heads” shift but without the compensating “Tails” shift.
If your approach can recover the sorts of shift that happen under SIA+SSA, but without postulating either, that is a bonus, since it means we don’t have to worry about how to apply SIA in the infinite case.
So what does Bayes’ theorem tell us about the Sleeping Beauty case?
It says that P(B|AC) = P(B|C) * P(A|BC)/P(A|C). In this case C is Sleeping Beauty’s information before she wakes up, which is there for all the probabilities of course. A is the “anthropic information” of waking up and learning that what used to be “AND” things are now mutually exclusive things. B is the coin landing tails.
Bayes’ theorem actually appears to break down here, if we use the simple interpretation of P(A) as “the probability she wakes up.” Because Sleeping Beauty wakes up in all the worlds, this interpretation says P(A|C) = 1, and P(A|BC) = 1, and so learning A can’t change anything.
This is very odd, and is an interesting problem with anthropics (see Eliezer’s post “The Anthropic Trilemma”). The practical but difficult-to-justify way to fix it is to use frequencies, not probabilities—because she can have an average frequency of waking up of 2 or 3⁄2, while probabilities can’t go above 1.
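For concreteness, here is the kind of frequency-weighted calculation I have in mind, sketched for the known-fair-coin version (the weights are average awakenings per run rather than probabilities, which is exactly the dodgy step):

```python
# Frequency-weighted update on "I am awake now", for a known fair coin.
p_tails = 0.5
awakenings_if_tails = 2      # Monday and Tuesday
awakenings_if_heads = 1      # Monday only

# Instead of a probability P(A) <= 1, use the average frequency of awakenings
# per run (here 0.5 * 2 + 0.5 * 1 = 1.5, which a probability could never be).
freq_tails = p_tails * awakenings_if_tails
freq_heads = (1 - p_tails) * awakenings_if_heads

print(freq_tails / (freq_tails + freq_heads))   # 2/3 for Tails
```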
But the major lesson is that you have to be careful about applying Bayes’ rule in this sort of situation—if you use P(A) in the calculation, you’ll get this problem.
Anyhow, only some of this is a response to anything you wrote; I just felt like finishing my line of thought :P Maybe I should solve this...
Thanks… whatever the correct resolution is, violating Bayes’s Theorem seems a bit drastic!
My suspicion is that A contains indexical evidence (summarized as something like “I have just woken up as Beauty, and remember going to sleep on Sunday and the story about the coin-toss”). The indexical term likely means that P[A] is not equal to 1, though exactly what it is equal to is an interesting question.
I don’t personally have a worked-out theory about indexical probabilities, though my latest WAG is a combination of SIA and SSA, with the caveat I mentioned on infinite cases not working properly under SIA. Basically I’ll try to map it to a relative frequency problem, where all the possibilities are realised a large but finite number of times, and count P[E] as the relative frequency of observations which contain evidence E (including any indexical evidence), taking the limit where the number of observations increases to infinity. I’m not totally satisfied with that approach, but it seems to work as a calculational tool.
I may be confused, but it seems like Beauty would have to ask “Under what conditions am I told ‘It’s Monday’?” to answer question 3.
In other problems, when someone offers you information followed by a chance to make a decision, and you have access to the conditions under which they decided to offer that information, those conditions should themselves be used as information to influence your decision. Discussions of the alternative host behaviors in the Monty Hall problem make that point, and it seems likely it applies in this case as well.
If you have absolutely no idea under what circumstances they decided to offer that information, then I have no idea how you would aggregate meaning out of the information, because there appear to be a very large number of alternate theories. For instance:
1: Perhaps Beauty is connected to a random text-to-speech generator which happened to output “Smundy”, and Beauty misheard nonsensical gibberish as “It’s Monday.”
2: Or perhaps it was intentional and trying to be helpful, but actually said “Es Martes” because it assumed Beauty was a Spanish-speaking rationalist, and Beauty just heard it as “It’s Monday.” when Beauty should have processed “It’s Tuesday.”, which would cause Beauty to update the wrong way.
3: Or perhaps it always tells Beauty the day of the week, but only on the first Monday.
4: Or perhaps it always tells Beauty the day of the week, but only if Beauty flips tails.
5: Or perhaps it always tells Beauty the day of the week, but only if Beauty flips heads.
6: Or perhaps it always tells Beauty the day of the week on every day of the puzzle, but doesn’t tell Beauty whether it is the “first” Monday on Monday.
7: It didn’t tell Beauty anything directly. Beauty happened to see a calendar when it opened the door, and this appears to have been entirely unintentional.
Not all of these would cause Beauty to adjust the distribution of p in the same way. And they aren’t exhaustive, since there are far more than these 7. Some may be more likely than others, but if Beauty doesn’t have any understanding about which would be happening when, Beauty wouldn’t know which way to update p, and if Beauty did have an understanding, Beauty would presumably have to use that understanding.
I’m not sure whether this is insightful, or making it more confused than it needs to be.
OK, fair enough—I didn’t specify how she acquired that knowledge, and I wasn’t assuming a clever method. I was just considering a variant of the story (often discussed in the literature) where Beauty is always truthfully told the day of the week after choosing her betting odds, to see if she then adjusts her betting odds. (And to be explicit, in the trillion Beauty story, she’s always told truthfully whether she’s the first awakening or not, again to see if she changes her odds). Is that clearer?
Yes, I wasn’t aware “Truthfully tell on all days” was a standard assumption for receiving that information, thank you for the clarification.
It’s OK.
The usual way this applies is in the standard problem where the coin is known to be unbiased. Typically, a person arguing for the 2⁄3 case says that Beauty should shift to 1⁄2 on learning it is Monday, whereas a critic originally arguing for the 1⁄2 case says that Beauty should shift to 1⁄3 for Tails (2⁄3 for Heads) on learning it is Monday.
The difficulty is that both those answers give something very presumptuous in the trillion Beauty limit (near certainty of Tails before the shift, or near certainty of Heads after the shift).
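Here is a minimal sketch of both updates, for the standard two-awakening case and for the trillion-Beauty case, just to show where the presumptuous numbers come from; the function names and the simple way the halfer probability is split over awakenings are my own framing:

```python
from fractions import Fraction

def thirder(N):
    """Weight each awakening equally: N awakenings after Tails, 1 after Heads, fair coin."""
    tails_weight = Fraction(1, 2) * N          # average Tails-awakenings per run
    heads_weight = Fraction(1, 2) * 1          # average Heads-awakenings per run
    p_tails = tails_weight / (tails_weight + heads_weight)
    # Exactly one of the N Tails-awakenings is the Monday one.
    p_tails_given_monday = (tails_weight / N) / (tails_weight / N + heads_weight)
    return p_tails, p_tails_given_monday

def halfer(N):
    """Keep P(Tails) = 1/2 on waking and split it evenly over the N Tails-awakenings."""
    p_monday_and_tails = Fraction(1, 2) / N
    p_monday_and_heads = Fraction(1, 2)
    p_tails_given_monday = p_monday_and_tails / (p_monday_and_tails + p_monday_and_heads)
    return Fraction(1, 2), p_tails_given_monday

print(thirder(2), halfer(2))             # (2/3 -> 1/2) and (1/2 -> 1/3)
print(thirder(10**12), halfer(10**12))   # near-certain Tails before vs. near-certain Heads after
```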
Nick Bostrom has argued for a “hybrid” solution which avoids the shift, but on the face of things looks inconsistent with Bayesian updating. But the idea is that Beauty might be in a different “reference class” before and after learning the day.
See http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0011/5132/sleeping_beauty.pdf or http://www.nickbostrom.com/ (Right hand column, about halfway down the page).