Probability is Subjectively Objective
Followup to: Probability is in the Mind

“Reality is that which, when you stop believing in it, doesn’t go away.” —Philip K. Dick
There are two kinds of Bayesians, allegedly. Subjective Bayesians believe that “probabilities” are degrees of uncertainty existing in our minds; if you are uncertain about a phenomenon, that is a fact about your state of mind, not a property of the phenomenon itself; probability theory constrains the logical coherence of uncertain beliefs. Then there are objective Bayesians, who… I’m not quite sure what it means to be an “objective Bayesian”; there are multiple definitions out there. As best I can tell, an “objective Bayesian” is anyone who uses Bayesian methods and isn’t a subjective Bayesian.
If I recall correctly, E. T. Jaynes, master of the art, once described himself as a subjective-objective Bayesian. Jaynes certainly believed very firmly that probability was in the mind; Jaynes was the one who coined the term Mind Projection Fallacy. But Jaynes also didn’t think that this implied a license to make up whatever priors you liked. There was only one correct prior distribution to use, given your state of partial information at the start of the problem.
How can something be in the mind, yet still be objective?
It appears to me that a good deal of philosophical maturity consists in being able to keep separate track of nearby concepts, without mixing them up.
For example, to understand evolutionary psychology, you have to keep separate track of the psychological purpose of an act, and the evolutionary pseudo-purposes of the adaptations that execute as the psychology; this is a common failure of newcomers to evolutionary psychology, who read, misunderstand, and thereafter say, “You think you love your children, but you’re just trying to maximize your fitness!”
What is it, exactly, that the terms “subjective” and “objective” mean? Let’s say that I hand you a sock. Is it a subjective or an objective sock? You believe that 2 + 3 = 5. Is your belief subjective or objective? What about two plus three actually equaling five—is that subjective or objective? What about a specific act of adding two apples and three apples and getting five apples?
I don’t intend to confuse you in shrouds of words; but I do mean to point out that, while you may feel that you know very well what is “subjective” or “objective”, you might find that you have a bit of trouble saying out loud what those words mean.
Suppose there’s a calculator that computes “2 + 3 = 5”. We punch in “2”, then “+”, then “3”, and lo and behold, we see “5” flash on the screen. We accept this as evidence that 2 + 3 = 5, but we wouldn’t say that the calculator’s physical output defines the answer to the question “2 + 3 = ?”. A cosmic ray could strike a transistor, which might give us misleading evidence and cause us to believe that 2 + 3 = 6, but it wouldn’t affect the actual sum of 2 + 3.
This proposition is common-sensically true, but philosophically interesting: while we can easily point to the physical location of a symbol on a calculator screen, or observe the result of putting two apples on a table followed by another three apples, it is rather harder to track down the whereabouts of 2 + 3 = 5. (Did you look in the garage?)
But let us leave aside the question of where the fact 2 + 3 = 5 is located—in the universe, or somewhere else—and consider the assertion that the proposition is “objective”. If a cosmic ray strikes a calculator and makes it output “6” in response to the query “2 + 3 = ?”, and you add two apples to a table followed by three apples, then you’ll still see five apples on the table. If you do the calculation in your own head, expending the necessary computing power—we assume that 2 + 3 is a very difficult sum to compute, so that the answer is not immediately obvious to you—then you’ll get the answer “5”. So the cosmic ray strike didn’t change anything.
And similarly—exactly similarly—what if a cosmic ray strikes a neuron inside your brain, causing you to compute “2 + 3 = 7”? Then, adding two apples to three apples, you will expect to see seven apples, but instead you will be surprised to see five apples.
If instead we found that no one was ever mistaken about addition problems, and that, moreover, you could change the answer by an act of will, then we might be tempted to call addition “subjective” rather than “objective”. I am not saying that this is everything people mean by “subjective” and “objective”, just pointing to one aspect of the concept. One might summarize this aspect thus: “If you can change something by thinking differently, it’s subjective; if you can’t change it by anything you do strictly inside your head, it’s objective.”
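To make the level distinction concrete, here is a minimal Python sketch (all names hypothetical, not anything from the original argument): corrupting one particular calculator’s computation changes what that calculator displays, but not what apples do on tables.

```python
def faulty_calculator(a, b, cosmic_ray=False):
    """One particular physical adder; a cosmic ray can corrupt its output."""
    result = a + b
    if cosmic_ray:
        result ^= 0b010  # flip one bit of the output register: 5 becomes 7
    return result

def apples_on_table(a, b):
    """The referent: what actually happens when apples are put together."""
    return len(["apple"] * a + ["apple"] * b)

print(faulty_calculator(2, 3))                   # 5
print(faulty_calculator(2, 3, cosmic_ray=True))  # 7: the quotation changed
print(apples_on_table(2, 3))                     # 5: the referent did not
```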
Mind is not magic. Every act of reasoning that we human beings carry out, is computed within some particular human brain. But not every computation is about the state of a human brain. Not every thought that you think is about something that can be changed by thinking. Herein lies the opportunity for confusion-of-levels. The quotation is not the referent. If you are going to consider thoughts as referential at all—if not, I’d like you to explain the mysterious correlation between my thought “2 + 3 = 5” and the observed behavior of apples on tables—then, while the quoted thoughts will always change with thoughts, the referents may or may not be entities that change with changing human thoughts.
The calculator computes “What is 2 + 3?”, not “What does this calculator compute as the result of 2 + 3?” The answer to the former question is 5, but if the calculator were to ask the latter question instead, the result could self-consistently be anything at all! If the calculator returned 42, then indeed, “What does this calculator compute as the result of 2 + 3?” would in fact be 42.
So just because a computation takes place inside your brain, does not mean that the computation explicitly mentions your brain, that it has your brain as a referent, any more than the calculator mentions the calculator. The calculator does not attempt to contain a representation of itself, only of numbers.
Indeed, in the most straightforward implementation, the calculator that asks “What does this calculator compute as the answer to the query 2 + 3 = ?” will never return a result, just simulate itself simulating itself until it runs out of memory.
But if you punch the keys “2”, “+”, and “3”, and the calculator proceeds to compute “What do I output when someone punches ‘2 + 3’?”, the resulting computation does have one interesting characteristic: the referent of the computation is highly subjective, since it depends on the computation, and can be made to be anything just by changing the computation.
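A sketch of both behaviors, in Python with hypothetical names: the straightforwardly self-simulating calculator never returns, while the question “what do I output?” is satisfied by any answer whatsoever.

```python
def what_is(a, b):
    """Asks about the referent: what is a + b? (Answer: 5.)"""
    return a + b

def what_do_i_compute(a, b):
    """The naive self-simulating implementation: it answers 'what does
    this calculator compute?' by simulating itself simulating itself,
    and never returns a result."""
    return what_do_i_compute(a, b)  # raises RecursionError eventually

def what_do_i_output(answer=42):
    """If the question is 'what do I output?', any hard-coded answer is
    self-consistently correct: return 42, and 42 is indeed the output."""
    return answer

print(what_is(2, 3))       # 5
print(what_do_i_output())  # 42, and that is true of itself
# what_do_i_compute(2, 3)  # uncomment to watch the infinite regress
```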
Is probability, then, subjective or objective?
Well, probability is computed within human brains or other calculators. A probability is a state of partial information that is possessed by you; if you flip a coin and press it to your arm, the coin is showing heads or tails, but you assign the probability 1⁄2 until you reveal it. A friend, who got a tiny but not fully informative peek, might assign a probability of 0.6.
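As a sketch of how the coin scenario is coherent, here is Bayes’ theorem with made-up likelihoods for the friend’s peek, chosen so that the posterior comes out to 0.6:

```python
def posterior_heads(prior_heads, p_obs_given_heads, p_obs_given_tails):
    """Bayes' theorem: update P(heads) on an observation."""
    joint_heads = prior_heads * p_obs_given_heads
    joint_tails = (1 - prior_heads) * p_obs_given_tails
    return joint_heads / (joint_heads + joint_tails)

# You saw nothing informative: your partial information stays at 1/2.
print(posterior_heads(0.5, 1.0, 1.0))  # 0.5

# The friend's peek "looked heads-ish": slightly more likely if the coin
# is in fact heads (hypothetical likelihoods 0.6 vs. 0.4).
print(posterior_heads(0.5, 0.6, 0.4))  # 0.6
```

Same coin, same moment, two different probabilities; each assignment is correct given its own state of partial information.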
So can you make the probability of winning the lottery be anything you like?
Forget about many-worlds for the moment—you should almost always be able to forget about many-worlds—and pretend that you’re living in a single Small World where the lottery has only a single outcome. You will nonetheless have a need to call upon probability. Or if you prefer, we can discuss the ten trillionth decimal digit of pi, which I believe is not yet known. (If you are foolish enough to refuse to assign a probability distribution to this entity, you might pass up an excellent bet, like betting $1 to win $1000 that the digit is not 4.) Your uncertainty is a state of your mind, of partial information that you possess. Someone else might have different information, complete or partial. And the entity itself will only ever take on a single value.
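Here is the arithmetic of that parenthetical bet spelled out, assuming nothing beyond a uniform 1/10 over the unknown digit:

```python
p_digit_is_4 = 1 / 10    # uniform ignorance over the ten possible digits
stake, prize = 1, 1000   # bet $1 to win $1000 that the digit is NOT 4

expected_value = (1 - p_digit_is_4) * prize - p_digit_is_4 * stake
print(expected_value)    # 899.9 dollars: an excellent bet to accept
```

Refuse to assign any distribution at all, and you have no way of seeing that this bet is excellent.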
So can you make the probability of winning the lottery, or the probability of the ten trillionth decimal digit of pi equaling 4, be anything you like?
You might be tempted to reply: “Well, since I currently think the probability of winning the lottery is one in a hundred million, then obviously, I will currently expect that assigning any other probability than this to the lottery, will decrease my expected log-score—or if you prefer a decision-theoretic formulation, I will expect this modification to myself to decrease expected utility. So, obviously, I will not choose to modify my probability distribution. It wouldn’t be reflectively coherent.”
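That reply can be checked numerically: under a fixed belief p, the expected log-score of reporting q is p log q + (1 − p) log(1 − q), which is maximized at q = p (Gibbs’ inequality). A quick sketch:

```python
import math

def expected_log_score(p_believed, q_reported):
    """Expected log-score of reporting q while believing p:
    p*log(q) + (1-p)*log(1-q), maximized at q = p."""
    p, q = p_believed, q_reported
    return p * math.log(q) + (1 - p) * math.log(1 - q)

p = 1e-8  # your actual degree of belief in winning the lottery
for q in (1e-9, 1e-8, 1e-4, 0.9):
    print(f"report q = {q:.0e}: expected score = {expected_log_score(p, q):.4g}")
# The expectation peaks at q = 1e-8; every other report scores worse
# by your own current lights, so you won't choose to self-modify.
```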
So reflective coherency is the goal, is it? Too bad you weren’t born with a prior that assigned probability 0.9 to winning the lottery! Then, by exactly the same line of argument, you wouldn’t want to assign any probability except 0.9 to winning the lottery. And you would still be reflectively coherent. And you would have a 90% probability of winning millions of dollars! Hooray!
“No, then I would think I had a 90% probability of winning the lottery, but actually, the probability would only be one in a hundred million.”
Well, of course you would be expected to say that. And if you’d been born with a prior that assigned 90% probability to your winning the lottery, you’d consider an alleged probability of 10^-8, and say, “No, then I would think I had almost no probability of winning the lottery, but actually, the probability would be 0.9.”
“Yeah? Then just modify your probability distribution, and buy a lottery ticket, and then wait and see what happens.”
What happens? Either the ticket will win, or it won’t. That’s what will happen. We won’t get to see that some particular probability was, in fact, the exactly right probability to assign.
“Perform the experiment a hundred times, and—”
Okay, let’s talk about the ten trillionth digit of pi, then. Single-shot problem, no “long run” you can measure.
Probability is subjectively objective: Probability exists in your mind: if you’re ignorant of a phenomenon, that’s an attribute of you, not an attribute of the phenomenon. Yet it will seem to you that you can’t change probabilities by wishing.
You could make yourself compute something else, perhaps, rather than probability. You could compute “What do I say is the probability?” (answer: anything you say) or “What do I wish were the probability?” (answer: whatever you wish) but these things are not the probability, which is subjectively objective.
The thing about subjectively objective quantities is that they really do seem objective to you. You don’t look them over and say, “Oh, well, of course I don’t want to modify my own probability estimate, because no one can just modify their probability estimate; but if I’d been born with a different prior I’d be saying something different, and I wouldn’t want to modify that either; and so none of us is superior to anyone else.” That’s the way a subjectively subjective quantity would seem.
No, it will seem to you that, if the lottery sells a hundred million tickets, and you don’t get a peek at the results, then the probability of a ticket winning, is one in a hundred million. And that you could be born with different priors but that wouldn’t give you any better odds. And if there’s someone next to you saying the same thing about their 90% probability estimate, you’ll just shrug and say, “Good luck with that.” You won’t expect them to win.
Probability is subjectively really objective, not just subjectively sort of objective.
Jaynes used to recommend that no one ever write out an unconditional probability: That you never, ever write simply P(A), but always write P(A|I), where I is your prior information. I’ll use Q instead of I, for ease of reading, but Jaynes used I. Similarly, one would not write P(A|B) for the posterior probability of A given that we learn B, but rather P(A|B,Q), the probability of A given that we learn B and had background information Q.
This is good advice in a purely pragmatic sense, when you see how many false “paradoxes” are generated by accidentally using different prior information in different places.
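A small illustration of the pragmatic point, with invented numbers: writing the background information explicitly makes it impossible to condition on Q1 in one step of an argument and Q2 in another without noticing.

```python
def posterior(prior_A, p_B_given_A, p_B_given_not_A):
    """P(A|B,Q): every number here is conditional on whatever
    background information Q fixed these three inputs."""
    joint = prior_A * p_B_given_A
    return joint / (joint + (1 - prior_A) * p_B_given_not_A)

# Two background states of information assign different priors to A:
p_A_given_B_Q1 = posterior(0.01, 0.9, 0.1)  # P(A|B,Q1)
p_A_given_B_Q2 = posterior(0.50, 0.9, 0.1)  # P(A|B,Q2)
print(round(p_A_given_B_Q1, 3))  # 0.083
print(round(p_A_given_B_Q2, 3))  # 0.9
# Call both of these "P(A|B)" and you have manufactured a paradox;
# keep the Q explicit and there is nothing to reconcile.
```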
But it also makes a deep philosophical point, which I never saw Jaynes spell out explicitly, though I think he would have approved: there is no such thing as a probability that isn’t in any mind. Any mind that takes in evidence and outputs probability estimates of the next event, remember, can be viewed as a prior—so there is no probability without priors/minds.
You can’t unwind the Q. You can’t ask “What is the unconditional probability of our background information being true, P(Q)?” To make that estimate, you would still need some kind of prior. No way to unwind back to an ideal ghost of perfect emptiness...
You might argue that you and the lottery-ticket buyer do not really have a disagreement about probability. You say that the probability of the ticket winning the lottery is one in a hundred million given your prior, P(W|Q1) = 10^-8. The other fellow says the probability of the ticket winning given his prior is P(W|Q2) = 0.9. Every time you say “The probability of X is Y”, you really mean, “P(X|Q1) = Y”. And when he says, “No, the probability of X is Z”, he really means, “P(X|Q2) = Z”.
Now you might, if you traced out his mathematical calculations, agree that, indeed, the conditional probability of the ticket winning, given his weird prior is 0.9. But you wouldn’t agree that “the probability of the ticket winning” is 0.9. Just as he wouldn’t agree that “the probability of the ticket winning” is 10^-8.
Even if the two of you refer to different mathematical calculations when you say the word “probability”, you don’t think that puts you on equal ground, neither of you being better than the other. And neither does he, of course.
So you see that, subjectively, probability really does feel objective—even after you have subjectively taken all apparent subjectivity into account.
And this is not mistaken, because, by golly, the probability of winning the lottery really is 10^-8, not 0.9. It’s not as if you’re doing your probability calculation wrong, after all. If you weren’t worried about being fair or about justifying yourself to philosophers, if you only wanted to get the correct answer, your betting odds would be 10^-8.
Somewhere out in mind design space, there’s a mind with any possible prior; but that doesn’t mean that you’ll say, “All priors are created equal.”
When you judge those alternate minds, you’ll do so using your own mind—your own beliefs about the universe—your own posterior that came out of your own prior, your own posterior probability assignments P(X|A,B,C,...,Q1). But there’s nothing wrong with that. It’s not like you could judge using something other than yourself. It’s not like you could have a probability assignment without any prior, a degree of uncertainty that isn’t in any mind.
And so, when all that is said and done, it still seems like the probability of winning the lottery really is 10^-8, not 0.9. No matter what other minds in design space say differently.
Which shouldn’t be surprising. When you compute probabilities, you’re thinking about lottery balls, not thinking about brains or mind designs or other people with different priors. Your probability computation makes no mention of that, any more than it explicitly represents itself. Your goal, after all, is to win, not to be fair. So of course probability will seem to be independent of what other minds might think of it.
Okay, but… you still can’t win the lottery by assigning a higher probability to winning.
If you like, we could regard probability as an idealized computation, just like 2 + 2 = 4 seems to be independent of any particular error-prone calculator that computes it; and you could regard your mind as trying to approximate this ideal computation. In which case, it is good that your mind does not mention people’s opinions, and only thinks of the lottery balls; the ideal computation makes no mention of people’s opinions, and we are trying to reflect this ideal as accurately as possible...
But what you calculate as the “ideal calculation” to plug into your betting odds will depend on your prior, even though the calculation won’t have an explicit dependency on “your prior”. Someone who thought the universe was anti-Occamian would advocate an anti-Occamian calculation, regardless of whether or not anyone thought the universe was anti-Occamian.
Your calculations get checked against reality, in a probabilistic way; you either win the lottery or not. But interpreting these results is done with your prior; once again there is no probability that isn’t in any mind.
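A toy illustration of this, with hypothetical complexity numbers: two “ideal calculations” that never mention anyone’s prior in their own bodies, yet differ exactly because of the priors behind them.

```python
# Hypothetical description lengths, in bits, for three rival hypotheses.
complexities = {"H1": 2, "H2": 5, "H3": 10}

def normalize(weights):
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

occamian      = normalize({h: 2.0 ** -c for h, c in complexities.items()})
anti_occamian = normalize({h: 2.0 ** +c for h, c in complexities.items()})

print(occamian)       # H1 dominates: simplicity weighted up
print(anti_occamian)  # H3 dominates: the mirror-image "ideal calculation"
# Neither function mentions "my prior" anywhere in its body, yet which
# one you endorse as ideal is determined by the prior you reason with.
```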
I am not trying to argue that you can win the lottery by wishing, of course. Rather, I am trying to inculcate the ability to distinguish between levels.
When you think about the ontological nature of probability, and perform reductionism on it—when you try to explain how “probability” fits into a universe in which states of mind do not exist fundamentally—then you find that probability is computed within a brain; and you find that other possible minds could perform mostly-analogous operations with different priors and arrive at different answers.
But, when you consider probability as probability, think about the referent instead of the thought process—which thinking you will do in your own thoughts, which are physical processes—then you will conclude that the vast majority of possible priors are probably wrong. (You will also be able to conceive of priors which are, in fact, better than yours, because they assign more probability to the actual outcome; you just won’t know in advance which alternative prior is the truly better one.)
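The parenthetical claim can be illustrated by scoring priors after the fact, with an invented outcome:

```python
import math

actual_digit = 4  # invented outcome, purely for illustration

uniform  = [0.10] * 10                                    # your prior
favors_4 = [0.91 if d == 4 else 0.01 for d in range(10)]  # sums to 1.0
avoids_4 = [0.01 if d == 4 else 0.11 for d in range(10)]  # sums to 1.0

for name, prior in [("uniform ", uniform), ("favors 4", favors_4),
                    ("avoids 4", avoids_4)]:
    print(name, round(math.log(prior[actual_digit]), 3))
# "favors 4" scores best in hindsight, because it put more probability
# on the actual outcome; nothing marked it as the better prior in advance.
```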
If you again swap your goggles to think about how probability is implemented in the brain, the seeming objectivity of probability is the way the probability algorithm feels from inside; so it’s no mystery that, considering probability as probability, you feel that it’s not subject to your whims. That’s just what the probability-computation would be expected to say, since the computation doesn’t represent any dependency on your whims.
But when you swap out those goggles and go back to thinking about probabilities, then, by golly, your algorithm seems to be right in computing that probability is not subject to your whims. You can’t win the lottery just by changing your beliefs about it. And if that is the way you would be expected to feel, then so what? The feeling has been explained, not explained away; it is not a mere feeling. Just because a calculation is implemented in your brain, doesn’t mean it’s wrong, after all.
Your “probability that the ten trillionth decimal digit of pi is 4”, is an attribute of yourself, and exists in your mind; the real digit is either 4 or not. And if you could change your belief about the probability by editing your brain, you wouldn’t expect that to change the probability.
Therefore I say of probability that it is “subjectively objective”.