If you were asked to bet on whether it was true or not, then you should assign a probability.
Scientists often do something like that when deciding how to allocate their research funds.
But then we have to develop a quantitative formalism for both beliefs and utilities. Is it really necessary to attack both problems at once?
Human beings don’t actually seem to have utility functions; all they really have are “preferences,” i.e. a method for choosing between alternatives. But von Neumann and Morgenstern showed that under some conditions this is the same as having a utility function.
Now Scurfield is saying that human beings, even smart ones like scientists, don’t have prior probability distributions; all they really have is a database of claims and criticisms of those claims. Is there any result analogous to von Neumann-Morgenstern’s that says this is the same thing as having a prior, under some conditions?
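A toy sketch of the von Neumann-Morgenstern idea (the agent and numbers here are hypothetical, not from the discussion): anchor the worst and best outcomes at utilities 0 and 1, and an agent’s utility for anything in between can be read off as the probability at which it is indifferent between that outcome for sure and a best/worst lottery.

```python
# Standard-gamble sketch of the vNM idea (hypothetical agent, made-up numbers):
# fix u(worst) = 0 and u(best) = 1, then u(x) is the indifference probability
# between "x for sure" and the lottery (best with prob p, worst with prob 1-p).

def indifference_prob(prefers_lottery, lo: float = 0.0, hi: float = 1.0) -> float:
    """Binary-search for the p at which the agent stops preferring the lottery."""
    for _ in range(30):
        p = (lo + hi) / 2
        if prefers_lottery(p):
            hi = p  # lottery still preferred: p is above the indifference point
        else:
            lo = p
    return (lo + hi) / 2

# A hypothetical agent whose (hidden) utility for the middle outcome is 0.7,
# so it prefers the lottery exactly when p > 0.7.
agent = lambda p: p > 0.7

print(f"elicited utility: {indifference_prob(agent):.3f}")  # ~0.700
```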
Yes. The question has been addressed repeatedly by a variety of people. John Maynard Keynes may have been the first. Notable formulations since his include those of de Finetti, Savage, and Jeffrey (whose book is online).
Discovering subjective probabilities is usually done in conjunction with discovering utilities by revealed preferences, because much of the machinery (choices between alternatives, lotteries) is shared between the two problems. People like Jaynes who want a pure epistemology uncontaminated by crass utility considerations have to demand that their “test subjects” adhere to some fairly hard-to-justify consistency rules. But people like de Finetti don’t impose arbitrary consistency; instead, they prove that inconsistent probability assignments lose money to clever gamblers who construct “Dutch books”.
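To make the Dutch-book point concrete, here is a minimal sketch with made-up numbers (not from the thread): a bettor whose probabilities for an event and its complement sum to more than 1 will accept a pair of bets, each priced at what they consider fair, that together lose money no matter what happens.

```python
# Dutch book sketch: incoherent probabilities guarantee a loss on a pair of
# "fair-looking" bets. Illustrative numbers only.

def bet_payoff(stake: float, price: float, wins: bool) -> float:
    """The bettor pays price * stake up front and collects stake if the bet wins."""
    return (stake if wins else 0.0) - price * stake

# Incoherent beliefs: P(rain) + P(no rain) = 1.2 > 1.
p_rain, p_no_rain = 0.7, 0.5

for it_rains in (True, False):
    net = (bet_payoff(1.0, p_rain, wins=it_rains)
           + bet_payoff(1.0, p_no_rain, wins=not it_rains))
    print(f"rain={it_rains}: bettor's net payoff = {net:+.2f}")
# Both branches print -0.20: the bookie pockets 0.20 per unit staked regardless.
```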
I’d be interested in reading more about your views on this (unless you’re referring to Halpern’s papers on Cox’s theorem).
I’m not even familiar with Halpern’s work. The only serious criticism I have seen regarding the usual consistency rules for subjective probabilities dealt with the “sure thing rule”. I didn’t find it particularly convincing.
No, I have no trouble justifying a mathematical argument in favor of this kind of consistency. But not everyone else is all that convinced by mathematics. Their attention can be grabbed, however, by the danger of being taken to the cleaners by professional bookies armed with Dutch books.
One of these days, I will get around to producing a posting on probability, developing it from what I call the “surprisal” of a proposition: the amount, on a scale from zero to positive infinity, by which you would be surprised upon learning that the proposition is true.
Prob(X) = 2^(-Surp(X)).
Surp(coin flip yields heads) = 1 bit.
Surp(A) + Surp(B|A) = Surp(A&B).
That last formula strikes me as particularly easy to justify (surprisals are additive). Given that and the first formula, you can easily derive Bayes’ law. The middle formula simply fixes the scale for surprisals. I suppose we also need a rule that Surp(True) = 0.
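A quick numerical check of those three rules, with toy numbers of my own choosing: defining Surp(X) = -log2 Prob(X), additivity of surprisals is exactly the product rule, and writing Surp(A&B) two ways gives Bayes’ law.

```python
import math

def surp(p: float) -> float:
    """Surprisal in bits: Prob(X) = 2 ** (-Surp(X))."""
    return -math.log2(p)

# Toy numbers only, chosen to be mutually consistent.
p_a, p_b_given_a, p_b = 0.25, 0.8, 0.4
p_a_and_b = p_a * p_b_given_a  # product rule

# Scale fixing: a fair coin flip is exactly 1 bit of surprisal.
assert surp(0.5) == 1.0

# Additivity: Surp(A) + Surp(B|A) = Surp(A&B).
assert math.isclose(surp(p_a) + surp(p_b_given_a), surp(p_a_and_b))

# Bayes' law: Surp(A|B) = Surp(A) + Surp(B|A) - Surp(B),
# i.e. P(A|B) = P(A) * P(B|A) / P(B).
p_a_given_b = p_a_and_b / p_b
assert math.isclose(surp(p_a_given_b), surp(p_a) + surp(p_b_given_a) - surp(p_b))
print(f"P(A|B) = {p_a_given_b:.3f}")  # 0.500
```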
Actually “Surprisal” is a pretty standard term, I think.
Yudkowsky suggests calling it “absurdity” here.
Cool! Saves me the trouble of writing that posting. :)
Absurdity is probably a better name for the concept, except that it sounds objective, whereas the amount of surprise more obviously depends on who is being surprised.
Wild. Is there an exposition of subjective expected utility better than Wikipedia’s?
Jeffrey’s book, which I already linked, or any good text on game theory. Myerson, for example, or Luce and Raiffa.
Agents can reasonably be expected to quantify both beliefs and utilities. How the ability to do that is developed is up to the developer.
People are agents, and they are very bad at quantifying their beliefs and utilities.