But mostly this post is to introduce people to the argument and to get people thinking about a solution.
I’m afraid I don’t understand what problem you are trying to solve here. How is what you want to accomplish different from what is done, for example, in Chapter 1 of Myerson?
Not having read that book, I couldn’t really tell you how, or even if, what I want to accomplish is different. I’m introducing people to the central arguments of Bayesian epistemology, and the right way to interpret those arguments is a matter of controversy in the field. It seems unlikely the matter is conclusively settled in this book, but if it is, then Myerson’s point needs to be promoted and someone would do well to summarize it here. There are of course many books and articles that go into the matter more deeply than I have here; if you are sufficiently familiar with the literature you may have been impressed with someone’s treatment of it even though the field has not developed a consensus on the matter. Can you explain Myerson’s?
ETA: I just found it on Google. Give me a minute.
Update: Myerson doesn’t mention the Dutch book arguments in the pages I have access to. I’ve just skimmed the chapter and I don’t see anything that would obviously provide a satisfactory interpretation of the Dutch book arguments. You’ll have to make it more explicit or give me time to get the full book and read it closely.
Myerson gives an argument justifying probability theory, Bayesian updating, and expected utility maximization based on some plausible axioms about rational decision making.
As I understand it, Dutch book arguments are another way of justifying (some of) these results, but you are seeking ways of doing that justification without assuming that a rational decision maker has to function as a bookie—being willing to bet on either side of any question (receiving a small transaction fee). Decision theoretic arguments, which instead force the decision maker to choose one side or the other (while preserving transitivity of preferences), are an alternative to Dutch book arguments, are what Myerson provides, and are what I thought you were looking for. But apparently I was wrong.
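To make the contrast concrete, here is a minimal sketch of the bookie assumption at work; the propositions, credences, and dollar stakes are illustrative choices of mine, not anything from Myerson or the post. The point is just that an agent who will buy or sell a $1 ticket on any proposition at a price equal to her credence can be made to lose for certain whenever those credences violate the probability axioms.

```python
# A minimal Dutch book sketch, assuming the standard setup: an agent's
# credence p in a proposition X is the price (in dollars) at which she
# will buy or sell a ticket paying $1 if X is true and $0 otherwise.
# The propositions and numbers below are illustrative, not from the post.

def agent_net_payoff(credences, outcomes):
    """Net payoff to an agent who buys a $1 ticket on each proposition
    at a price equal to her credence in it.

    credences: dict mapping proposition name -> credence (price paid)
    outcomes:  dict mapping proposition name -> True/False in this world
    """
    cost = sum(credences.values())
    winnings = sum(1.0 for prop, holds in outcomes.items() if holds)
    return winnings - cost

# Incoherent credences: P(A) + P(not A) = 1.2 > 1.
credences = {"A": 0.6, "not A": 0.6}

# Only two possible worlds, and exactly one ticket pays out in each.
for world in ({"A": True, "not A": False}, {"A": False, "not A": True}):
    print(world, "-> agent nets", round(agent_net_payoff(credences, world), 2))
# Both worlds print -0.2: a guaranteed loss, i.e. a Dutch book.
```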
So I repeat: I don’t understand what problem you are trying to solve here.
Again, I don’t have the book!
I realize there are many plausible ways of justifying these results, the vast majority of which I have never read and large classes of which I may not even be aware of. I was particularly interested in arguments in the Dutch book area, but I am of course interested in other ways of doing it. I’m trying to talk about the foundations of our epistemology, the most prominent of which appear to be these Dutch book arguments. I want to know if there is a good way to interpret them or revise them. If they are unsalvageable then I would like to know that. I am interested in alternative justifications, the degree to which they preserve the Dutch book argument’s structure, and the degree to which they don’t. I haven’t given a specification of the problem. I’ve picked a concept which has some problems and suggested we talk about it and work on it.
So why don’t you just explain how Myerson’s argument works.
It is essentially the same as that of Anscombe and Aumann. Since that classic paper is available online, you can go straight to the source.
But the basic idea is straightforward and has been covered by Ramsey, Savage, von Neumann, Luce and Raiffa, and many others. The central assumptions are transitivity of preferences, together with something variously called the “sure thing principle” (Savage) or the “Axiom of Independence” (von Neumann).
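For concreteness, here is one standard way those two conditions are stated for preferences over lotteries; this is my rendering rather than a quotation of any of these authors, and the theorem also needs a continuity (Archimedean) condition before it delivers representation by expected utility.

```latex
% Transitivity of preference over lotteries p, q, r:
p \succeq q \ \text{and}\ q \succeq r \;\Rightarrow\; p \succeq r .

% Independence (von Neumann--Morgenstern); Savage's sure-thing principle
% is the analogous condition in his purely subjective framework:
p \succeq q \;\Longleftrightarrow\;
\alpha p + (1-\alpha) r \;\succeq\; \alpha q + (1-\alpha) r
\quad \text{for all lotteries } r \text{ and all } \alpha \in (0,1].
```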
Thanks for the link.
So that particular method (the one in the paper you link) has, to my mind, a rather troubling flaw: it bases subjective probability on so-called physical probability. I agree with what appears to be the dominant position here that all probabilities are subjective probabilities, which makes the Anscombe and Aumann proof rather less interesting; in fact it is question-begging (though it does work as a way of getting from more certain “objective” probabilities to less certain probabilities). They say that most of the other attempts have not relied on this, so I guess I’ll have to look at some of those. I’m also not sure Anscombe and Aumann have in any way motivated agents to treat degrees of belief as probabilities: they’ve just defined such an agent, not shown that such conditions are necessary and sufficient for that agent to be considered rational (I suppose an extended discussion of those central assumptions might do the trick).
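To spell out the step I find question-begging, roughly (this is my paraphrase, not Anscombe and Aumann’s own notation): their construction calibrates subjective probability against lotteries whose chances are already given as objective.

```latex
% If the agent is indifferent between a ticket on the "horse" event E and a
% roulette lottery with stated objective chance p of the same prize, i.e.
[\$1 \text{ if } E,\ \$0 \text{ otherwise}]
\;\sim\;
[\$1 \text{ with chance } p,\ \$0 \text{ with chance } 1-p],
% then her subjective probability for E is defined by
P(E) := p ,
% which is why the construction presupposes the objective probabilities.
```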
But yes, these arguments are somewhat on topic.
Jack, you might be more interested in the paper linked to in this post.
This is not as clear as it could be in your original post. It might be helpful for others if you add an introduction that explicitly says what your aim is.