I shared the link to this post on an IRC channel populated mostly by mathematically inclined CS majors. It provoked a good deal of discussion about the way frequentism/Bayesianism is generally discussed on LW. Here are a few snippets from the conversation (nicknames left out except my own; less relevant lines have been edited out):
11:03 < Person A> For fucks sake “And so at age 62, Laplace — the world’s first Bayesian — converted to frequentism, which he used for the remaining 16 years of his life.”
11:04 <@Guy B> well he believed that the results were the same
11:04 <@Guy B> counterexamples were invented only later
11:05 < Person A> Guy B: Still, I just hate the way that lesswrong talks about “bayesians” and “frequentists”
11:05 <@Guy B> Person A: oh, I misinterpreted you
11:06 < Person A> Every time yudkowsky writes “The Way of Bayes” i get a sudden urge to throw my laptop out of the window.
11:08 < Person A> Yudkowsky is a really good popular writer, but I hate the way he tries to create strange conflicts even where they don’t exist.
11:10 <@Xuenay> I guess I should point out that the article in question wasn’t written by Yudkowsky :P
11:10 <@Dude C> Xuenay: it was posted on lesswrong
11:11 <@Dude C> so obv we will talk about Yudkowski
11:13 <@Dude C> it’s just htat there is no conflict, there are just several ways to do that.
11:13 <@Dude C> several models
11:16 <@Dude C> uh, several modes
11:17 <@Dude C> or I guess several schools. w/e.
11:17 <@Entity D> it’s like this stupid philosophical conflict over two mathematically valid ways of doing statistical inference, a conflict some people seem to take all too seriously
11:17 <@Guy B> IME self-described bayesians are always going on about this “conflict”
11:17 <@Guy B> while most people just concentrate on science
11:18 <@Entity D> Guy B: exactly
11:18 <@Dude C> and use appropriate methods where they are appropriate
Summing up, the general consensus on the channel is that the whole frequentist/Bayesian conflict gets seriously and annoyingly exaggerated on LW, and that most people doing science are happy to use either methodology if that suits the task at hand. Those who really do care, and could reasonably be described as ‘frequentist’ or ‘Bayesian’, are a small minority, and LW’s habit of constantly bringing the conflict up comes across as a way for posters to feel smugly superior to “those clueless frequentists”. This consensus has persisted over an extended time, and has contributed to LW suffering from a lack of credibility in the eyes of many of the channel regulars.
Does anybody better versed in the debate have a comment?
Though I was not addressed by that, here goes anyway:
That people are happy doing whatever works doesn’t make them part Bayesian and part Frequentist in LW’s meaning any more than eating some vegetables and some meat makes one part vegetarian and part carnivore. Omnivores are not insiders among vegetarians or carnivores.
Bayesians—those who really do care, as you put it—believe something like “learning works to the extent it models Bayesian updating”. When omnistatisticians customize a set of tools for the situation, and make the result look clean and right rather than silly, and even extrapolatable and predictive, and this gets a better result than formal Bayesian analysis or any other analysis, Bayesians believe that the thing that modeled Bayesian updating happened within the statisticians’ own minds. Their models are not at all simple, because the statistician is part of the model. Consequently, any non-Bayesian model is almost by definition poorly understood.
This is my impression of the collective LW belief, that impression is of course open to further revision.
LW has contributed to the confusion tremendously by simplistically using only two terms. Just as from the vegetarian perspective, omnivores and carnivores may be lumped into a crude “meat-eater” outgroup, from the philosophical position people on LW often take “don’t know, don’t care” and “principled frequentist” are lumped together into one outgroup.
People will not respect the opinions of those they believe don’t understand the situation, and this scene has repeatedly occurred—posters on LW convince many that they do not understand people’s beliefs, so of course the analysis and lessons are poorly received.
But the content in my post isn’t by Less Wrong, it’s by McGrayne.
The history in McGrayne’s book is an excellent substantiation of just how deep, serious, and long-standing the debate between frequentism and Bayesianism really is. If they want, they can check the notes at the back of McGrayne’s book and read the original articles from people like Fisher and Jeffreys. McGrayne’s book is full of direct quotes dripping with venom for the ‘opposing’ side.
Fair point. Still, a person who hasn’t read the book can’t know whether lines such as “at age 62, Laplace — the world’s first Bayesian — converted to frequentism” are from the book or something you came up with when summarizing.
In previous discussions on the topic, I’ve seen people express the opinion that the fierce debates are somewhat of a thing of the past. I.e. yes there have been fights, but these days people are mostly over that.
I took this as a successful attempt at humor.
This is something I was told over and over again by professors, when I was applying to grad school for biostatistics and told them I was interested in doing specifically Bayesian statistics. They mistook my epistemological interest in Bayes as like… ideological alignment, I guess. This is how I learned 1. that there were fierce debates in the recent past and 2. most people in biology don’t like them or consider them productive.
I’m not sure that the debates were even THAT recent. I think your professors are worried about a common failure mode that sometimes crops up: people like to think they know the “one true way” to do statistics (or really any problem), and so they start turning every problem into a nail so that they can keep using their hammer, instead of using methodology appropriate to the problem at hand.
I see this a fair amount in data mining, where certain people ONLY use neural nets, and certain people ONLY use various GLMs and extensions and sometimes get overly-heated about it.
Thanks for the warning. I thought the only danger was ideological commitment. But—correct me if I’m wrong, or just overreaching—it sounds like if I fail, it’ll be because I develop an expertise and become motivated to defend the value of my own skill.
No, more like you’ll spend months (or more) pushing against a research problem to make it approachable via something in a Bayesian toolbox when there was a straightforward frequentist approach sitting there all along.
Because of its subject, your post in particular will obviously focus on those who care about the debate. It’s not about the practice of learning from data, it’s about the history of views on how to learn from data.
The criticism that it ignores those who utilize and do not theorize is wrongheaded. The only thing that prevents it from being an outright bizarre accusation is that LW has repeatedly ignored the mere utilizers who are outside the academic debate when they should have been discussed and addressed.
I strongly, strongly disagree. Even presenting unaltered material in a context not planned by the original author is a form of authorship. You have gone far, far beyond that by paraphrasing. You have presented an idea to a particular audience with media, you are an author, you are responsible.
If my friend asks to borrow a book to read, and I say “Which book” and he or she says “Whichever” I affect what is read and create the context in which it is read.
I literally just finished the book, and Luke’s paraphrase seems pretty apt. As presented by McGrayne, with specific quotes and punitive actions, the feud was brutal.
My problem, and likely the chatters’, is that by leading a team cheer for one audience, the larger neutral audience feels excluded. Doesn’t really matter whose words it was.
And while most of the history was very interesting, some of it felt cherry-picked or spun, adding to that feeling of team-ization.
I don’t think “neutral” is quite the right word for the audience in question. It may be the best one, but there is more to it, as it only captures the group’s view of itself, and not how others might see it.
The Bayesians (vegetarians) see the “neutrals” (omnivores) as non-understanding (animal-killers). The neutrals see themselves as partaking of the best tools (foods) there are, both Bayesian and frequentist (vegetable and animal), and think that when Bayesians call them “non-Bayesians” (animal-killers) the Bayesians are making a mistake of fact by thinking that they are frequentists (carnivores). Sometimes Bayesians even say “frequentist” when context makes it obvious they mean “non-Bayesian” (or that they are making a silly mistake, which is what the threatened “neutrals” are motivated to assume).
“Neutral” is absolutely how those in the group in question see themselves, but it is also true that Bayesians see them as heretics (murderers of Bambi, Thumper, and Lamb Chop, or what have you) without making any mistake of fact. The Bayesian theoretical criticisms should not be brushed aside on the grounds that they are out of touch with how things are done and do not understand that most people use all available tools (are omnivorous). They can be addressed by invoking the outside view against the inside view, or practice against theory, etc. (arguments in which Bayesians and frequentists are joined against neutrals), and subsequently (if the “neutrals” (omnivores) do not win outright against the Bayesians [and their frequentist allies {those favoring pure diets}] in that round) on the well-worn Bayesian (vegetarian) v. frequentist (carnivore) battlegrounds.
I think vegetarian-carnivore metaphor here doesn’t help at all :)
I found it helpful. But I’m an omnivore so I (mistakenly) think that I don’t have a dog in that fight.
This is quite possible, but there is some irony here—you have misrepresented the analogy by describing a three category grouping system by naming two of its categories, implying it is about opposites!
I think that people do this too often in general and that it is implicated in this debate’s confused character. Hence, the analogy with more than a dichotomy of oppositional groups!
Realising that it is a three-way split, not a two-way split is my latest hammer. See me use it in Is Bayesian probability individual, situational, or transcendental: a break with the usual subjective/objective bun fight.
Having said that, I find myself agreeing with kurokikaze; the vegetarian-omnivore-carnivore metaphor doesn’t help. The spilt blood (and spilt sap) distract from, and obscure, the “Three, not two” point.
In my laboratory statistics manual from college (the first edition of this book) the only statistics were frequentist, and Jaynes was considered a statistical outlier in my first year of graduate school. His results were respected, but the consensus was that he got them in spite of his unorthodox reduction method, not because of it.
In my narrow field (reflection seismology) two of the leaders explicitly addressed this question in a paper that is (surprisingly to me) little-read and seldom-referenced: To Bayes or not to Bayes. Their conclusion: they prefer their problems neat enough to not require the often-indispensable Bayes method.
It is a debate I prefer to avoid unless it is required. The direction of progress is unambiguous but it seems to me a classic example of a Kuhn paradigm shift where a bunch of old guys have to die before we can proceed amicably.
A very small minority of people hate Bayesian data reduction. A very small minority of people hate frequentist data reduction. The vast majority of people do not care very much unless the extremists are loudly debating and drowning out all other topics.
Another graduate student here—I have in general heard similar opinions from many professors through undergrad and grad school. Never disdain for Bayes, but often something along the lines of “I am not so sure about that” or “I never really grasped the concept of/need for Bayes.” The statistics books that have been required for classes used (in my opinion at the time) a slightly negative tone while discussing Bayes and ‘subjective probability.’
Eliezer argues that this difference between “many tools in the statistical toolbox” and “one mathematical law to rule them all” is actually one of the differences between the frequentist and Bayesian approaches.
Thanks, I’d forgotten that post. It sums things up pretty well, I think.
I think this is due to Yudkowsky’s focus on AI theory; an AI can’t use discretion to choose the right method unless we formalize this discretion. Bayes’ theorem is applicable to all inference problems, while frequentist methods have domains of applicability. This may seem philosophical to working statisticians—after all, Bayes’ theorem is rather inefficient for many problems, so it may still be considered inapplicable in this sense—but programming an AI to use a frequentist method without a complete understanding of its domain of applicability could be disastrous, while that problem just does not exist for Bayesianism. There is the problem of choosing a prior, but that can be dealt with by using objective priors or Solomonoff induction.
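As a concrete (if toy) illustration of what “one update rule for all inference problems” means, here is a minimal sketch in Python. The grid of three hypotheses about a coin’s bias and the observed flips are invented for illustration; the point is that the same update code handles any data, with no method-selection step.

```python
# A hedged sketch of Bayesian updating on a discrete hypothesis grid.

def bayes_update(prior, likelihoods):
    """Return the posterior given a prior and per-hypothesis likelihoods."""
    unnormalized = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Three hypotheses about P(heads), with a uniform prior.
biases = [0.25, 0.5, 0.75]
posterior = [1 / 3] * 3

# Observe H, H, T: apply the same rule once per observation.
for outcome in ["H", "H", "T"]:
    likelihoods = [b if outcome == "H" else 1 - b for b in biases]
    posterior = bayes_update(posterior, likelihoods)

print([round(p, 3) for p in posterior])  # → [0.15, 0.4, 0.45]
```

The posterior concentrates on the middle and high-bias hypotheses after two heads and one tail; feeding in more data just means running the same loop longer.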
I’m not sure what you meant by that, but as far as I can tell not explicitly using Bayesian reasoning makes AIs less functional, not unfriendly.
Yes, mostly that lesser meaning of disastrous, though an AI that almost works but has a few very wrong beliefs could be unfriendly. If I misunderstood your comment and you were actually asking for an example of a frequentist method failing, one of the simplest examples is a mistaken assumption of linearity.
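To make the linearity failure concrete, here is a minimal sketch with invented data: a least-squares line fitted to data generated by a quadratic leaves large, systematic errors, while a model of the right form does not.

```python
# Hedged illustration of a mistaken linearity assumption: fit a line
# to data whose true relation is quadratic and compare errors.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 61)
y = x**2 + rng.normal(0, 0.1, size=x.size)  # true relation is quadratic

# Fit a straight line and, for comparison, a quadratic.
line = np.polyval(np.polyfit(x, y, 1), x)
quad = np.polyval(np.polyfit(x, y, 2), x)

rmse_line = float(np.sqrt(np.mean((y - line) ** 2)))
rmse_quad = float(np.sqrt(np.mean((y - quad) ** 2)))
print(f"linear fit RMSE: {rmse_line:.2f}, quadratic fit RMSE: {rmse_quad:.2f}")
```

The linear fit’s error is dominated by the model’s wrong shape rather than by noise, which is exactly the kind of failure a method quietly assuming linearity would never report.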
“There is the problem of choosing a prior, but that can be dealt with by using objective priors or Solomonoff induction.”
Yeah, well. That of course is the core of what is dubious and disputed here. Really, Bayes’ theorem itself is hardly controversial, and talking about it this way is pointless.
There’s sort of a continuum here. A weak claim is that these priors can be an adequate model of uncertainty in many situations. Stronger and stronger claims will assert that this works in more and more situations, and the strongest claim is that these cover all forms of uncertainty in all situations. Lukeprog makes the strongest claim, by means of examples which I find rather sketchy relative to the strength of the claim.
To Kaj Sotala’s conversation, adherents of the weaker claim would be fine with the “use either methodology if that suits the task at hand” attitude. This is less acceptable to those who think priors should be broadly applicable. And it is utterly unacceptable from the perspective of the strongest claim.
For that matter, “either” is incorrect (note that in the original conversation, one participant actually talks about several schools rather than two). There is lots of work on modeling uncertainty in non-frequentist and non-Bayesian ways.
Anyone who bases decisions on a non-Bayesian model of uncertainty that is not equivalent to Bayesianism with some prior is vulnerable to Dutch books.
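For readers unfamiliar with the term, a Dutch book can be shown in a few lines. The sketch below (with invented numbers) has an agent whose prices for an event and its complement sum to more than 1; an opponent selling it both bets at those prices takes its money in every outcome.

```python
# Hedged illustration of a Dutch book against incoherent probabilities.

def bet_payoff(stake, price, event_happens):
    """Buy a unit bet on an event at the given price: pay price*stake
    up front, receive stake if the event happens."""
    return (stake if event_happens else 0.0) - price * stake

p_A, p_not_A = 0.6, 0.6  # incoherent: "P(A)" + "P(not A)" = 1.2 > 1

payoffs = {}
for A_happens in (True, False):
    payoffs[A_happens] = (bet_payoff(1.0, p_A, A_happens)
                          + bet_payoff(1.0, p_not_A, not A_happens))
print(payoffs)  # the agent loses 0.2 whether or not A happens
```

Coherent prices (summing to exactly 1) make the guaranteed loss vanish, which is the standard motivation for probabilistic beliefs; whether this extends to all decision setups is what the rest of this subthread disputes.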
It seems not. Sniffnoy’s recent thread asked the very question as to whether Savage’s axioms could really be justified by dutch book arguments.
I was thinking of the simpler case of someone who has already assigned utilities as required by the VNM axioms for the noncontroversial case of gambling with probabilities that are relative frequencies, but refuses on philosophical grounds to apply the expected utility decision procedure to other kinds of uncertainty.
(I do think the statement still stands in general. I don’t have a complete proof but Savage’s axioms get most of the way there.)
On the thread cited I gave a three-state, two-outcome counterexample to P2 which does just that. With only two outcomes, a utility function is obviously not an issue. (It can be extended with an arbitrary number of “fair coins”, for example, to satisfy P6, which covers your actual frequency requirement here.)
My weak claim is that it is not vulnerable to “Dutch-book-type” arguments. My strong claim is that this behaviour is reasonable, even rational. The strong claim is being disputed on that thread. And of course we haven’t agreed on any prior definition of reasonable or rational. But nobody has attempted to Dutch book me, and the weak claim is all that is needed to contradict your claim here.
Sorry, I didn’t check that thread for posts by you. I replied there.
“It is, I think, particularly in periods of acknowledged crisis that scientists have turned to philosophical analysis as a device for unlocking the riddles of their field. Scientists have not generally needed or wanted to be philosophers.”
--Thomas Kuhn, The Structure of Scientific Revolutions