I’m not arguing with the math; I’m arguing with how the philosophy is often applied. Consider the condition where my prior is greater than my evidence for all the choices I’ve looked at and the number of possibilities is unknown, but I still need to make a decision about the problem. As the paper I was originally referencing mentioned, what if all options are false?
You are not arguing, you’re just being incoherent. For example,
...that sentence does not make any sense.
Then the option “something else” is true.
But you can’t pick something else; you have to make a decision.
What does “have to make a decision” mean when “all options are false”?
Are you thinking about the situation when you have, say, 10 alternatives with probabilities of 10% each except for two, one at 11% and one at 9%? None of them are “true” or “false”, you don’t know that. What you probably mean is that even the best option, the 11% alternative, is more likely to be false than true. Yes, but so what? If you have to pick one, you pick the RELATIVE best, and if its probability doesn’t cross the 50% threshold, well, them’s the breaks.
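A toy sketch of that situation (Python; the option names are invented for illustration):

```python
# Ten alternatives: one at 11%, one at 9%, the other eight at 10% each.
probs = {f"option_{i}": 0.10 for i in range(10)}
probs["option_0"] = 0.11
probs["option_1"] = 0.09

# Picking the RELATIVE best: the argmax over the probabilities.
best = max(probs, key=probs.get)
print(best)               # option_0
print(probs[best] > 0.5)  # False: even the best option is more likely false than true
```

Nothing here says option_0 is probably true; it is just the least unlikely of the ten.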
Yes, that is exactly what I’m getting at. It doesn’t seem reasonable to say you’ve confirmed the 11% alternative. But then there’s another problem: what if you have to make this decision multiple times? Do you throw out the other alternatives and focus only on the 11%? That would lead to status quo bias. So you have to keep the other alternatives in mind, but what do you do with them? Would you then say you’ve confirmed those other alternatives? This is where the necessity of something like falsification comes into play. You’ve got to continue analyzing multiple options as new evidence comes in, but trying to analyze all the alternatives is too difficult, so you need a way to throw out certain alternatives, even though you never actually confirm any of them. These problems come up all the time in day-to-day decision making, such as deciding what’s for dinner tonight.
It doesn’t seem reasonable to say you’ve confirmed the 11% alternative.
In the context of Bayesian confirmation theory, it’s not you who “confirms” the hypothesis. It’s the evidence which confirms some hypothesis, and that happens at the prior → posterior stage. Once you’re dealing with posteriors, all the confirmation has already been done.
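A minimal sketch of that prior → posterior stage, with invented numbers: the evidence E confirms H in the Bayesian sense (it raises H’s probability) even though the posterior stays below 50%.

```python
prior_H = 0.10           # P(H): prior probability of the hypothesis
p_E_given_H = 0.80       # P(E | H): likelihood of the evidence if H is true
p_E_given_not_H = 0.20   # P(E | not H)

# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
p_E = p_E_given_H * prior_H + p_E_given_not_H * (1 - prior_H)
posterior_H = p_E_given_H * prior_H / p_E

print(round(posterior_H, 3))  # 0.308
print(posterior_H > prior_H)  # True: the evidence has confirmed H
print(posterior_H > 0.5)      # False: H is still more likely false than true
```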
what if you have to make this decision multiple times?
Do you get any evidence to update your posteriors? Is there any benefit to picking different alternatives? If no and no, then sure, you repeat your decision.
That would lead to status quo bias.
No, it would not. That’s not what the status quo bias is.
You keep on using words without understanding their meaning. This is a really bad habit.
When I say throw out I’m talking about halting tests, not changing the decision.
If your problem is which tests to run, then you’re in the experimental design world. Crudely speaking, you want to rank your available tests by how much information they will give you and then do those which have high expected information and discard those which have low expected information.
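Crudely sketched, under a hypothetical setup where each candidate test is a yes/no test and its value is the expected reduction in entropy over the ten alternatives from earlier:

```python
import math

def entropy(dist):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def expected_info_gain(prior, p_yes_given_h):
    """Expected entropy reduction from one binary test.
    prior[i] = P(h_i); p_yes_given_h[i] = P(test says yes | h_i)."""
    p_yes = sum(p * l for p, l in zip(prior, p_yes_given_h))
    p_no = 1.0 - p_yes
    post_yes = [p * l / p_yes for p, l in zip(prior, p_yes_given_h)]
    post_no = [p * (1 - l) / p_no for p, l in zip(prior, p_yes_given_h)]
    return entropy(prior) - (p_yes * entropy(post_yes) + p_no * entropy(post_no))

prior = [0.11, 0.09] + [0.10] * 8

sharp_test = [0.9] * 5 + [0.1] * 5  # splits the alternatives into two clear groups
dull_test = [0.5] * 10              # its answer is independent of the hypothesis

print(expected_info_gain(prior, sharp_test) > expected_info_gain(prior, dull_test))  # True
```

You would run sharp_test and discard dull_test, whose expected information gain is essentially zero.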
True.
All you have to do is not simultaneously use “confirm” to mean both “increase the probability of” and “assign high probability to”.
As for throwing out unlikely possibilities to save on computation: that (or some other shortcut) is sometimes necessary, but it’s an entirely separate matter from Bayesian confirmation theory or indeed Popperian falsificationism. (Popper just says to rule things out when you’ve disproved them. In your example, you have a bunch of things near to 10%, and Popper gives you no licence to throw any of them out.)
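To make the contrast concrete, a toy sketch (the posteriors are invented; “D” stands for an option the evidence has flatly ruled out):

```python
posteriors = {"A": 0.11, "B": 0.09, "C": 0.10, "D": 0.00}

# Computational shortcut: prune anything below an arbitrary cutoff.
pruned = {h: p for h, p in posteriors.items() if p >= 0.10}

# Popperian falsification: discard only what the evidence has driven to zero.
unfalsified = {h: p for h, p in posteriors.items() if p > 0}

print(sorted(pruned))       # ['A', 'C'] -- B is gone, but nothing disproved it
print(sorted(unfalsified))  # ['A', 'B', 'C'] -- only D is out
```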
Yes, sorry. I’m drawing on multiple sources which I recognize the rest of you haven’t read, and trying to condense them into short comments, which I’m probably not the best person to do. So the problem I’m talking about may come out a bit garbled, but I think the quote from the Morey et al. paper I quoted above describes it best.
You see how Morey et al call the position they’re criticizing “Overconfident Bayesianism”? That’s because they’re contrasting it with another way of doing Bayesianism, about which they say “we suspect that most Bayesians adhere to a similar philosophy”. They explicitly say that what they’re advocating is a variety of Bayesian confirmation theory.
The part about deduction from the Morey et al. paper:
GS describe model testing as being outside the scope of Bayesian confirmation theory, and we agree. This should not be seen as a failure of Bayesian confirmation theory, but rather as an admission that Bayesian confirmation theory cannot describe all aspects of the data analysis cycle. It would be widely agreed that the initial generation of models is outside Bayesian confirmation theory; it should then be no surprise that subsequent generation of models is also outside its scope.
Who has been claiming that Bayesian confirmation theory is a tool for generating models?
(It can kinda-sorta be used that way if you have a separate process that generates all possible models, hence the popularity of Solomonoff induction around here. But that’s computationally intractable.)
As stated in my original comment, confirmation is only half the problem to be considered. The other half is inductive inference, which is what many people mean when they refer to Bayesian inference. I’m not saying one way is clearly right and the other wrong, but that this is a difficult problem to which the standard solution may not be best.
You’d have to read the Andrew Gelman paper they’re responding to in order to see a criticism of confirmation.