As I said, the issue can be corrected for if the number of hypotheses is known, but not if the number of possibilities is unknown.
You don’t need to know the number, you need to know the model (which could have infinite hypotheses in it).
Your model (hypothesis set) could be specified by an infinite number of parameters, say “all possible means and variances of a Gaussian.” You can have a prior on this space, which is a density. You update the density with evidence to get a new density. This is Bayesian stats 101. Why not just go read about it? Bishop’s machine learning book is good.
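A minimal sketch of the "Bayesian stats 101" update described above, for the simpler case of an unknown Gaussian mean with known variance (a conjugate Normal prior, so the posterior density has a closed form). All the numbers here are illustrative choices, not from the thread:

```python
# Hedged sketch: updating a prior density over the mean of a Gaussian
# (known noise variance) with evidence, yielding a posterior density.
# prior_mu, prior_var, noise_var, and the data are illustrative.

def update_gaussian_mean(prior_mu, prior_var, noise_var, data):
    """Return (posterior mean, posterior variance) of the belief
    density over the unknown mean after observing `data`."""
    n = len(data)
    # Precision (1/variance) adds; the data pull the mean toward the sample mean.
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mu = post_var * (prior_mu / prior_var + sum(data) / noise_var)
    return post_mu, post_var

# A vague prior (variance 100) updated with three observations near 2.0:
mu, var = update_gaussian_mean(prior_mu=0.0, prior_var=100.0,
                               noise_var=1.0, data=[2.1, 1.9, 2.0])
# The posterior density concentrates near the sample mean, with reduced variance.
```

The point of the example: the "hypothesis set" is the continuum of possible means, the prior is a density over that continuum, and the update returns a new density, with no counting of hypotheses required.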
True, but working from a model is not an inductive method, so it can’t be classified as confirmation through inductive inference, which is what I’m criticizing.
You are severely confused about the basics. Please unconfuse yourself before getting to the criticism stage.
??? IlyaShpitser, if I understand correctly, is talking about creating a model of a prior, collecting evidence, and then determining whether the model is true or false. That’s hypothesis testing, which is deduction, not induction.
You don’t understand.
You have a (possibly infinite) set of hypotheses. You maintain beliefs about this set. As you get more data, your beliefs change. To maintain beliefs you need a distribution/density. To do that you need a model (a model is just a set of densities you consider). You may have a flexible model and let the data decide how flexible you want to be (non-parametric Bayes stuff, I don’t know too much about it), but there’s still a model.
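What "maintaining beliefs over a set of hypotheses" might look like in code, using a finite hypothesis set for readability (the candidate means, noise variance of 1, and the data are all illustrative assumptions, not from the thread):

```python
import math

# Hedged sketch: the model is a small set of Gaussian densities
# (candidate means, known unit variance); the belief state is a
# distribution over that set, updated by Bayes' rule as data arrives.

def gaussian_pdf(x, mu, sigma=1.0):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

candidate_means = [-1.0, 0.0, 1.0, 2.0]              # the hypothesis set (the "model")
beliefs = [1.0 / len(candidate_means)] * len(candidate_means)  # uniform prior

for x in [1.8, 2.2, 1.9]:                            # incoming evidence
    # Multiply each belief by that hypothesis's likelihood of the datum...
    beliefs = [b * gaussian_pdf(x, mu) for b, mu in zip(beliefs, candidate_means)]
    # ...and renormalize: the posterior becomes the prior for the next datum.
    total = sum(beliefs)
    beliefs = [b / total for b in beliefs]

# Belief mass shifts toward the hypothesis whose mean (2.0) best fits the data.
```

With an infinite hypothesis set the list of beliefs becomes a density and the sum becomes an integral, but the structure of the update is the same.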
Suggesting for the third and final time to get off the internet argument train and go read a book about Bayesian inference.
Oh, sorry I misunderstood your argument. That’s an interesting solution.
That interesting solution is exactly what people doing Bayesian inference do. Any criticism you may have that doesn’t apply to what Ilya describes isn’t a criticism of Bayesian inference.
As much as I hate to do it, I am going to have to agree with Lumifer, you sound confused. Go read Bishop.