That depends a lot on how narrowly or widely we interpret things. It could make a lot of sense that “updating correctly” also includes not updating into an eventual dead end, and updating in a way that can facilitate paradigm shifts.
It might be worth noting that for some, “updating” can refer to a very narrow process involving explicitly and consciously formulated numbers. But theoretical mullings over Bayes’ rule can also include appreciation for the wiggle room, i.e. 99.999% vs 99.9% differing in how much emphasis is given to totally different paradigms.
That depends a lot on how narrowly or widely we interpret things
Indeed. Where Bayes is taken loosely, as a series of maxims, you can add some advice about not flogging a dead horse.
But if Bayes means a formal, mathematical method, there is nothing in it to tell you to stop incrementally updating the same framing of a problem, and nothing to help you come up with a bold new hypothesis.
Whether there is nothing when things are interpreted formally is a very hard claim to prove. If a framing of a problem is unsatisfactory, it can be incrementally updated away from, too. If you have a problem and try to solve it with epicycles, that signals model deficiencies that weaken the reliability of the whole paradigm.
Or stated another way: every hypothesis always has a positive probability, so we never stop considering any hypothesis and are guaranteed not to “miss out” on any of them (0 never occurs as a probability and can never arise, since updating multiplies a positive prior by a positive likelihood). Only in approximations do we have a cutoff where sufficiently small probabilities are not given any thought.
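A minimal sketch of that point, using a hypothetical coin example (the hypotheses and numbers here are illustrative assumptions, not from the discussion): even after data that overwhelmingly favour one hypothesis, a rival with a positive prior keeps a strictly positive posterior, because Bayes’ rule multiplies, and never zeroes, a positive prior.

```python
def update(prior, likelihoods):
    """One Bayesian update: prior and likelihoods are dicts over hypotheses."""
    unnormalised = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

# Two hypotheses about a coin: "fair" (P(heads)=0.5) vs "biased" (P(heads)=0.9).
posterior = {"fair": 0.5, "biased": 0.5}
for _ in range(100):  # observe 100 heads in a row
    posterior = update(posterior, {"fair": 0.5, "biased": 0.9})

# "biased" now dominates, but "fair" is still strictly positive.
print(posterior["fair"], posterior["biased"])
```

Only an approximation that rounds tiny probabilities to zero would ever drop “fair” from consideration entirely.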
There might be a problem with how to design the hypothesis space to be truly universally representative. In approximations we define hypotheses to be some pretty generic class of statements which has something parameter-like, and we try to learn what the appropriate values of the parameters are. But coming up with an even more generic class might allow for hypotheses with a better structure that fit better. In the non-approximation case we don’t get “imagination starved”, because we don’t specify what pool the hypotheses are drawn from.
For a more concrete example: if you have a set of points and ask what line best fits them, you are going to get a bad result if the data is in the shape of a parabola. You could catch parabolas if you asked “what polynomial best fits these points?”. But trying to ask a question like “what is the best thought that explains these observations?” resists being made formal, because “thought” is so nebulous. No amount of line fitting will suggest a parabola, but some amount of polynomial fitting will suggest parabolas over lines.
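The line-vs-parabola point can be sketched numerically (a toy example, assuming noiseless data on y = x²): the best-fitting line in the narrow hypothesis class leaves a large residual, while widening the class to degree-2 polynomials recovers the parabola essentially exactly.

```python
import numpy as np

x = np.linspace(-3, 3, 13)
y = x ** 2  # the data actually lies on a parabola

line = np.polyfit(x, y, deg=1)    # best-fitting line (narrow hypothesis class)
parab = np.polyfit(x, y, deg=2)   # best-fitting quadratic (wider class)

def residual(coeffs):
    """Sum of squared errors of a fitted polynomial against the data."""
    return float(np.sum((np.polyval(coeffs, x) - y) ** 2))

print(residual(line))   # large: no line fits a parabola
print(residual(parab))  # ~0: the wider class contains the truth
```

No amount of refitting within the degree-1 class improves the result; only enlarging the hypothesis class does.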
Whether there is nothing when things are interpreted formally
I didn’t say there is nothing when things are interpreted formally. I said the formalism of Bayesian probability does not include a formula for generating novel hypotheses, and that is easy to prove.
If a framing of a problem is unsatisfactory, it can be incrementally updated away from, too.
Can it? That doesn’t seem to be how things work in practice. There is a set of revolutions in science, and inasmuch as they are revolutions, they are not slow incremental changes.
Or stated another way: every hypothesis always has a positive probability, so we never stop considering any hypothesis
We don’t have every hypothesis pre-existing in our heads. If you were some sort of ideal reasoner with infinite memory, you could do things that way, but you’re not. Cognitive limitations may well explain the existence of revolutionary paradigm shifts.
But trying to ask a question like “what is the best thought that explains these observations?” resists being made formal, because “thought” is so nebulous
That’s what I was saying. You can’t formalise hypothesis formation, yet it is necessary. Therefore, formal Bayes is not the one epistemology to rule them all, because all formalisations have that shortcoming.
There’s a third thing: how do you realise that incremental updates are no longer working, and you need a revolutionary shift to another paradigm?