At which point infinitely many of my 100% theories will be refuted. And infinitely many will remain. You can never win at that game using finite evidence. For any finite set of evidence, infinitely many 100% type theories predict all of it perfectly.
It seems that your objection is basically that if I toss a coin seventeen times and it ends up in a sequence of HTTTHTHHHHTHTHTTH, there is a specific theory T1 (namely, that the physical laws cause the sequence to be HTTTHTHHHHTHTHTTH) which scores higher than the clearly correct explanation T2 (i.e. the probability of each sequence is the same 2^(-17)). But this is precisely why priors depend on the Kolmogorov complexity of hypotheses: with such a prior, the posterior of T2 will be higher than the posterior of T1.
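The posterior comparison above can be sketched numerically. This is a toy illustration only: true Kolmogorov complexities are uncomputable, so the description lengths below (the overhead `c` for T1 and the length `K_T2` for T2) are assumed values chosen for the example, not real measurements.

```python
# Toy Bayesian comparison of T1 ("physics forces exactly this sequence")
# vs T2 ("each toss is an independent fair coin") under a complexity prior.

seq = "HTTTHTHHHHTHTHTTH"   # the observed 17-toss sequence from the example
n = len(seq)                # 17

# Assumed description lengths in bits (illustrative, not real complexities):
c = 20                      # assumed overhead for T1's program, which must
K_T1 = n + c                # hard-code all 17 bits of the sequence on top of it
K_T2 = 8                    # assumed short program for "fair coin, i.i.d. tosses"

# Complexity prior: P(T) proportional to 2^-K(T).
prior_T1 = 2.0 ** -K_T1
prior_T2 = 2.0 ** -K_T2

# Likelihood of the observed sequence under each theory.
lik_T1 = 1.0                # T1 predicts exactly this sequence
lik_T2 = 2.0 ** -n          # T2 assigns 2^-17 to every 17-toss sequence

# Unnormalised posteriors (Bayes: posterior is proportional to prior x likelihood).
post_T1 = prior_T1 * lik_T1
post_T2 = prior_T2 * lik_T2

print(post_T2 > post_T1)    # T2 wins despite its lower likelihood
```

The point of the sketch: T1 buys its perfect likelihood by encoding the whole sequence into its description, so the complexity prior taxes it by roughly the same number of bits that its likelihood advantage is worth, plus overhead; the short theory comes out ahead in the posterior.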
And, after all, you don’t have infinitely many theories. Theories live in brains, not in an infinite Platonic space of ideas. Why should we care that there are infinitely many ways to formulate a theory so absurd that nobody would think of it, yet still compatible with the evidence? Solomonoff induction tells you to ignore them, which agrees with common sense.
Selectively ignoring theories, even when we’re aware of them, is just bias, isn’t it?
I’m a bit surprised that someone here is saying to me “OK so mathematically, abstractly, we’re screwed, but in practice it’s not a big deal, proceed anyway”. Most people here respect math and abstract thinking, and don’t dismiss problems merely for involving substantial amounts of theory.
Of course a prior can arbitrarily tell you which theories to prefer over others. But why those? You’re getting into problems of arbitrary foundations.
Bias is a systematic error in judgement, something which yields bad results. It is incorrect to apply that label to heuristics which are working well.
I haven’t told you that we are abstractly screwed but that it’s no big deal. We are not screwed; on the contrary, Solomonoff induction is a consistent algorithm which works well in practice. It is only as arbitrary as any axioms are arbitrary. You can’t do better if you want to have any axioms at all, or any method at all: unless your epistemology is completely empty, it can be criticised as arbitrary without regard to its actual details. And after all, what ultimately matters is whether it works in practice, not some perceived lack of arbitrariness.
We’re fundamentally incapable of making statements about reality without starting on some sort of arbitrary foundation.
And I think describing it as “selectively ignoring” is doing it an injustice. We’re deductively excluding, and if some evidence were to appear that contradicted that exclusion, those theories would no longer be excluded.
I’m actually having trouble finding a situation in which a fallibilist would accept/reject a proposition and a Bayesian would do the opposite. And I don’t mean epistemological disagreements; I mean disagreements of the form “Theory Blah is not false.”
This is something Popper disputes. He says you can start in the middle, or anywhere. Why can’t that be done?
I was talking about the theories that can’t be deductively excluded because they make identical predictions for all available evidence.