It’s one thing to calculate with a simpler theory because you don’t need perfect accuracy. Newtonian mechanics is good enough for a large fraction of physics calculations, so even though it is strictly wrong I imagine most reasoners would keep it handy because it is simpler. But if you have two empirically equivalent and complete theories X and Y, and X is computationally simpler so you rely on X for calculating predictions, it seems to me that you believe X. What would saying “No, actually I believe Y, not X” even mean in this context? The statement is unconnected to anticipated experience and to any conceivable payoff structure.
Better yet, taboo “belief”. Say you are an agent with a program that allows you to calculate, based on your observations, what your observations will be in the future contingent on various actions. You have another program that ranks those futures according to a utility function. What would it mean to add “belief” to this picture?
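To make that concrete, here is a minimal sketch (Python, with made-up names for the predictor and utility function) of the picture I mean: the agent consists of a program that predicts future observations given an action and a program that ranks those futures. Nothing labeled “belief” appears anywhere in the loop.

```python
# A minimal sketch of the "taboo belief" picture: an agent that only has
# (1) a predictor from (observations, action) to expected future observations
# and (2) a utility function over those futures. All names are illustrative.

def choose_action(observations, actions, predict, utility):
    """Pick the action whose predicted future ranks highest."""
    best_action, best_score = None, float("-inf")
    for action in actions:
        predicted_future = predict(observations, action)
        score = utility(predicted_future)
        if score > best_score:
            best_action, best_score = action, score
    return best_action

# Toy stand-ins for the two programs:
if __name__ == "__main__":
    predict = lambda obs, act: obs + [act]          # trivial "world model"
    utility = lambda future: future.count("study")  # trivial preferences
    print(choose_action(["wake"], ["study", "idle"], predict, utility))
```

The question is where in this picture a further fact of “belief” would go, given that prediction and preference already fix the agent’s behavior.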
Your first paragraph looks misguided to me: does it imply we should “believe” matrix multiplication is defined by the naive algorithm for small n, and the Strassen and Coppersmith-Winograd algorithms for larger values of n? Your second paragraph, on the other hand, makes exactly the point I was trying to make in the original post: we can assign degrees of belief to equivalence classes of theories that give the same observable predictions.
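To make the analogy concrete, here is a rough sketch (Python, with an arbitrary cutoff; np.matmul stands in for an asymptotically faster routine such as Strassen) of what such size-based dispatch looks like: which routine runs is purely a question of cost, and the product is the same either way.

```python
import numpy as np

# Hypothetical size threshold; real libraries tune this empirically.
NAIVE_CUTOFF = 64

def naive_matmul(A, B):
    """Textbook O(n^3) triple-loop multiplication."""
    n, m, p = A.shape[0], A.shape[1], B.shape[1]
    C = np.zeros((n, p))
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i, j] += A[i, k] * B[k, j]
    return C

def matmul(A, B):
    """Dispatch on size: the answer is identical either way;
    only the cost of computing it differs."""
    if A.shape[0] <= NAIVE_CUTOFF:
        return naive_matmul(A, B)
    # Stand-in for an asymptotically faster method (e.g. Strassen);
    # np.matmul is used here purely as a placeholder fast path.
    return np.matmul(A, B)
```

Choosing which branch to run says nothing about what matrix multiplication “is”; likewise, choosing the cheaper of two empirically equivalent theories for calculation says nothing beyond the predictions they share.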