I don’t understand your point about bounded rationality. If you know theory X is equivalent to theory Y, you can believe in X more, but use Y for calculations.
That’s the definition of a free-floating belief, isn’t it? If you have only limited computational resources, even storing theory X in your memory is a waste of space.
I think cousin_it’s point was that if you have a preference both for quickly solving problems and for knowing the true nature of things, then when theory X tells you the true nature of things but theory Y is a hack-job approximation that nevertheless gives you the answer you need much faster (in computer terms, say, a full simulation of the actual event vs. a Monte Carlo run with the probabilities just plugged in), it can have positive utility even under bounded rationality to keep both theory X and theory Y.
Edit: the assumption is that we have at least mild preferences for both, and that the bounds on our rationality are high enough that keeping both is the preferred option for most of science.
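A minimal sketch of that trade-off (my own toy example, not from the thread, using exact enumeration as the expensive "true" calculation and a normal approximation as the cheap hack): a bounded reasoner keeps both routines and dispatches on how costly the exact one would be.

```python
import itertools
import math

def p_sum_exceeds_exact(n_dice, threshold):
    """'Theory X': exhaustively enumerate every outcome.
    Exact, but the cost grows as 6**n_dice."""
    outcomes = itertools.product(range(1, 7), repeat=n_dice)
    hits = sum(1 for roll in outcomes if sum(roll) > threshold)
    return hits / 6 ** n_dice

def p_sum_exceeds_approx(n_dice, threshold):
    """'Theory Y': normal approximation with continuity correction.
    Slightly wrong, but O(1) no matter how many dice there are."""
    mean = 3.5 * n_dice
    sd = math.sqrt(n_dice * 35 / 12)   # variance of one fair die is 35/12
    z = (threshold + 0.5 - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

def p_sum_exceeds(n_dice, threshold, exact_budget=8):
    """A bounded reasoner keeps both theories and picks by cost."""
    if n_dice <= exact_budget:
        return p_sum_exceeds_exact(n_dice, threshold)
    return p_sum_exceeds_approx(n_dice, threshold)

print(p_sum_exceeds(5, 20))    # exact: enumerating 6**5 outcomes is cheap
print(p_sum_exceeds(50, 190))  # approximate: 6**50 outcomes is out of reach
```

The point of the sketch is just that discarding the exact routine to save memory only pays if you never care about exactness; with even a mild preference for it, keeping both can be the better policy.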
It’s one thing to use a simpler theory for calculations because you don’t need perfect accuracy. Newtonian mechanics is good enough for a large fraction of physics calculations, so even though it is strictly wrong I imagine most reasoners would keep it handy because it is simpler. But if you have two empirically equivalent and complete theories X and Y, and X is computationally simpler, so you rely on X for calculating predictions, then it seems to me you believe X. What would saying “No, actually I believe in Y, not X” even mean in this context? The statement is unconnected to anticipated experience and to any conceivable payoff structure.
Better yet, taboo “belief”. Say you are an agent with a program that calculates, from your observations so far, what your observations will be in the future contingent on various actions. You have another program that ranks those predicted futures according to a utility function. What would it mean to add “belief” to this picture?
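To make the taboo concrete, here is a minimal sketch (the names predict, utility, and choose_action are mine, purely for illustration): the agent’s behavior is fully fixed by a predictor and a utility ranking, which is what makes it unclear where an extra “belief” would attach.

```python
from typing import Callable, List, Sequence

Observation = float          # stand-in type; anything observable
Action = str

def choose_action(
    history: Sequence[Observation],
    actions: List[Action],
    predict: Callable[[Sequence[Observation], Action], List[Observation]],
    utility: Callable[[List[Observation]], float],
) -> Action:
    """The whole agent: predict the future for each action, rank the
    predicted futures by utility, act on the best one. Nothing else
    about its internals affects what it does."""
    return max(actions, key=lambda a: utility(predict(history, a)))

# Toy instantiation: predict the next observation as the last one plus
# an action-dependent offset; prefer larger observations.
toy_predict = lambda hist, a: [hist[-1] + {"up": 1.0, "down": -1.0}[a]]
toy_utility = lambda future: sum(future)

print(choose_action([0.0, 0.5], ["up", "down"], toy_predict, toy_utility))  # -> "up"
```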
Your first paragraph looks misguided to me: does it imply we should “believe” matrix multiplication is defined by the naive algorithm for small n, and the Strassen and Coppersmith-Winograd algorithms for larger values of n? Your second paragraph, on the other hand, makes exactly the point I was trying to make in the original post: we can assign degrees of belief to equivalence classes of theories that give the same observable predictions.
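For what it’s worth, here is the algorithm-selection picture in code (a hypothetical dispatcher of my own; the cutoff and crossover sizes are arbitrary): the naive routine and Strassen’s recursion compute exactly the same function, so switching between them by n is a claim about cost, not about what matrix multiplication “is”.

```python
import numpy as np

def naive_matmul(A, B):
    """Schoolbook O(n^3) multiplication, straight from the definition."""
    n, m, p = A.shape[0], A.shape[1], B.shape[1]
    C = np.zeros((n, p))
    for i in range(n):
        for j in range(p):
            C[i, j] = sum(A[i, k] * B[k, j] for k in range(m))
    return C

def strassen_matmul(A, B, cutoff=32):
    """Strassen's O(n^2.81) recursion for square matrices whose size is a
    power of two; below the cutoff it hands off to the naive routine."""
    n = A.shape[0]
    if n <= cutoff:
        return naive_matmul(A, B)
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen_matmul(A11 + A22, B11 + B22, cutoff)
    M2 = strassen_matmul(A21 + A22, B11, cutoff)
    M3 = strassen_matmul(A11, B12 - B22, cutoff)
    M4 = strassen_matmul(A22, B21 - B11, cutoff)
    M5 = strassen_matmul(A11 + A12, B22, cutoff)
    M6 = strassen_matmul(A21 - A11, B11 + B12, cutoff)
    M7 = strassen_matmul(A12 - A22, B21 + B22, cutoff)
    C = np.empty((n, n))
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

def matmul(A, B, crossover=100):
    """Pick whichever algorithm is cheaper for this size; the *function*
    being computed is identical either way."""
    if A.shape[0] < crossover:
        return naive_matmul(A, B)
    return strassen_matmul(A, B)

A = np.random.rand(128, 128)
B = np.random.rand(128, 128)
assert np.allclose(matmul(A, B), A @ B)  # same answers, different cost profile
```

Assigning “belief” to the dispatcher’s branch condition would be odd; what one can sensibly believe in is the equivalence class of procedures that compute the product, which is the point about equivalence classes of empirically identical theories.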