The point is that to evaluate the utility of holding a belief, you need to have already decided upon a scheme for setting your beliefs.
That’s a little too vaguely stated for me to interpret. Can you give an illustration? For comparison, here’s one showing how I assumed it would work:
A paperclip-making AI is handed a piece of black-box machinery along with specifications for two possible control schemes for it. It calculates that if scheme A is true, it can make 700 paperclips per second, and if scheme B is true, only 300 per second. As a Bayesian AI using Pascal’s Goldpan formalized as a utilitarian prior, it assigns a prior probability of 0.7 to A and 0.3 to B. Then it either acts on the weighted sum of the two models (0.7A + 0.3B) or runs some experiments until it reaches a satisfactory posterior probability.
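To make the arithmetic concrete, here’s a minimal sketch in Python. Only the 0.7/0.3 prior and the 700/300 rates come from the illustration; the experiment likelihoods below are invented placeholders, there just to show what the update step would look like.

```python
# Minimal sketch of the decision procedure in the illustration above.
# The 0.7/0.3 prior and the 700/300 rates come from the illustration;
# the experiment likelihoods below are invented placeholders.

prior = {"A": 0.7, "B": 0.3}   # prior probability of each control scheme
rate = {"A": 700, "B": 300}    # paperclips per second if that scheme is true

# Option 1: act on the weighted sum of models (0.7A + 0.3B).
expected_rate = sum(prior[s] * rate[s] for s in prior)
print(f"Expected output under the mixture: {expected_rate} paperclips/sec")  # 580.0

# Option 2: run an experiment, then update the prior to a posterior.
# Suppose the observed result is predicted with probability 0.9 by scheme A
# and 0.2 by scheme B (made-up numbers, just to show the update).
likelihood = {"A": 0.9, "B": 0.2}
unnormalised = {s: prior[s] * likelihood[s] for s in prior}
evidence = sum(unnormalised.values())
posterior = {s: round(w / evidence, 3) for s, w in unnormalised.items()}
print(f"Posterior after one experiment: {posterior}")  # {'A': 0.913, 'B': 0.087}
```

Either branch is just arithmetic once the prior is fixed.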
That doesn’t seem intractably circular.
Occam’s razor is the basis for believing that those experiments tell us anything whatsoever about the future. Without it, there is no way to assign the probabilities you mention.
Clearly people who don’t know about Occam’s Razor, and people who explicitly reject it, still believe in the future. Just as clearly, we can use Occam’s Razor or other principles in evaluating theories about what happened in the past. Your claim appears wholly unjustified. Was it just a vague hifalutin’ metaphysical claim, or are there some underlying points that you’re not bringing out?
People who don’t know about Newtonian mechanics still believe that rocks fall downwards, but people who reject it explicitly will have a harder time reconciling their beliefs with the continued falling of rocks. It would be a mistake to reject Newtonian mechanics, say “people who reject Newtonian mechanics clearly still believe that rocks fall”, and then conclude that there is no problem in rejecting Newtonian mechanics. Similarly, if you reject Occam’s razor then you need to replace it with something that actually fills the explanatory gap; it’s not good enough to say “well, people who reject Occam’s razor clearly still believe Occam’s razor” and then just carry right on.