I feel like there’s a big difference between “similar complexity” and “the same complexity.” Like, if we take theory T and add some simple unobtrusive twist to it, we get a theory T* of similar complexity… yet realistically an Occam’s-Razor-driven search process is not going to settle on T*, because you only get T* by first finding T and then modifying it. And if I’m wrong about this, then it seems like Occam’s Razor is broken in general: in any domain there are going to be ways to turn T’s into T*’s. But Occam’s Razor is not broken in general (I feel).
Maybe this is the argument you anticipate above with “...we aren’t actually choosing randomly.” Occam’s Razor isn’t random. Again, I might agree with you that intuitively Occam’s Razor seems more useful in physics than in preference-learning. But intuitions are not arguments, and anyhow they aren’t arguments that appeared in the text of A&M’s paper.