But when I say “macroscopic decoherence is simpler than collapse” it is actually strict simplicity; you could write the two hypotheses out as computer programs and count the lines of code.
Computer programs in which language? The Kolmogorov complexity of a given string depends on the choice of description language (or programming language, or universal Turing machine) used. I'm not familiar with MML, but since it's apparently closely related to Kolmogorov complexity, I'd expect its simplicity ratings to depend similarly on parameters for which there is no obvious optimal choice.
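As a toy illustration of that language-dependence (a sketch only; real Kolmogorov complexity is uncomputable, and these two "languages" are hypothetical stand-ins): under one description language a varied string looks simpler than a repetitive one, and under another the ordering flips.

```python
# Two hypothetical description languages assigning lengths to strings.
# Which of two strings counts as "simpler" depends on which language we pick.

def literal_length(s: str) -> int:
    """Language A: describe the string by quoting it verbatim."""
    return len(s) + 2  # the string itself plus two quote characters

def rle_length(s: str) -> int:
    """Language B: run-length encoding, one (char, count) pair per run."""
    runs = 0
    prev = None
    for ch in s:
        if ch != prev:
            runs += 1
            prev = ch
    return 2 * runs + 2  # each run costs two symbols, plus delimiters

repetitive = "a" * 20
varied = "abcdefghij"

# Language A ranks `varied` (12) as simpler than `repetitive` (22);
# language B reverses the ordering: `repetitive` (4) beats `varied` (22).
print(literal_length(repetitive), literal_length(varied))  # 22 12
print(rle_length(repetitive), rle_length(varied))          # 4 22
```

The invariance theorem says the two languages' ratings differ by at most an additive constant, but for comparing any two specific hypotheses that constant can be exactly what decides the ordering.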
If one uses these metrics to judge the simplicity of hypotheses, any probability judgements based on them will inherit a strong dependence on this parameter choice. Given that, what's the best way to choose these parameters? The only two obvious ways I see are to either
1) Make an intuitive judgement, in which case the resulting complexity ratings might turn out no more reliable than intuitively judging the simplicity of each individual hypothesis directly, or
2) Figure out which of the candidate languages can be implemented most cheaply in this universe; i.e. try to build the smallest / least-energy-using computer for each reasonable-seeming language, and see which one turns out cheapest. Since resource use at runtime doesn't matter for Kolmogorov complexity, it would probably be appropriate to consider how well the designs would work if scaled up to include immense amounts of working memory, even if they're never actually built at that scale.
Neither of those is particularly elegant. I think 2) might work out, but unfortunately it is itself quite sensitive to parameter choices.