Thank you, this has been a very interesting conversation so far.
I originally started writing a much longer reply explaining my position on the interpretation of QM in full, but realized that the explanation would grow so long that it would really need to be its own post. So instead, I’ll just make a few shorter remarks. Sorry if these sound a bit snappy.
As soon as you assume that there exists an external universe, you can forget about your personal experience and just try to estimate the length of the program that runs the universe.
And if one assumes an external universe evolving according to classical laws, the Bohmian interpretation has the lowest KC. If you’re going to be baking extra assumptions into your theory, why not go all the way?
Interpretations and Kolmogorov Complexity
An interpretation is still a program, and every program has a KC (though KC is only defined up to the choice of universal machine, and is uncomputable in general). Ultimately I don’t think it matters whether we call the objects we’re studying theories or interpretations.
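To make the KC talk a bit more operational, here is a toy sketch in Python of the standard trick of upper-bounding an object’s KC by its compressed length. The interpretation “descriptions” below are placeholder strings of my own, not real formalizations; zlib is just one convenient lossless compressor, and any such compressor gives only an upper bound on the true (uncomputable) minimum.

```python
import zlib

def kc_upper_bound(description: str) -> int:
    """Crude upper bound on Kolmogorov complexity: the byte length of a
    losslessly compressed encoding. The true KC is uncomputable, but any
    compressed encoding (plus a constant-size decompressor) bounds it."""
    return len(zlib.compress(description.encode("utf-8")))

# Placeholder sketches of "the program that runs the universe" under two
# interpretations; the strings are illustrative, not actual theories.
mwi = "unitary evolution of the universal wavefunction under the Schrodinger equation"
copenhagen = mwi + " plus a collapse rule triggered by measurement"

print(kc_upper_bound(mwi), kc_upper_bound(copenhagen))
```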
Collapse postulate
Has nothing to do with how the universe operates, as I see it. If you’d like, we can cast Copenhagen into a more Many-Worlds-like framework by considering Many Imaginary Worlds (MIWI). This is an interpretation, in my opinion functionally equivalent to Copenhagen, in which the worlds of MWI are taken to represent imaginary possibilities rather than real universes. The collapse postulate then corresponds to observing that you inhabit a particular imaginary world, i.e. observing that that world is real for you at the moment. By contrast, in ordinary MWI, all worlds are real, and observation simply reduces your uncertainty about which observer (and in which world) you are.
If we accept the functional equivalence between Copenhagen and MIWI, this gives us an upper bound on the KC of Copenhagen. It is at most as complex as MWI. I would argue less.
Chess
I think we need to distinguish between “playing skill” and “positional evaluation skill”. One could say that Deep Blue is dumber than Kasparov in the sense of being worse than him at evaluating any given board position, while at the same time being a vastly better player, simply because it evaluates exponentially more positions.
If you know that a player has made the right move for the wrong reasons, that should still increase your estimate of their playing skill, but not their positional evaluation skill.
Of course, in the case of chess, the two skills will be strongly correlated, and your estimate of the player’s playing skill will still go down as you observe them making blunders in other positions. But this is not always so. In some fields, it is possible to reach a relatively high level of performance using relatively dumb heuristics.
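To illustrate the search-versus-evaluation point with a toy sketch (this is not how Deep Blue actually worked; the game and all names below are made up for illustration): in a take-1-to-3-stones game, a minimax player whose leaf evaluation knows literally nothing still plays perfectly once the search runs deep enough.

```python
# Toy game (hypothetical, for illustration): players alternately take
# 1-3 stones; whoever takes the last stone wins. The winning strategy
# is to always leave the opponent a multiple of 4.

def minimax(stones: int, depth: int, maximizing: bool) -> float:
    if stones == 0:
        # The previous mover took the last stone and won.
        return -1.0 if maximizing else 1.0
    if depth == 0:
        return 0.0  # deliberately clueless evaluation: zero positional insight
    values = [minimax(stones - take, depth - 1, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(values) if maximizing else min(values)

def best_move(stones: int, depth: int) -> int:
    # Pick the move with the best minimax value at the given search depth.
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: minimax(stones - take, depth - 1, False))

# At depth 1 every move looks identical (the clueless evaluation returns 0),
# so the tie-break picks the losing move 1. At depth 5 the search reaches
# the forced win and plays 2, leaving a multiple of 4.
print(best_move(10, 1), best_move(10, 5))  # -> 1 2
```

Here playing skill comes almost entirely from depth rather than from the evaluation function, which is exactly the dissociation between the two skills described above.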
Moving on to the case of logical arguments, playing skill corresponds to “getting the right answers” and positional evaluation skill corresponds to “using the right arguments”.
In many cases it is much easier to find the right answers than to find correct proofs for those answers. For example, most of the proofs that Euler and Newton gave for their mathematical results are, technically, wrong by today’s standards of rigor. Worse, even today’s proofs are not completely airtight, since they are usually not machine-verifiable.
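(For contrast, here is a minimal sketch of what “machine-verifiable” means, in Lean 4: the kernel checks every inference down to the axioms. Nat.add_comm is a standard library lemma.)

```lean
-- Each step is verified by Lean's kernel; there are no informal gaps.
theorem sum_comm (a b : Nat) : a + b = b + a := Nat.add_comm a b

-- Even a concrete numeric computation is checked rather than assumed:
example : 2 + 2 = 4 := rfl
```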
And yet we “know” that the results are right. How can that be, if we also know that our arguments aren’t 100% correct? There are many reasons, but one is that we can see that our current proofs could be made more rigorous. We can see that they are steelmannable. And in fact, our current proofs were often reached by effectively steelmanning Euler’s and Newton’s proofs.
If we see DeepSeek making arguments that are steelmannable, that should increase our expectation that future models will, in fact, be able to steelman those arguments.