“In Solomonoff induction, the complexity of your model is the amount of code in the computer program you have to write to simulate your model. The amount of code, not the amount of RAM it uses, or the number of cycles it takes to compute.”
What!? Are you assuming that everyone has the exact same data on the positions of the quarks of the universe stashed in a variable? The code/data divide is not useful: code can substitute for data and data for code (interpreted languages).
Let us say I am simulating the quarks and stuff for your region of space, and I would like my friend Bob to be able to make the same predictions about you (although most likely they would be postdictions, as I wouldn’t be able to make them faster than real time). I send him my program (sans quark positions), but he still can’t simulate you. He needs the quark positions; they are as much code for the simulator as the physical laws are.
Or to put it another way, quark positions are to physics simulators as the initial state of the tape is to a UTM simulator. That is code, especially as physics simulations are computationally universal.
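As a rough illustration of that point (a minimal sketch of my own, not anything from the exchange; `physical_laws` and `quark_positions` are hypothetical stand-ins), here is what counting “rules plus initial state” might look like if you approximate description length with a general-purpose compressor:

```python
import zlib

# Hypothetical stand-ins: the simulator's rule code and the initial state.
physical_laws = b"def step(state): return apply_schroedinger(state)"
quark_positions = bytes(range(256)) * 64   # pretend snapshot of the region's state

def description_length(*parts: bytes) -> int:
    """Crude proxy for description length: compressed size of everything sent."""
    return len(zlib.compress(b"".join(parts)))

# What Bob actually needs in order to reproduce the predictions is
# rules + initial state, and a Solomonoff-style measure charges for both.
print("rules only:        ", description_length(physical_laws))
print("rules + positions: ", description_length(physical_laws, quark_positions))
```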
I personally don’t put much stock in Occam’s razor.
You confuse data, which should absolutely be counted (compressed) as complexity, with required RAM, which (EY asserts) should not be.
I am well convinced that RAM requirements shouldn’t be the only thing counted, and fairly well convinced that they shouldn’t be counted on the same footing as the rules; I am not convinced they shouldn’t be counted at all. A log*(RAM) factor in the prior wouldn’t make a difference for most judgements, but might tip the scale on MWI vs. collapse. That said, I am not at all confident it does weigh in.
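To make the log*(RAM) idea concrete, here is a minimal toy sketch (my own construction, not a standard prior) of an unnormalized weight that charges the usual description-length bits plus an iterated-log penalty for memory; the point is how slowly that penalty grows:

```python
import math

def log_star(n: float) -> int:
    """Iterated logarithm: how many times log2 must be applied before n <= 1."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

def penalized_prior(code_length_bits: int, ram_bits: float) -> float:
    """Unnormalized prior: the usual 2^-K term times a mild 2^-log*(RAM) penalty."""
    return 2.0 ** -(code_length_bits + log_star(ram_bits))

# Even 2**1000 bits of working memory (say, for tracking a vast number of
# branches) adds only about five bits of penalty, versus a modest memory budget.
print(penalized_prior(code_length_bits=100, ram_bits=2.0 ** 1000))
print(penalized_prior(code_length_bits=100, ram_bits=1e6))
```

Because the penalty is that gentle, it would leave most comparisons untouched while still, in principle, weighing against hypotheses that demand astronomically more memory.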
In reality, all the computer program specifies is the simulation of a QM wave function (a complex scalar field in an infinite-dimensional Hilbert space, with space curvature or something like that), along with the minimum message describing the conditions of the Big Bang.