We can already do MWI vs Collapse without being clear on F=ma.
At this point I am not interested in human logic; I want a calculation of complexity. I want a string (an algorithm) corresponding to F=ma. Then we can build on that.
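For concreteness, a minimal sketch of what such a string could be, assuming we dodge the real-number question by taking F and m as integers in some fixed units; the function name and encoding here are purely illustrative, not from the thread:

```python
# A toy "algorithm corresponding to F=ma": a program whose encoded
# length is what Solomonoff induction would weight. The integer units
# and the helper name are illustrative assumptions.

def predict_acceleration(force: int, mass: int) -> int:
    """Given F and m in some fixed discrete units, output a = F/m,
    rounded to the nearest unit."""
    return round(force / mass)

# The "string" in question is the bit-encoding of this program under a
# reference universal machine; the 2^-length prior cares only about
# that length, not about the program's runtime.
```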
If F, m, and a are true real numbers, it's uncomputable (you can't encode it on a Turing machine) and not even considered by Solomonoff induction. So there.
My point was simply that MWI is not a theory that would pass the ‘input begins with string s’ criterion, and therefore, is not even part of the sum at all.
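For reference, the sum in question is the standard Solomonoff prefix probability (standard notation, not quoted from the thread):

$$M(s) \;=\; \sum_{p\,:\,U(p)\ \text{begins with}\ s} 2^{-|p|},$$

where U is the reference universal machine and |p| is the program's length in bits. A hypothesis whose output never begins with the observed string s contributes nothing to this sum.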
It’s pretty hilarious: the explanation is semi-okay, but the implication is that Solomonoff induction is awesome. The topic is hard to think about, so people substitute “awesome” for Solomonoff induction, and then all the imagined implications end up “not even wrong”. edit: also, someone somehow thinks that Solomonoff induction finds probabilities for theories, when it just assigns 2^-length as the probability of software code of that length. That is obviously absurd when applied to anything but brute-force-generated shortest pieces of code, because we never get codes down to their minimum lengths (that’s uncomputable), and the hypothesis-generation process can have, e.g., a blow-up factor of 10 (plausible) or even a quadratic blow-up factor, making the 2^-length prior far too strongly discriminating against actual theories in favour of programs that simply store a copy of the data inside.
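To put a number on the blow-up point: a sketch with invented bit-lengths, showing how a factor-of-10 bloat interacts with the 2^-length prior:

```python
# How a code-length blow-up interacts with the 2^-length prior.
# All bit-lengths below are invented for illustration.

ideal_theory_len = 1_000   # bits: hypothetical shortest code for the theory
bloat_factor = 10          # the "plausible" blow-up factor mentioned above
found_theory_len = ideal_theory_len * bloat_factor  # code we actually find
data_len = 5_000           # bits: a program that just stores the data verbatim

# log2 of the prior odds, bloated theory vs. data-copying program:
log2_odds = data_len - found_theory_len
print(log2_odds)  # -5000: the data-copying program is favoured by 2^5000
```

So even a modest constant-factor bloat can flip the prior in favour of memorizing the data, which is the complaint above.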
That’s a cop-out: just discretize the relevant variables so you’re dealing with integers. Use Planck units, if you feel like it. The complexity should not depend on the step size.
Well, it does depend on the step size. If you use the Turing machine model of computation, you end up with higher probability for a larger discretization constant (and zero probability for no discretization at all); if you use a model that handles reals, you won’t face this issue. I’m trying to explain that even in the ideal limit it has certain huge shortcomings. The optimality proofs do not imply it is good; they only imply that other stuff isn’t ‘everywhere as good and somewhere better’.
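A rough sketch of that step-size dependence, under the assumption that each discretized observation costs about log2(range/step) bits to write down; the numbers are illustrative:

```python
import math

# Under a Turing machine model, each discretized observation costs
# roughly log2(range / step) bits, so a coarser step means a shorter
# string and hence a larger 2^-length weight. Numbers are illustrative.

def bits_for_data(n_samples: int, value_range: float, step: float) -> float:
    return n_samples * math.log2(value_range / step)

coarse = bits_for_data(n_samples=100, value_range=1.0, step=1e-3)  # ~997 bits
fine   = bits_for_data(n_samples=100, value_range=1.0, step=1e-6)  # ~1993 bits
print(coarse, fine)

# The finer discretization costs roughly twice the bits, a ~2^-997
# prior penalty; as step -> 0 the length diverges and the prior
# weight goes to zero.
```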
The primary use of this sort of thing—highly idealized induction that is uncomputable—is not to do induction but to find limits to induction.