To be clear, the process that I’m talking about for turning a quantum state into a hypothesis is not intended to be a physical process (such as a measurement), it’s intended to be a Turing machine (that produces output suitable for use by Solomonoff induction).
Then you run into the basic problem of using SI to investigate MW: SIs are supposed to output a series of definite observations. They are inherently “single world”.
If the program running the SWE outputs information about all worlds on a single output tape, they are going to have to be concatenated or interleaved somehow. Which means that to make use of the information, you have to identify the subset of bits relating to your world. That’s extra complexity which isn’t accounted for, because it’s being done by hand, as it were.
In particular, if you just model the wave function, the only results you will get represent every possible outcome. In order to match observation, you will have to keep discarding unobserved outcomes and renormalising, as you do in every interpretation. It’s just that that extra stage is performed manually, not by the programme.
To get an output that matches one observer’s measurements, you would need to simulate collapse somehow. You could simulate collapse with a PRNG, but it won’t give you the right random numbers.
Or you would need to keep feeding your observations back in so that the simulator can perform projection and renormalisation itself. That would work, but it’s a departure from how SIs are supposed to work.
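The projection-and-renormalisation step being described can be sketched in a few lines. This is a minimal toy model, not anything from the thread: branches are hypothetical labelled tuples of outcomes, and the function simply discards branches inconsistent with the fed-back observation and rescales the survivors.

```python
import math

def project_and_renormalise(state, observed):
    """Keep only branches whose latest outcome matches the observation,
    then renormalise the surviving amplitudes.

    `state` maps branch labels (tuples of outcomes so far) to complex
    amplitudes; `observed` is the outcome actually seen.  This is the
    manual "discard unobserved outcomes and renormalise" step.
    """
    kept = {b: a for b, a in state.items() if b[-1] == observed}
    norm = math.sqrt(sum(abs(a) ** 2 for a in kept.values()))
    return {b: a / norm for b, a in kept.items()}

# Toy example: an equal superposition over two one-step branches.
state = {("up",): 1 / math.sqrt(2), ("down",): 1 / math.sqrt(2)}
state = project_and_renormalise(state, "up")
# Only the ("up",) branch survives, renormalised to unit norm.
```

The point of the sketch is that the observation enters as an *input* to the simulator, which is exactly the departure from the usual SI setup being flagged above.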
Meta: trying to mechanise epistemology doesn’t solve much, because mechanisms still have assumptions built into them.
Yeah yeah, this is the problem I’m referring to :-)
I disagree that you must simulate collapse to solve this problem, though I agree that that would be one way to do it. (The way you get the right random numbers, fwiw, is from sample complexity—SI doesn’t put all its mass on the single machine that predicts the universe, it allocates mass to all machines that have not yet erred in proportion to their simplicity, so probability mass can end up on the class of machines, each individually quite complex, that describe QM and then hardcode the branch predictions. See also the proof about how the version of SI in which each TM outputs probabilities is equivalent to the version where they don’t.)
SI doesn’t put all its mass on the single machine that predicts the universe, it allocates mass to all machines that have not yet erred in proportion to their simplicity,
If your SI can’t make predictions in the first place, that’s rather beside the point. “Not erring” only has a straightforward implementation if you are expecting the predictions to be deterministic. How could an SI compare a deterministic theory to a probabilistic one?
How could an SI compare a deterministic theory to a probabilistic one?
The deterministic theory gets log-probability proportional to −length + (0 if it was correct so far, else −∞); the probabilistic theory gets log-probability proportional to −length + log(probability it assigned to the observations so far).
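The scoring rule above puts both kinds of theory on the same log scale, which a short sketch makes explicit. The program lengths and probabilities here are made-up illustrative numbers, not anything from the source.

```python
import math

def log_score_deterministic(length_bits, correct_so_far):
    # log2 posterior weight (up to normalisation): -length if every
    # prediction so far was correct, -infinity on any error.
    return -length_bits if correct_so_far else -math.inf

def log_score_probabilistic(length_bits, log2_prob_of_data):
    # -length plus the log-probability the theory assigned to the data.
    return -length_bits + log2_prob_of_data

# A 20-bit deterministic theory that has matched all observations:
det = log_score_deterministic(20, True)                  # -20
# A 15-bit probabilistic theory assigning p = 1/2 to each of 10 bits:
prob = log_score_probabilistic(15, 10 * math.log2(0.5))  # -25
# On these numbers the deterministic theory dominates the mixture;
# a single wrong prediction would instead send its score to -infinity.
```

Because both scores live in log space, comparing a deterministic theory to a probabilistic one is just a subtraction, which is the answer to the question quoted above.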
That said, I was not suggesting a Solomonoff inductor in which some machines were outputting bits and others were outputting probabilities.
I suspect that there’s a miscommunication somewhere up the line, and my not-terribly-charitable guess is that it stems from you misunderstanding the formalism of Solomonoff induction and/or the point I was making about it. I do not expect to clarify further, alas. I’d welcome someone else hopping in if they think they see the point I was making and can transmit it.