Yeah yeah, this is the problem I’m referring to :-)
I disagree that you must simulate collapse to solve this problem, though I agree that that would be one way to do it. (The way you get the right random numbers, fwiw, is from sample complexity—SI doesn’t put all its mass on the single machine that predicts the universe, it allocates mass to all machines that have not yet erred in proportion to their simplicity, so probability mass can end up on the class of machines, each individually quite complex, that describe QM and then hardcode the branch predictions. See also the proof about how the version of SI in which each TM outputs probabilities is equivalent to the version where they don’t.)
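To make the weighting concrete, here is a minimal toy sketch in Python. It is not actual Solomonoff induction (the real thing is uncomputable); a small finite set of hypothetical predictors with assumed description lengths stands in for the space of Turing machines, and all names here are my own illustration:

```python
# Toy stand-in for Solomonoff induction (the real thing is uncomputable).
# Each "machine" is a (description_length_in_bits, predict) pair; predict
# takes the bits seen so far and returns the machine's next output bit.
# Prior mass is 2^-length, and a machine keeps its mass only while it
# has not yet erred.

def posterior_weights(machines, observations):
    weights = []
    for length_bits, predict in machines:
        w = 2.0 ** -length_bits
        for t in range(len(observations)):
            if predict(observations[:t]) != observations[t]:
                w = 0.0  # this machine has erred: it loses all its mass
                break
        weights.append(w)
    total = sum(weights)
    return [w / total for w in weights] if total > 0 else weights

# A short simple machine that errs, versus a longer machine that hardcodes
# the observed sequence: all surviving mass flows to the hardcoder.
always_zero = (5, lambda seen: 0)
hardcoder = (40, lambda seen: [1, 0, 1][len(seen)])
print(posterior_weights([always_zero, hardcoder], [1, 0, 1]))  # [0.0, 1.0]
```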
> SI doesn’t put all its mass on the single machine that predicts the universe, it allocates mass to all machines that have not yet erred in proportion to their simplicity,
If your SI can’t make predictions in the first place, that’s rather beside the point. “Not erring” only has a straightforward implementation if you are expecting the predictions to be deterministic. How could an SI compare a deterministic theory to a probabilistic one?
> How could an SI compare a deterministic theory to a probabilistic one?
In log space (base 2): the deterministic theory gets log-weight -length + (0 if it was correct so far, else -∞); the probabilistic theory gets log-weight -length + log2(probability it assigned to the observations so far). Equivalently, each theory gets weight proportional to 2^-length times the probability it assigned to the observations, where a deterministic theory assigns probability 1 while its predictions hold and 0 as soon as one fails.
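A minimal sketch of that rule in Python, stated in log space so the two cases are directly comparable (the function names are my own illustration, not any standard API):

```python
import math

def log2_weight_deterministic(length_bits, correct_so_far):
    # -length + (0 if correct so far, else -infinity)
    return -length_bits if correct_so_far else -math.inf

def log2_weight_probabilistic(length_bits, log2_prob_of_observations):
    # -length + log2(probability assigned to the observations so far)
    return -length_bits + log2_prob_of_observations

# A 100-bit deterministic theory that has been right so far ties with an
# 80-bit probabilistic theory that assigned probability 2^-20 to the data:
print(log2_weight_deterministic(100, True))   # -100
print(log2_weight_probabilistic(80, -20.0))   # -100.0
```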
That said, I was not suggesting a Solomonoff inductor in which some machines were outputting bits and others were outputting probabilities.
I suspect that there’s a miscommunication somewhere up the line, and my not-terribly-charitable guess is that it stems from you misunderstanding the formalism of Solomonoff induction and/or the point I was making about it. I do not expect to clarify further, alas. I’d welcome someone else hopping in if they think they see the point I was making & can transmit it.