That’s not how algorithmic information theory works. The output tape is not a factor in the complexity of the program. Just the length of the program.
And that’s the problem! You want the shortest programme that predicts your observations, but the output of a TM that just runs the SWE doesn’t predict your and only your observations. You have to manually perform an extra operation to extract them, and that’s extra complexity that isn’t part of the “complexity of the programme”. The argument that MWI is algorithmically simple cheats by hiding some complexity outside the programme.
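For concreteness, the standard set-up (a sketch, with $U$ a universal prefix machine): the prior weight of an observation sequence $x$ is

$$M(x) = \sum_{p \,:\, U(p)=x*} 2^{-|p|}$$

where the sum ranges over programmes $p$ whose output *begins with* $x$, and prediction is the conditional $M(xb)/M(x)$. Only the programme length $|p|$ is charged, but $p$ has to output a string that begins with your observations. A programme that merely contains them somewhere in a vast output needs extra bits to say where, and those bits only count if they live inside some programme.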
The size of the universe is not a postulate of the QFT or General Relativity.
That’s not relevant to my argument.
If you take the Many Worlds interpretation and decide to follow the perspective of a single particle as though it were special, Copenhagen is what falls out.
Operationally, something like Copenhagen, i.e. neglect of unobserved predictions and renormalisation, has to occur, because otherwise you can’t make predictions. Hence my comment about SU&C. Different adds some extra baggage about what that means—occurred in a different branch versus didn’t occur—but the operation still needs to occur.
Thinking this through some more, I think the real problem is that S.I. is defined from the perspective of an agent modeling an environment, so the assumption that Many Worlds has to put anything unobservable on the output tape is incorrect. It’s like stating that Copenhagen has to output all the probability amplitudes onto the output tape, and maybe whatever dice God rolled to produce the final answer as well. Neither of those is true.
Well, you’ve got to test that the programme is at least correct so that you can go on to find the simplest correct programme. How would you do that?
the output of a TM that just runs the SWE doesn’t predict your and only your observations. You have to manually perform an extra operation to extract them, and that’s extra complexity that isn’t part of the “complexity of the programme”.
First, can you define “SWE”? I’m not familiar with the acronym.
Second, why is that a problem? You should want a theory that requires as few assumptions as possible to explain as much as possible. The fact that it explains more than just your point of view (POV) is a good thing. It lets you make predictions. The only requirement is that it explains at least your POV.
The point is to explain the patterns you observe.
The size of the universe is not a postulate of the QFT or General Relativity.
That’s not relevant to my argument.
It most certainly is. If you try to run the Copenhagen interpretation in a Turing machine to get output that matches your POV, then it has to output the whole universe and you have to find your POV on the tape somewhere.
The problem is: That’s not how theories are tested. It’s not like people are looking for a theory that explains electromagnetism and why they’re afraid of clowns and why their uncle “Bob” visited so much when they were a teenager and why there’s a white streak in their prom photo as though a cosmic ray hit the camera when the picture was taken, etc., etc.
The observations we’re talking about are experiments where a particular phenomenon is invoked with minimal disturbance from the outside world (if you’re lucky enough to work in a field like Physics which permits such experiments). In a simple universe that just has an electron traveling toward a double-slit wall and a detector, what happens? We can observe that and we can run our model to see what it predicts. We don’t have to run the Turing machine with input of 10^80 particles for 13.8 billion years then try to sift through the output tape to find what matches our observations.
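To make that concrete, here’s a minimal sketch of such a toy model (the wavelength, slit separation, and screen distance are all made-up illustrative numbers, not a real apparatus):

```python
import numpy as np

# Toy double-slit model: one electron, two paths, one detector screen.
# All parameters are illustrative, not a real experimental setup.
wavelength = 1.0      # de Broglie wavelength (arbitrary units)
slit_sep = 5.0        # distance between the two slits
screen_dist = 100.0   # distance from the slit wall to the screen

x = np.linspace(-40, 40, 9)   # detector positions on the screen
k = 2 * np.pi / wavelength    # wavenumber

# Path lengths from each slit to each detector position.
r1 = np.hypot(screen_dist, x - slit_sep / 2)
r2 = np.hypot(screen_dist, x + slit_sep / 2)

# Superpose the two path amplitudes, then apply the Born rule.
amplitude = np.exp(1j * k * r1) + np.exp(1j * k * r2)
intensity = np.abs(amplitude) ** 2   # relative detection probability

for xi, p in zip(x, intensity):
    print(f"x = {xi:6.1f}   relative probability = {p:.3f}")
```

No 10^80 particles required: the model is a few lines, and its output is directly comparable to what the detector records.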
Same thing for the Many Worlds interpretation. It explains the results of our experiments just as well as Copenhagen; it just doesn’t posit any special phenomenon like observation. Observation is just what entanglement looks like from the perspective of one of the entangled particles (or system of particles, if you’re talking about the scientist).
Operationally, something like Copenhagen, i.e. neglect of unobserved predictions and renormalisation, has to occur, because otherwise you can’t make predictions.
First of all: Of course you can use many worlds to make predictions. You do it every time you use the math of QFT. You can make predictions about entangled particles, can’t you? The only thing is: while the math of probability is about weighted sums of hypothetical paths, in MW you take it quite literally, as paths actually being traversed. That’s what you’re trading for the magic dice machine in non-deterministic theories.
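For instance, the textbook entangled-pair prediction falls straight out of the state vector; here’s a sketch (the measurement angles are arbitrary):

```python
import numpy as np

# Singlet state of two spin-1/2 particles: (|01> - |10>) / sqrt(2).
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin_op(theta):
    """Spin measurement along an axis at angle theta in the x-z plane."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

def correlation(a, b):
    """<singlet| sigma_a (tensor) sigma_b |singlet>: the predicted
    correlation between the two detectors' outcomes."""
    op = np.kron(spin_op(a), spin_op(b))
    return np.real(singlet.conj() @ op @ singlet)

# Quantum mechanics predicts E(a, b) = -cos(a - b).
for a, b in [(0.0, 0.0), (0.0, np.pi / 4), (0.0, np.pi / 2)]:
    print(f"E({a:.2f}, {b:.2f}) = {correlation(a, b):+.3f}"
          f"  (expected {-np.cos(a - b):+.3f})")
```

Nothing in that computation mentions collapse; it’s just the weighted sum over joint outcomes.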
Secondly: Just because Many Worlds says those worlds exist doesn’t mean you have to invent some extra phenomenon to justify renormalization. At the end of the day, the unobservable universe is still unobservable. When you’re talking about predicting what you might observe when you run experiment X, it’s fine to ultimately discard the rest of the multiverse. You just don’t need to make up some story about how your perspective is special and you have some magic power to collapse wavefunctions that other particles don’t have.
Hence my comment about SU&C. Different adds some extra baggage about what that means—occurred in a different branch versus didn’t occur—but the operation still needs to occur.
Please stop introducing obscure acronyms without stating what they mean. It makes your argument less clear. More often than not it results in *more* typing because of the confusion it causes. I have no idea what this sentence means. SU&C = Single Universe and Collapse? Like objective collapse? “Different” what?
S.I. is an inept tool for measuring the relative complexity of CI and MWI because it is a bad match for both. It’s a bad match for MWI because of the linear or, if you prefer, sequential nature of the output tape, and it’s a bad match for CI because it’s deterministic and CI isn’t. You can simulate collapse with a PRNG, but it won’t give you the right random numbers. Also, CI’ers think collapse is a fundamental process, so representing it with a multi-step PRNG loads the dice. It should be just a call to one RAND instruction to represent their views fairly.
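To represent their views fairly, the collapse step should look something like this (a sketch; the single library call stands in for the one RAND instruction):

```python
import random

def collapse(amplitudes, outcomes):
    """One Born-rule draw, modelled as a single call to the random source.
    A CI'er would say the randomness is fundamental, so no PRNG,
    however elaborate, gives you the right numbers."""
    weights = [abs(a) ** 2 for a in amplitudes]
    return random.choices(outcomes, weights=weights, k=1)[0]

# Equal superposition over two detector outcomes.
amps = [2 ** -0.5, 2 ** -0.5]
print(collapse(amps, ["detector A fires", "detector B fires"]))
```

That is the point of the dice-loading complaint: charging CI for the internals of a multi-step PRNG misrepresents what CI actually claims.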
SWE=Schroedinger Wave Equation. SU&C=Shut Up and Calculate.
You should want a theory that requires as few assumptions as possible to explain as much as possible
The topic is using S.I. to quantify Occam’s Razor, and S.I. is not a measure of assumptions; it is a measure of algorithmic complexity.
The fact that it explains more than just your point of view (POV) is a good thing. It lets you make predictions.
Explaining just my POV doesn’t stop me making predictions. In fact, predicting the observations of one observer is exactly how S.I. is supposed to work. It also prevents various forms of cheating. I don’t know why you are using “explain” rather than “predict”. Deutsch favours explanation over prediction, but the very relevant point here is that how well a theory explains is an unquantifiable human judgement. Predicting observations, on the other hand, is definite and quantifiable; that’s the whole point of using S.I. as a mechanistic process to quantify Occam’s Razor.
Predicting every observer’s observations is a bad thing from the POV of proving that MWI is simple, because if you allow one observer to pick out their observations from a morass of data, then the easiest way of generating data that contains any substring is a PRNG. You basically end up proving that “everything random” is the simplest explanation. Private Messaging pointed that out, too.
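The cheat is easy to make concrete. The following trivially short enumerator provably emits every finite bit string, so “contains your observations somewhere” is nearly free, and all the real information moves into the index that locates them (a sketch; the target string is arbitrary):

```python
from itertools import count

def all_binary_strings():
    """Emit every binary string of length 1, then length 2, and so on.
    Any finite observation history appears as one of the blocks."""
    for length in count(1):
        for i in range(2 ** length):
            yield format(i, f"0{length}b")

def locate(target):
    """Return the position of the target in the concatenated output.
    Specifying that position takes roughly as many bits as the target
    itself, which is the hidden complexity."""
    stream = ""
    for block in all_binary_strings():
        stream += block
        pos = stream.find(target)
        if pos != -1:
            return pos

print(locate("1101001"))   # any 'observation history' is in there somewhere
```

If the observer is allowed to do the locating off the tape, this near-empty programme beats any physics.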
The point is to explain the patterns you observe.
How do you do that with S.I?
It most certainly is. If you try to run the Copenhagen interpretation in a Turing machine to get output that matches your POV, then it has to output the whole universe and you have to find your POV on the tape somewhere.
No. I run the TM with my experimental conditions as the starting state, and I keep deleting unobserved results, renormalising, and re-running. That’s how physics is done anyway—what I have called Shut Up and Calculate.
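As a sketch (with a made-up two-outcome system standing in for the experiment), that loop is just: evolve, record what you saw, delete the rest, renormalise, repeat:

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve(state, unitary):
    """The deterministic wave-equation step."""
    return unitary @ state

def keep_observed(state, observed):
    """Delete the unobserved components and renormalise what is left."""
    projected = np.zeros_like(state)
    projected[observed] = state[observed]
    return projected / np.linalg.norm(projected)

# Made-up experiment: a two-state system with some unitary dynamics.
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
state = np.array([1.0, 0.0])

for trial in range(3):
    state = evolve(state, U)
    probs = np.abs(state) ** 2
    outcome = rng.choice(len(state), p=probs)   # what you actually observe
    state = keep_observed(state, outcome)
    print(f"trial {trial}: observed {outcome}, predicted probs {probs.round(3)}")
```

Whether you call the deletion “collapse” or “restricting to my branch” changes nothing about the procedure.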
Same thing for the Many Worlds interpretation. It explains the results of our experiments just as well as Copenhagen; it just doesn’t posit any special phenomenon like observation. Observation is just what entanglement looks like from the perspective of one of the entangled particles (or system of particles, if you’re talking about the scientist).
If you perform the same operations with S.I. set up to emulate MW, you’ll get the same results. That’s just a way of restating the truism that all interpretations agree on results. But you need a difference in algorithmic complexity as well.
Same thing for the Many Worlds interpretation. It explains the results of our experiments just as well as Copenhagen; it just doesn’t posit any special phenomenon like observation. Observation is just what entanglement looks like from the perspective of one of the entangled particles (or system of particles, if you’re talking about the scientist).
You seem to be saying that MWI is a simpler ontological picture now. I dispute that, but it’s beside the point, because what we are discussing is using S.I. to quantify Occam’s Razor via algorithmic complexity.
First of all: Of course you can use many worlds to make predictions.
I didn’t say MW can’t make predictions at all. I am saying that operationally, prediction-making is the same under all interpretations, and that neglect of unobserved outcomes always has to occur.
You just don’t need to make up some story about how your perspective is special
The point about predicting my observations is that they are the only ones I can test. It’s operational, not metaphysical.
Incidentally, this was pointed out before:-
https://www.lesswrong.com/posts/Kyc5dFDzBg4WccrbK/an-intuitive-explanation-of-solomonoff-induction#ceq7HLYhx4YiciKWq
That’s a link to somebody complaining about how someone else presented an argument. I have no idea what point you think it makes that’s relevant to this discussion.