if you are seeking the lowest-complexity description of your input, your theory also needs to locate you within whatever stuff it generates (hence an appropriate discount for something really huge like MWI)
It seems to me that such a discount exists in all interpretations (at least those that don’t successfully predict measurement outcomes beyond predicting their QM probability distributions). In Copenhagen, locating yourself corresponds to specifying random outcomes for all collapse events. In hidden variables theories, locating yourself corresponds to picking arbitrary boundary conditions for the hidden variables. Since MWI doesn’t need to specify the mechanism for the collapse or hidden variables, it’s still strictly simpler.
Well, the goal is to predict your personal observations. In MWI you have a huge wavefunction from which you need to somehow select the subjective you; the predictor will need code for this, whether you call it a mechanism or not. Furthermore, you need to actually derive the Born probabilities from some first principles if you want to make a case for MWI. Deriving those is what would be interesting, and what could actually make the description more compact (if the stuff you are adding as extra 'first principles' is smaller than collapse). Also, by the way, CI doesn't have any actual mechanism for collapse either; it's strictly a very un-physical trick.
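Very roughly, and only as a sketch of the bookkeeping the two comments above are arguing over (the symbols are mine, not anyone's actual estimate), the comparison is between description lengths of the form

$$K_{\mathrm{MWI}}(\text{your data}) \approx |U| + |\psi_0| + |\text{select your branch and observer (Born-weighted)}|$$
$$K_{\mathrm{collapse}}(\text{your data}) \approx |U| + |\psi_0| + |\text{collapse rule}| + |\text{random outcome of each collapse}|$$

where $U$ is the unitary dynamics and $\psi_0$ the initial conditions. The parent says the last term is comparable in every interpretation; the reply says MWI still owes the branch/observer-selection term, plus a derivation of the Born weights, before the comparison can be called a win.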
Much more interestingly, Solomonoff probability hints that one should really try to search for something that would predict beyond probability distributions, i.e. search for objective collapse of some kind. Another issue: QM actually has a problem at macroscopic scale. It doesn't add up to general relativity (without nasty hacks), so we are, as a matter of fact, missing something, and this whole issue is really a silly argument over nothing, since what we have is just a calculation rule that happens to work but that we know is wrong somewhere anyway. I think that's the majority opinion on the issue. Postulating a zillion worlds based on a known-broken model would be a tad silly. I think most physicists believe neither in collapse as in CI (beyond believing it's a trick that works) nor in many worlds, because forming either belief would be wrong.
Much more interestingly, Solomonoff probability hints that one should really try to search for something that would predict beyond probability distributions, i.e. search for objective collapse of some kind.
We face logical uncertainty here. We do not know if there is a theory of objective collapse that describes our current universe more compactly than MWI or random collapse does. I am inclined to believe that the answer is “no”. This issue seems very subtle, and differences on it do not seem clear enough to damn an entire organization.
because forming either belief would be wrong.
this is not really a Bayesian standard of evidence. Do you also believe that, in a Bayesian sense, it is wrong to believe those theories?
Bayesian sense as in Bayesian probability, or Bayesian sense as in the local dianetics-style stuff?
In the Bayesian sense you have to stay on your priors and not update them, because none of the ‘evidence’ actually links to either interpretation (humans have a general meta-facility for saying ‘I don’t know’ when it’s pure prior). In the local dianetics-like trope, you start updating any time anyone claims their argument favours one side or the other, whenever you come up with a vague and likely (extremely likely) incorrect handwave ‘argument’, or whenever you make other nearly-guaranteed-to-be-faulty updates, the kind you get when you consider only two interpretations instead of all of them and end up treating evidence that should update something else as updating MWI. Yes, I think it is wrong to do faulty updates.
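As a one-line sketch of the 'stay on the priors' point (ordinary Bayes, nothing local):

$$\frac{P(\mathrm{MWI}\mid E)}{P(\mathrm{CI}\mid E)} \;=\; \frac{P(E\mid \mathrm{MWI})}{P(E\mid \mathrm{CI})}\cdot\frac{P(\mathrm{MWI})}{P(\mathrm{CI})} \;=\; \frac{P(\mathrm{MWI})}{P(\mathrm{CI})} \quad\text{whenever } P(E\mid\mathrm{MWI}) = P(E\mid\mathrm{CI}).$$

Since every interpretation under discussion predicts the same measurement statistics, any evidence E consisting of those statistics has a likelihood ratio of 1 and leaves the odds exactly where the prior put them.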
I used MWI as an example of the local style of arguing that tends to aggravate experts. Maybe it shouldn’t damn an entire organization in your view, because MWI may be correct, but in the view of an AI researcher who is presented with a similarly faulty argument regarding AI, yes, the use of faulty argumentation is sufficient to deem SI cranks/pseudo-scientists, regardless of the truth value of the thing being argued about and regardless of their opinion on AI risk. A believer in AI danger would still deem SI to be cranks if SI argues this way.
There are other glaring errors as well: http://www.ex-parrot.com/~pete/quantum-wrong.html

edit: Actually, you should re-read the MWI arguments in question. This is a good example: http://lesswrong.com/lw/qa/the_dilemma_science_or_bayes/ From that text one would deduce that EY’s knowledge of Bayes, Solomonoff induction, Kolmogorov complexity, quantum mechanics, and the scientific method was much, much lower than he believed it to be. SI does the exact same thing when it makes and presents bad AI-danger arguments. As an extreme example: suppose someone told you that they believe in AI risk because 3+7+12=23. There is no logical connection from that formula to AI risk, and there is an arithmetical mistake in the formula (3+7+12 is 22, not 23). That sort of ‘argument’ is easy to make when you build your beliefs out of handwaves in topics that you poorly understand.
I don’t really know Solomonoff induction or MWI on a formal level, but… If I know that the universe seems to obey rule X everywhere, and I know what my local environment is like and how applying rule X to that local environment would affect it, isn’t that enough? Why would I need to include in my model a copy of the entire wavefunction that made up the universe, if having a model of my local environment is enough to predict how my local environment behaves? In other words, I don’t need to spend a lot of effort selecting the subjective me, because my model is small enough to mostly only include the subjective me in the first place.
(I acknowledge that I don’t know these topics well, and might just be talking nonsense.)
I don’t really know Solomonoff induction or MWI on a formal level
You know more about it than most of the people talking about it: you know you don’t know it. They don’t. That is the chief difference. (I also don’t know it all that well, but at least I can look at an argument that it favours something and check whether it favours the iterator over all possible worlds even more.)
If I know that the universe seems to obey rule X everywhere, and I know what my local environment is like and how applying rule X to that local environment would affect it, isn’t that enough?
Formally, there’s no distinction between rules you know and the environment. You are to construct shortest self containing piece of code that will be predicting the experiment. You will have to include any local environment data as well.
If you follow this approach to its logical end, you get the Copenhagen Interpretation in its shut-up-and-calculate form: you don’t need to predict all the outcomes that you’ll never see. So you are on the right track.
it doesn’t take any extra code to predict all the outcomes that you’ll never see. Just extra space/time. But those are not the minimized quantity. In fact, predicting all the outcomes that you’ll never see is exactly the sort of wasteful space/time usage that programmers engage in when they want to minimize code length—it’s hard to write code telling your processor to abandon certain threads of computation when they are no longer relevant.
You missed the point: you need code for picking the outcome that you do see out of the outcomes that you didn’t see, if you calculated those. It does take extra code to output the outcome you did see if you actually calculated the extra outcomes you didn’t see, and then it’s hard to tell which approach requires less code; neither piece of code is a subset of the other, and the difference likely depends on the encoding of programs.
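To make the disagreement concrete, here is a toy sketch (the function names and the coin-flip 'physics' are mine, purely illustrative) of the two program shapes being compared:

```python
import random

# Toy stand-in for a quantum measurement with outcomes and Born weights.
OUTCOMES = ("up", "down")
WEIGHTS = (0.5, 0.5)

def collapse_style_predictor(n_measurements, rng):
    """Sample one outcome per measurement as you go (one history, no branch
    bookkeeping): the 'random outcomes' live in the rng stream."""
    return [rng.choices(OUTCOMES, WEIGHTS)[0] for _ in range(n_measurements)]

def branching_style_predictor(n_measurements, rng):
    """Compute every branch, then select the observed history at the end.
    Enumerating branches costs only space/time, but the final selection step
    (here a Born-weighted choice over histories) is extra code that the
    collapse-style program does not contain, and vice versa."""
    histories = [((), 1.0)]
    for _ in range(n_measurements):
        histories = [
            (hist + (o,), p * w)
            for hist, p in histories
            for o, w in zip(OUTCOMES, WEIGHTS)
        ]  # keep all branches; nothing is discarded
    chosen = rng.choices(
        [h for h, _ in histories], weights=[p for _, p in histories]
    )[0]
    return list(chosen)

print(collapse_style_predictor(3, random.Random(0)))
print(branching_style_predictor(3, random.Random(0)))
```

Neither function's body is a subset of the other's; whether the branch-and-select version comes out shorter depends on how programs are encoded, which is the point being made above.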
The problem of locating “the subjective you” seems to me to have two parts: first, to locate a world, and second, to locate an observer in that world. For the first part, see the grandparent; the second part seems to me to be the same across interpretations.
The point is, the code of a theory has to produce output matching your personal subjective input. The objective view doesn’t suffice (and if you drop that requirement, you are back to square one, because you can just iterate over all physical theories). CI has that as part of the theory; MWI doesn’t, so you need extra code.
The complexity argument for MWI that was presented doesn’t favour MWI; it favours iteration over all possible physical theories, because that key requirement was omitted.
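A sketch of why that requirement matters (the enumeration below is mine and purely illustrative): a program that merely generates 'everything' is essentially constant-sized, so if you do not demand that the program single out the output matching your own observations, the cheap generate-everything program beats any specific physical theory, MWI included.

```python
from itertools import count, islice, product

def all_programs():
    """Enumerate every bit-string; read each as a candidate physical theory
    for some fixed machine. This generator stays a few lines long no matter
    how complicated the theories it emits are, which is why 'it generates a
    structure containing my observations somewhere' is too weak a criterion."""
    for length in count(1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

print(list(islice(all_programs(), 6)))  # ['0', '1', '00', '01', '10', '11']
```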
And my original point is not that MWI is false, or that it has higher or equal complexity. My point is that the argument is flawed. I don’t care whether MWI is false or true; I am using the argument for MWI as an example of the sloppiness SI should try not to have (hopefully, without this kind of sloppiness, they will also be far less sure that AIs are so dangerous).