Much more interestingly, Solomonoff probability hints that one should really try to search for something that predicts beyond mere probability distributions, i.e. search for objective collapse of some kind.
We face logical uncertainty here. We do not know if there is a theory of objective collapse that more compactly describes our current universe than MWI or random collapse does. I am inclined to believe that the answer is "no". This issue seems very subtle, and differences on it do not seem clear enough to damn an entire organization.
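To make "more compactly describes" concrete under a Solomonoff-style prior (my own gloss, with ℓ(T) standing for the length of the shortest program that reproduces our observations under theory T):

$$P(T) \propto 2^{-\ell(T)}, \qquad \frac{P(\text{collapse})}{P(\text{MWI})} = 2^{\,\ell(\text{MWI})-\ell(\text{collapse})},$$

so an objective-collapse theory only gains prior weight to the extent that its shortest implementation is genuinely shorter than one implementing MWI plus Born-rule randomness.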
Because forming either belief would be wrong.
This is not really a Bayesian standard of evidence. Do you also believe that, in a Bayesian sense, it is wrong to believe those theories?
Bayesian sense as in Bayesian probability, or Bayesian sense as in local dianetics-style stuff?
In the Bayesian sense, you have to stay on your priors and not update them, because none of the 'evidence' actually links to either interpretation (humans have a general meta-faculty for saying 'I don't know' when all they have is the prior). In the local dianetics-like trope, you start updating any time anyone claims their argument favours one side, whenever you come up with a vague and (extremely) likely incorrect handwaved 'argument', or you make other nearly-guaranteed-to-be-faulty updates, which is what happens when you consider only two of the possible interpretations and end up crediting to MWI evidence that should update something else entirely. Yes, I think it is wrong to do faulty updates.
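A minimal sketch of that point (my own illustration, not from the thread; the numbers are made up): if an observation is equally likely under both interpretations, Bayes' rule leaves the posterior exactly at the prior, so any shift in credence is a faulty update.

    def posterior(prior_a, likelihood_a, likelihood_b):
        """P(A|E) for hypotheses A and B = not-A, given P(A), P(E|A), P(E|B)."""
        prior_b = 1.0 - prior_a
        joint_a = prior_a * likelihood_a
        return joint_a / (joint_a + prior_b * likelihood_b)

    # Every experiment so far is predicted equally well by MWI and by collapse,
    # so the likelihood ratio is 1 and the prior does not move:
    print(posterior(0.5, 0.9, 0.9))  # 0.5

    # Treating a handwaved argument as if it discriminated between the two
    # shifts the posterior with nothing to back it -- a faulty update:
    print(posterior(0.5, 0.9, 0.3))  # 0.75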
I used MWI as an example of the local style of arguing that tends to aggravate experts. Maybe it shouldn't damn an entire organization in your view, because MWI may well be correct, but in the view of an AI researcher presented with a similarly faulty argument about AI, yes, the use of faulty argumentation is sufficient to deem SI cranks/pseudo-scientists, regardless of the truth value of the thing being argued about and regardless of one's opinion on AI risk. A believer in AI danger would still deem SI cranks if SI argues this way.
edit: actually, you should re-read the MWI arguments in question. This is a good example: http://lesswrong.com/lw/qa/the_dilemma_science_or_bayes/ . From that text one would deduce that EY's knowledge of Bayes, Solomonoff induction, Kolmogorov complexity, quantum mechanics, and the scientific method was much, much lower than he believed it to be. SI does the exact same thing when it makes and presents bad AI-danger arguments. As an extreme example: suppose you said that you believe in AI risk because 3+7+12=23. There is no logical connection from that formula to AI risk, and there is an arithmetical mistake in the formula. That sort of 'argument' is easy to make when you build your beliefs out of handwaves on topics you poorly understand.
There are other glaring errors as well: http://www.ex-parrot.com/~pete/quantum-wrong.html