Bayesian sense as in Bayesian probability, or 'Bayesian' as in the local dianetics-style stuff?
In the Bayesian-probability sense you have to stay on the priors and not update them, because none of the 'evidence' actually links to either interpretation (humans have a general meta-faculty for saying 'I don't know' when all they have is a prior). In the local dianetics-like trope, you start updating any time anyone claims that their argument favours one side, or whenever you come up with a vague and very likely incorrect handwave of an 'argument', or you make other nearly-guaranteed-to-be-faulty updates, which happen when you consider only two interpretations instead of all of them and end up counting toward MWI evidence that should update something else. Yes, I think it is wrong to make faulty updates.
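A minimal sketch of the first point, assuming a toy case where the purported 'evidence' E is exactly as likely under MWI as under collapse (the numbers and hypothesis labels below are purely illustrative, not taken from any actual argument): Bayes' rule then hands back the prior unchanged, and an update appears only if you pretend the handwave favours one side.

    # Toy illustration: if the 'evidence' is equally likely under both
    # hypotheses, Bayes' rule leaves the prior untouched (no update).
    # All numbers are made up for illustration.

    def posterior(prior_h1, likelihood_e_given_h1, likelihood_e_given_h2):
        """Posterior P(H1|E) for two exhaustive hypotheses H1 and H2."""
        prior_h2 = 1.0 - prior_h1
        p_evidence = likelihood_e_given_h1 * prior_h1 + likelihood_e_given_h2 * prior_h2
        return likelihood_e_given_h1 * prior_h1 / p_evidence

    # An argument that does not discriminate between the interpretations:
    # P(E|MWI) == P(E|collapse), so the posterior equals the prior.
    print(posterior(0.5, 0.3, 0.3))   # 0.5 -- stays on the prior

    # Treating a handwave as if it favoured one side forces an update
    # that nothing in the evidence justifies.
    print(posterior(0.5, 0.6, 0.3))   # ~0.667 -- the faulty update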
I used MWI as an example of the local style of arguing that tends to aggravate experts. Maybe it shouldn't damn the entire organization in your view, because MWI may be correct, but in the view of an AI researcher who is presented with a similarly faulty argument regarding AI, yes, the use of faulty argumentation is sufficient to deem SI cranks/pseudo-scientists, regardless of the truth value of the thing being argued about and regardless of one's opinion on AI risk. A believer in AI danger would still deem SI cranks if SI argues this way.
There are other glaring errors as well: http://www.ex-parrot.com/~pete/quantum-wrong.html
edit: actually, you should re-read the MWI arguments in question. This is a good example: http://lesswrong.com/lw/qa/the_dilemma_science_or_bayes/ . From that text one would deduce that EY's knowledge of Bayes, Solomonoff induction, Kolmogorov complexity, quantum mechanics, and the scientific method was much, much lower than he believed it to be. SI does exactly the same thing when it makes and presents bad AI-danger arguments. As an extreme example: suppose you said that you believe in AI risk because 3+7+12=23. There is no logical connection from that formula to AI risk, and the formula itself contains an arithmetical mistake (3+7+12 is 22). That sort of 'argument' is easy to make when you build your beliefs out of handwaves on topics you poorly understand.