It occurred to me that on this forum QM/MWI discussions are a mind-killer, for the same reasons as religion and politics are:
Not particularly. To the extent that it is a mind-killer, it is one in the same way that discussions of FAI, SIAI capabilities, cryonics, Bayesianism, or theories like this are. Whenever a keyword suitably similar to one of these subjects appears, one of the same group of people can be expected to leap in and launch an attack on lesswrong, its members, Eliezer, SingInst, or all of the above; they may even try to include something on the subject matter as well.
The thing is, most people here aren't particularly interested in talking about those subjects; at least, they aren't interested in rehashing the same old tired arguments and posturing yet again, and have moved on to more interesting topics. The result is the same abysmal quality of discussion, along with belligerent and antisocial interactions, every time one of these topics comes up.
Any FAI discussion is mind-killing unless it is explicitly conditioned on "assuming FOOM is logically possible". After all, we don't have enough evidence to bridge the difference in priors, and neither side (AI is a risk / AI is not a risk) explicitly acknowledges that fact, which is precisely what makes them sides rather than partners.