You seem to mostly disagree in spirit with all Grognor’s points but the last, though on that point you didn’t share your impression of the H&B literature.
I’ll chime in and say that at some point about two years ago I would have more or less agreed with all six points. These days I disagree in spirit with all six points and with the approach to rationality that they represent. I’ve learned a lot in the meantime, and various people, including Anna Salamon, have said that I seem like I’ve gained fifteen or twenty IQ points. I’ve read all of Eliezer’s posts maybe three times over and I’ve read many of the cited papers and a few books, so my disagreement likely doesn’t stem from not having sufficiently appreciated Eliezer’s sundry cases. Many times when I studied the issues myself and looked at a broader set of opinions in the literature, or looked for justifications of the unstated assumptions I found, I came away feeling stupid for having been confident of Eliezer’s position: often Eliezer had very much overstated the case for his positions, and very much ignored or fought straw men of alternative positions.
His arguments and their distorted echoes lead one to think that various people or conclusions are obviously wrong and thus worth ignoring: that philosophers mostly just try to be clever and that their conclusions are worth taking seriously more-or-less only insofar as they mirror or glorify science; that supernaturalism, p-zombie-ism, theism, and other philosophical positions are clearly wrong, absurd, or incoherent; that quantum physicists who don’t accept MWI just don’t understand Occam’s razor or are making some similarly simple error; that normal people are clearly biased in all sorts of ways, and that this has been convincingly demonstrated such that you can easily explain away any popular belief if necessary; that religion is bad because it’s one of the biggest impediments to a bright, Enlightened future; and so on. It seems to me that many LW folk end up thinking they’re right about contentious issues where many people disagree with them, even when they haven’t looked at their opponents’ best arguments, and even when they don’t have a coherent understanding of their opponents’ position or their own position. Sometimes they don’t even seem to realize that there are important people who disagree with them, as in the case of heuristics and biases. Such unjustified confidence and self-reinforcing ignorance are a glaring, serious, fundamental, and dangerous problem with any epistemology that wishes to lay claim to rationality.
Does anybody actually dispute that?
For what it’s worth, I don’t hold that position, and it seems much more prevalent in atheist forums than on LessWrong.
Is it less prevalent here or is it simply less vocal because people here aren’t spending their time on that particularly tribal demonstration? After all, when you’ve got Bayesianism, AI risk, and cognitive biases, you have a lot more effective methods of signaling allegiance to this narrow crowd.
Well we have openly religious members of our ‘tribe’.
Clear minority, and most comments defending such views are voted down. With the exception of Will, no one in that category is what would probably be classified as high status here, and even Will’s status is… complicated.
Also I’m not religious in the seemingly relevant sense.
Well this post is currently at +6.
Depends on what connotations are implied. There are certainly people who dispute, e.g., the (practical relevance of the) H&B results on confirmation bias, overconfidence, and so on that LessWrong often brings up in support of the “the world is mad” narrative. There are also people like Chesterton who placed much faith in the common sense of the average man. But anyway, I think the rest of the sentence needs to be included to give that fragment proper context.
Granted.
Could you point towards some good, coherent arguments for supernatural phenomena or the like?
Analyzing the sun miracle at Fatima seems to be a good starting point. This post has been linked from LessWrong before. It’s not an argument for the supernatural, but a nexus for arguments: it shows what needs to be explained, by whatever means. Also worth keeping in mind is the “capricious psi” hypothesis, reasonably well-explicated by J. E. Kennedy in a few papers and essays. Kennedy’s experience is mostly in parapsychology. He has many indicators in favor of his credibility: he has a good understanding of the relevant statistics, he exposed some fraud going on in a lab where he was working, he doesn’t try to hide that psi, if it exists, would seem to have weird and seemingly unlikely properties, et cetera.
But I don’t know of any arguments that really go meta and take into account how the game theory and psychology of credibility might be expected to affect the debate, e.g., emotional reactions to people who look like they’re trying to play psi-of-the-gaps, both sides’ frustration with incommunicable evidence or even the concept of incommunicable evidence, and things like that.
Hm. This… doesn’t seem particularly convincing. So it sounds like whatever convinced you is incommunicable—something that you know would be unconvincing to anyone else, but which is still enough to convince you despite knowing the alternate conclusions others would come to if informed of it?
Agreed. The actually-written-up-somewhere arguments that I know of can at most move supernaturalism from “only crazy or overly impressionable people would treat it as a live hypothesis” to “otherwise reasonable people who don’t obviously appear to have a bottom line could defensibly treat it as a Jamesian live hypothesis”. There are arguments that could easily be made that would fix specific failure modes, e.g. some LW folk (including I think Eliezer and lukeprog) mistakenly believe that algorithmic probability theory implies a low prior for supernaturalism, and Randi-style skeptics seem to like fully general explanations/counterarguments too much. But once those basic hurdles are overcome, there still seems to be a wide spread of defensible probabilities for supernaturalism based solely on communicable evidence.
Essentially, yes.
Is the point here that supernatural entities that would be too complex to specify into the universe from scratch may have been produced through some indirect process logically prior to the physics we know, sort of like humans were produced by evolution? Or is it something different?
Alien superintelligences are less speculative and emerge naturally from a simple universe program. More fundamentally, the notion of simplicity that Eliezer and Luke are using is based entirely on their assessments of which kinds of hypotheses have historically been more or less fruitful. Coming up with a notion of “simplicity” after the fact based on past observations is coding theory and has nothing to do with the universal prior, which mortals simply don’t have access to. Arguments should be about evidence, not “priors”.
...
It isn’t technically a universal prior, but it counts as evidence because it’s historically fruitful. That leaves you with a nitpick rather than showing “LW folk (including I think Eliezer and lukeprog) mistakenly believe that algorithmic probability theory implies a low prior for supernaturalism.”
I don’t think it’s nitpicking as such to point out that the probability of supernaturalism is unrelated to algorithmic probability. Bringing in Kolmogorov complexity is needlessly confusing, and even Bayesian probability isn’t necessary because all we’re really concerned with is the likelihood ratio. The error I want to discourage is bringing in confusing uncomputable mathematics for no reason and then asserting that said mathematics somehow justify a position one holds for what are actually entirely unrelated reasons. Such errors harm group epistemology.
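To make that concrete, here’s a minimal toy sketch; the hypothesis names, code lengths, and likelihood ratio are all invented assumptions, nothing from the literature:

```python
from fractions import Fraction

# Toy "simplicity prior": weight each hypothesis by 2^(-description length).
# The description lengths come from a code chosen after seeing past data,
# which is ordinary coding theory, not the uncomputable universal prior.
code_lengths_bits = {"mundane-explanation": 10, "capricious-psi": 25}  # assumed
prior = {h: Fraction(1, 2**bits) for h, bits in code_lengths_bits.items()}

prior_odds = prior["capricious-psi"] / prior["mundane-explanation"]  # 2^-15

# What the argument should actually turn on is the likelihood ratio of the
# evidence, e.g. P(Fatima reports | psi) / P(Fatima reports | no psi);
# the value below is an arbitrary stand-in.
likelihood_ratio = Fraction(1000, 1)

posterior_odds = prior_odds * likelihood_ratio
print(float(posterior_odds))  # the evidence term, not the "prior", does the work
```

Nothing in the prior column is handed down by algorithmic probability theory; the code lengths are choices made after looking at past data, which is exactly the point.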
I don’t see how you’ve done that. If KC isn’t technically a universal prior (in the way “objective” isn’t technically objective but inter-subjective), you can still use KC as evidence for a class of propositions (and probably the only meaningful class of propositions). For that class of propositions you have automatic evidence for or against them (in the form of KC), so it’s basically a ready-made prior, because it passes from the posterior to the prior immediately anyway.
So (a) you think LWers’ reasons for not believing in supernaturalism have nothing to do with KC, and (b) you think supernaturalism exists outside the class of propositions KC can count as evidence for or against?
I don’t care about (a), but if (b) is your position, I wonder: why?
That’s a shame. Any chance you might have suggestions on how to go about obtaining such evidence for oneself? Possibly via PM if you’d be more comfortable with that.
I have advice. First off, if psi’s real then I think it’s clearly an intelligent agent-like or agent-caused process. In general you’d be stupid to mess around with agents with unknown preferences. That’s why witchcraft was considered serious business: messing with demons is very much like building mini uFAIs. Just say no. So I don’t recommend messing around with psi, especially if you haven’t seriously considered what the implications of the existence of agent-like psi would be. This is why I like the Catholics: they take things seriously; it’s not fun and games. “Thou shalt not tempt the Lord thy God.” If you do experiment, pre-commit not to tell anyone about at least some predetermined subset of the results. Various parapsychology experiments indicate that psi effects can be retrocausal, so experimental results can be determined by whether or not you would in the future talk about them. If psi’s capricious, then pre-committing not to blab increases the likelihood of significant effects.
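For what it’s worth, a minimal sketch of what that pre-commitment step could look like in practice; the trial count and the size of the private subset are arbitrary assumptions:

```python
import random

# Before running any trials, fix and record which trial indices will never
# be reported, per the pre-commitment advice above.
rng = random.Random(42)                # fixed seed, so the commitment is auditable
trial_ids = list(range(40))            # hypothetical trial labels
rng.shuffle(trial_ids)
never_report = sorted(trial_ids[:20])  # the half you pre-commit never to discuss
print("Committed-private trials:", never_report)
```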
I just thought of something. What you’re saying is that psi effects are anti-inductive.
The capricious-psi literature actually includes several proposed mechanisms which could lead to “anti-inductive” psi. Some of these mechanisms are amenable to mitigation strategies (such as not trying to use psi effects for material advantage, and keeping one’s experiments confidential); others are not.
Indeed.
Ok, I feel like we should now attempt to work out a theory of psi caused by some kind of market-like game theory among entities.
Thanks for the advice! Though I suppose I won’t tell you if it turns out to have been helpful?
As lukeprog says here.
I don’t entirely agree with Will here. My issue is that there seem to be some events, e.g., Fatima, where the best “scientific explanation” is little better than the supernatural wearing a lab-coat.
Are there any good supernatural explanations for that one?! Because “Catholicism” seems like a pretty terrible explanation here.
Why? Do you have a better one? (Note: I agree “Catholicism” isn’t a particularly good explanation, it’s just that it’s not noticeably worse than any other.)
I mentioned Catholicism only because it seems like the “obvious” supernatural answer, given that it’s supposed to be a Marian apparition. Though I do think of Catholicism proper as pretty incoherent, so it’d rank fairly low on my supernatural explanation list, and well below the “scientific explanation” of “maybe some sort of weird mundane light effect, plus human psychology, plus a hundred years”. I haven’t really investigated the phenomenon myself, but I think, say, “the ghost-emperor played a trick” or “mass hypnosis to cover up UFO experiments by the lizard people” rank fairly well compared to Catholicism.
This isn’t really an explanation so much as clothing our ignorance in a lab coat.
It does a little more than that. It points to a specific class of hypotheses where we have evidence that in similar contexts such mechanisms can have an impact. The real problem here is that without any ability to replicate the event, we’re not going to be able to get substantially farther than that.
Yeah, it’s not really an explanation so much as an expression of where we’d look if we could. Presumably the way to figure it out is to either induce repeat performances (difficult to get funding and review board approval, though) or to study those mechanisms further. I suspect that’d be more likely to help than reading about ghost-emperors, at least.
Quite. Seems to me that if we’re going to hold science to that standard, we should be equally or more critical of ignorance in a cassock; we should view religion as a competing hypothesis that needs to be pointed to specifically, not as a reassuring fallback whenever conventional investigation fails for whatever reason. That’s a pretty common flaw of theological explanations, actually.
Disagree in spirit? What exactly does that mean?
(I happen to mostly agree with your comment while mostly agreeing with Grognor’s points—hence my confusion in what you mean, exactly.)
Hard to explain. I’ll briefly go over my agreement/disagreement status on each point.

MWI: Mixed opinion. MWI is a decent bet, but then again that’s a pretty standard opinion among quantum physicists. Eliezer’s insistence that MWI is obviously correct is not justified given his arguments: he doesn’t address the most credible alternatives to MWI, and doesn’t seem to be cognizant of much of the relevant work. I think I disagree in spirit here even though I sort of agree at face value.

Cryonics: Disagree; nothing about cryonics is “obvious”.

Meh science, Yay Bayes!: Mostly disagree; too vague, and little supporting evidence for the face-value interpretation. I agree that Bayes is cool.

Utilitarianism: Disagree; utilitarianism is retarded. Consequentialism is fine, but often very naively applied in practice, e.g. utilitarianism.

Eliezer’s metaethics: Disagree, especially considering Eliezer has said he thinks he’s solved meta-ethics, which is outright crazy, though hopefully he was exaggerating.

“‘People are crazy, the world is mad’ is sufficient for explaining most human failure, even to curious people, so long as they know the heuristics and biases literature”: Mostly disagree; LW is much too confident in the heuristics and biases literature, and it’s not nearly a sufficient explanation for lots of things that are commonly alleged to be irrational.
When making claims like this, you need to do something to distinguish yourself from most people who make such claims, who tend to harbor basic misunderstandings, such as an assumption that preference utilitarianism is the only utilitarianism.
Utilitarianism has a number of different features, and a helpful comment would spell out which of the features, specifically, is retarded. Is it retarded to attach value to people’s welfare? Is it retarded to quantify people’s welfare? Is it retarded to add people’s welfare linearly once quantified? Is it retarded to assume that the value of structures containing more than one person depends on no features other than the welfare of those persons? And so on.
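To make the third question concrete, here’s a tiny sketch contrasting linear summation with one non-linear alternative; the per-person welfare numbers are invented:

```python
from math import sqrt

welfare = [4.0, 9.0, 1.0]  # hypothetical, already-quantified welfare levels

linear_sum   = sum(welfare)                   # classical utilitarian: add linearly
prioritarian = sum(sqrt(w) for w in welfare)  # concave transform: gains to the
                                              # worse-off count for more
print(linear_sum, prioritarian)               # 14.0 vs. 6.0
```

Only the first aggregation is utilitarian in the linear-sum sense; the second keeps quantified welfare but rejects linearity, which is why the features need to be evaluated separately.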
I suppose it’s easiest for me to just make the blanket metaphilosophical claim that normative ethics without well-justified meta-ethics isn’t a real contender for the position of actual morality. So I’m unsatisfied with all normative ethics. I just think that utilitarianism is an especially ugly hack. I dislike fake non-arbitrariness.
You went into the kitchen cupboard
Got yourself another hour, and you gave
Half of it to me
We sat there looking at the faces
Of the strangers in the pages
’Til we knew ’em mathematically

They were in our minds
Until forever
But we didn’t mind
We didn’t know better

So we made our own computer
Out of macaroni pieces
And it did our thinking
While we lived our lives
It counted up our feelings
And divided them up even
And it called our calculation
Perfect love [lives?]

Didn’t even know
That love was bigger
Didn’t even know
That love was so, so
Hey hey hey

Hey this fire, this fire
It’s burning us up
Hey this fire
It’s burning us
Oh, oo oo oo, oo oo oo oo

So we made the hard decision
And we each made an incision
Past our muscles and our bones
Saw our hearts were little stones

Pulled ’em out, they weren’t beating
And we weren’t even bleeding
As we laid them on the granite counter top

We beat ‘em up against each other
We beat ‘em up against each other
We struck ‘em hard against each other
We struck ’em so hard, so hard until they sparked

Hey this fire, this fire
It’s burning us up
Hey this fire
It’s burning us up
Hey this fire

It’s burning us
Oh, oo oo oo, oo oo oo oo
Oo oo oo oo oo oo

— Regina Spektor, The Calculation
Perhaps I show my ignorance. Pleasure-happiness and preference fulfillment are the only maximands I’ve seen suggested by utilitarians. A quick Google search hasn’t revealed any others. What are the alternatives?
I’m unfortunately too lazy to make my case for retardedness: I disagree with enough of its features and motivations that I don’t know where to begin, and I wouldn’t know where to end.
Eudaimonia. “Thousand-shardedness”. Whatever humans’ complex values decide constitutes an intrinsically good life for an individual.
It’s possible that I’ve been mistaken in claiming that, as a matter of standard definition, any maximization of linearly summed “welfare” or “happiness” counts as utilitarianism. But it seems like a more natural place to draw the boundary than “maximization of either linearly summed preference satisfaction or linearly summed pleasure indicators in the brain but not linearly summed eudaimonia”.
That sounds basically the same as what I’d been thinking of as preference utilitarianism. Maybe I should actually read Hare.
What’s your general approach to utilitarianism’s myriad paradoxes and mathematical difficulties?
I don’t think you need to explicitly address the alternatives to MWI to decide in favor of MWI. You can simply note that all interpretations of quantum mechanics either 1) fail to specify which worlds exist, 2) specify which worlds exist but do so through a burdensomely detailed mechanism, or 3) admit that all the worlds exist, noting that worlds splitting via decoherence is implied by the rest of the physics. Am I missing something?
If “all the worlds” includes the non-classical worlds, MWI is observationally false. Whether and how decoherence produces classical worlds is a topic of ongoing research.
Is that a response to my point specifically or a general observation? I don’t think “simply noting” is nearly enough justification to decide strongly in favor of MWI—maybe it’s enough to decide in favor of MWI, but it’s not enough to justify confident MWI evangelism nor enough to make bold claims about the failures of science and so forth. You have to show that various specific popular interpretations fail tests 1 and 2.
ETA: Tapping out because I think this thread is too noisy.
I suppose? It’s hard for me to see how there could even theoretically exist a mechanism such as in 2 that failed to be burdensome. But maybe you have something in mind?
It always seems that way until someone proposes a new theoretical framework; afterwards, it seems like people were insane for not coming up with said framework sooner.
Well, the Transactional Interpretation, for example.
That would have been my guess. I don’t really understand the transactional interpretation; how does it pick out a single world without using a burdensomely detailed mechanism to do so?