I read your post on habit theory, and I liked it, but I don’t think it’s an answer to the question “What should I do?”
It’s interesting to say that if you’re an artist, you might get more practical use out of virtue theory, and if you’re a politician, you might get more practical use out of consequentialism. I’m not sure who it is that faces more daily temptations to break the rules than the rest of us; bankers, I suppose, and maybe certain kinds of computer security experts.
Anyway, saying that morality is a tool doesn’t get you out of the original need to decide which lifestyle you want in the first place. Should I be an artist, or a politician, or a banker? Why? Eliezer’s answer is that there are no shortcuts and no frills here; you check and see what your brain says about what you ‘should’ do, and that’s all there is to it. This is not exactly a brilliant answer, but it may nevertheless be the best one out there. I’ve never yet heard a moral theory that made more sense than that, and believe me, I’ve looked.
It’s reasonable to insist that people put their conclusions in easily digestible bullet points to convince you to read the rest of what they’ve written...but if, noting that there are no such bullet points, you make the decision not to read the body text—you should probably refrain from commenting on the body text. A license to opt-out is not the same thing as a license to offer serious criticism. Eliezer may be wrong, but he’s not stupid, and he’s not crazy. If you want to offer a meaningful critique of his ideas, you’ll have to read them first.
but if, noting that there are no such bullet points, you make the decision not to read the body text—you should probably refrain from commenting on the body text. A license to opt-out is not the same thing as a license to offer serious criticism. Eliezer may be wrong, but he’s not stupid, and he’s not crazy.
This is sound general advice, but at least one observation makes this situation exceptional: Yudkowsky’s conclusions about ethics are never summarized in terms that contradict my take. I don’t think your rendition, for example, contradicts mine. I’m certainly not surprised to hear his position described the way you describe it:
Anyway, saying that morality is a tool doesn’t get you out of the original need to decide which lifestyle you want in the first place. Should I be an artist, or a politician, or a banker? Why? Eliezer’s answer is that there are no shortcuts and no frills here; you check and see what your brain says about what you ‘should’ do, and that’s all there is to it.
Now, I don’t think the decision of whether to be an artist, politician, or banker is a moral decision. It isn’t one you make primarily because of what’s ethically right or wrong. To the extent you do (and in the restricted sense that you do), your prior moral habits are your only guide.
But we’re looking at whether Yudkowsky’s position is intellectually respectable, not whether objective morality (which he is committed to but I deny) exists. To say that we look at what our brain says when we’re fully informed is essentially to say that we seek a reflective equilibrium in solving moral problems. So far so good. But the position goes further, claiming that brains compute some specific function that generally determines when individuals reach that equilibrium. Leaving aside that this is implausible speculation, since it requires that the terms of moral judgments be hardwired, and hardwired identically in each individual, it also simply fails to answer Moore’s open question, although Yudkowsky claims he has that answer. There’s nothing prima facie ethically compelling about what our brains happen to tell us is moral, and no reason we should necessarily follow our brains’ hardwiring. I could consistently choose to consider my brain’s hardwired moralisms maladaptive or even despicable holdovers from the evolutionary past that I choose to override as much as I can.
Robin Hanson actually asked the right question. If what the brain computes is moral, what does it correspond to that makes it moral? Unless you think the brain is computing a fact about the world, you can’t coherently regard its computation as “accurate.” But if not, what makes it special and not just a reflex?
I do feel a bit guilty about criticizing Yudkowsky without reading all of him. But he seems to express his ideas at excessive and obfuscating length, and if there were more to them, I feel somewhat confident I’d come across his answers. It isn’t as though I haven’t skimmed many of these essays. And his answers would certainly deserve some reflection in his summation essay.
There’s no question Yudkowsky is no idiot. But he has some ideas that I think are stupid, like his “metaethics,” and he expresses them in a somewhat “crazy” manner, exuding grandiose self-confidence. Being surrounded by, and discussing mostly with, people who agree with him is probably part of the cause.
As someone who has read Eliezer’s metaethics sequence, let me say that what you think his position is, is only somewhat related to what it actually is; and also, that he has answered those of your objections that are relevant.
It’s fine that you don’t want to read 30+ fairly long blog posts, especially if you dislike the writing style. But then, don’t try to criticize what you’re ignorant about. And no, openly admitting that you haven’t read the arguments you’re criticizing, and claiming that you feel guilty about it, doesn’t magically make it more acceptable. Or honest.
One doesn’t need to have read the whole Bible to criticize it. But the Bible is a fairly short work, so an even more extreme example might be better: one doesn’t need to have read the entire Talmud to criticize it.
It’s fine that you don’t want to read 30+ fairly long blog posts, especially if you dislike the writing style. But then, don’t try to criticize what you’re ignorant about. And no, openly admitting that you haven’t read the arguments you’re criticizing, and claiming that you feel guilty about it, doesn’t magically make it more acceptable. Or honest.
It’s hardly “dishonest” to criticize a position based on a 7,000-word summary statement while admitting you haven’t read the whole corpus! You’re playing with words to make a moralistic debating point: dishonesty involves deceit, and everyone has been informed of the basis for my opinions.
Consider the double standard involved. Yudkowsky lambasts “philosophers” and their “confusions”—their supposedly misguided concerns with the issues other philosophers have commented on to the detriment of inquiry. Has Yudkowsky read even a single book by each of the philosophers he dismisses?
In a normal forum, participants supply the arguments supposedly missed by critics who are only partially informed. Here there are vague allusions to what the Apostle Yudkowsky (prophet of the Singularity God) “answered” without any substance. An objective reader will conclude that the Prophet stands naked; the prolixity is probably intended to discourage criticism.
I think the argument you make in this comment isn’t a bad one, but the unnecessary and unwarranted “Apostle Yudkowsky (prophet of the Singularity God)” stuff amounts to indirectly insulting the people you’re talking with and makes them far less likely to realize that you’re actually also saying something sensible. If you want to get your points across, as opposed to just enjoying a feeling of smug moral superiority while getting downvoted into oblivion, I strongly recommend leaving that stuff out.
Thanks for the advice, but my purpose, given that I’m an amoralist, isn’t to enjoy a sense of moral superiority. Rather, it is to test a forum toward which I’ve felt ambivalent for several years, mainly for my own benefit but also for that of any objective observers.
Strong rhetoric is often necessary in an unreceptive forum because it announces that the writer considers his criticisms fundamental. If I state the criticisms neutrally, something I’ve often tried, they are received as minor—like the present post. They may even be voted up, but they have little impact. Strong language is appropriate in expressing severe criticisms.
How should a rationalist forum respond to harsh criticism? It isn’t rational to fall prey to the primate tendency toward in-group thinking by failing to adjust for any sense of personal insult when the group leader is lambasted. Judging by the reactions, the tendency toward in-group thought is stronger here than in many forums that don’t claim the mantle of rationalism. This is partly because the members are more intelligent than in most other forums, and intelligence affords more adept self-deception.

This is why it is particularly important for intelligent people to be rationalists, but only if they honestly strive to apply rational principles to their own thinking. Instead, rationality here serves to excuse participants’ own irrationality: participants simply accept their own tendency to dismiss posts as worthless because they contain material they find insulting. Evolutionary psychology, for instance, here serves to produce rationalizations rather than rationality. (Overcoming Bias exhibits this perversion of rationalism in a still more extreme form, although the tendency isn’t expressed in formal comment policies.)
“Karma” means nothing to me except as it affects discourse; I despise even the term, which stinks of Eastern mysticism. I’m told that the karma system of incentives, which, as any rationalist should understand, vitally affects the character of discussion, was transplanted from Reddit. How is such a failure to attend to the vital mechanics of discussion and incentives rational? Laziness? How could policies so essential be relegated to the back seat?
Participants, I’m told, don’t question the karma system because it works. A rationalist doesn’t think that way. He says, “If a system of incentives introduced without forethought and subject to sound criticisms (where even its name is an insult to rationality) produces the discourse that we want, then something must be wrong with what we want!” What’s wanted is the absence of any tests of ideology by fundamental dissent.
Consider the double standard involved. Yudkowsky lambasts “philosophers” and their “confusions”—their supposedly misguided concerns with the issues other philosophers have commented on to the detriment of inquiry. Has Yudkowsky read even a single book by each of the philosophers he dismisses?
Some of them are simply not great writers. Hegel, for example, is just awful: the few coherent ideas in Hegel are more usefully described by later writers. There’s also a strange aspect to this, in that you are complaining about Eliezer not having read books while simultaneously defending your criticism of Eliezer’s metaethics without having read all his posts. Incidentally, if one wants to criticize Eliezer’s knowledge of philosophy, the better point is not the philosophers he criticizes without reading, but rather the relevant philosophers he seems unaware of, many of whom would agree with some of his points. Quine and Lakatos are the most obvious ones.
Here there are vague allusions to what the Apostle Yudkowsky (prophet of the Singularity God) “answered” without any substance. An objective reader will conclude that the Prophet stands naked; the prolixity is probably intended to discourage criticism.
I strongly suspect that your comments would be responded to more positively if they didn’t frequently end with this sort of extreme rhetoric, which has more emotional content than rational dialogue. It is particularly a problem because in the LW interface, the up/down buttons sit at the end of everything one has read, so the last few sentences may have a disproportionate impact on whether people upvote or downvote and on what they focus on in their replies.
Frankly, you have some valid points, but they are getting lost in the rhetoric. We know that you think that LW pattern matches to religion. Everyone gets the point. You don’t need to repeat that every single time you make a criticism.
I could consistently choose to consider my brain’s hardwired moralisms maladaptive or even despicable holdovers from the evolutionary past that I choose to override as much as I can.
And you would be making the decision to override with… what, your spleen?
Another part of my brain—besides the part computing the morality function Yudkowsky posits.
Surely you can’t believe Yudkowsky simply means whatever our brain decides is “moral”—and that he offers that as a solution to anything?
I’m not saying he’s right, just that your proposed alternative isn’t even wrong.