I do apologize for coming late to the party; I’ve been reading, and really feel like I’m missing an inferential step that someone can point me towards.
I’ll try to briefly summarize, knowing that I’ll gloss over some details; hopefully, the details so glossed over will help anyone who wishes to help me find the missing step.
It seems to me that Eliezer’s philosophy of morality (as presented in the metaethics sequence) is: morality is the computation that decides which action is right (or which of N actions is the most right) by determining which action maximizes a complex system of interrelated goals (e.g. happiness, freedom, beauty, etc.). Each goal is assumed to be stated in such a way that “maximizes” is the appropriate word (i.e. given a choice between “maximize X” and “minimize ~X”, the former wording is chosen; “maximize happiness” rather than “minimize unhappiness”).
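To check my own reading, here is a minimal sketch (in Python, with goal names and weights I invented purely for illustration; none of this is Eliezer’s actual formulation) of what I take “morality as a computation” to mean:

```python
# A toy paraphrase of "morality as a computation": score each candidate action
# against a weighted system of goals and call the highest-scoring action "right".
# The goals and weights below are invented for illustration only.

GOALS = {"happiness": 1.0, "freedom": 0.8, "beauty": 0.5}

def moral_score(effects):
    """effects maps each goal to how much a given action furthers it."""
    return sum(weight * effects.get(goal, 0.0) for goal, weight in GOALS.items())

def most_right(actions):
    """Return the action (a dict with an 'effects' key) whose weighted total is highest."""
    return max(actions, key=lambda action: moral_score(action["effects"]))
```

On this reading, “right” is just the label attached to whichever action wins the maximization.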
Further, humanity must necessarily share the same fundamental morality (system of goals) due to evolutionary psychology, by analogy with natural selection’s insistence that we share the same fundamental design.
One of Eliezer’s primary examples is Alice and Bob’s apparent disagreement over the morality of abortion, which must, it seems, come down to one of them having incomplete information (at least relative to the other). His other primary example is the Pebblesorting People, who have a completely different outlook on life, to the point where they don’t recognize “right” but only “p-right”.
My first problem (which may well be a missed inferential step) is with the assumed universality, within humanity, of a system of goals. Most humans agree that freedom is a good thing, but a large minority doesn’t believe it is the most important thing (China comes to mind; my understanding is that a great many of China’s citizens don’t care that their government is censoring their own history). In point of fact, it seems that “freedom”, and especially individual freedom, is a relatively modern invention. But, let’s visit Alice and Bob.
Alice believes that abortion is morally acceptable, while Bob disagrees. Eliezer’s assertion seems to be that this disagreement means that either Alice or Bob has incomplete information (a missing fact, argument, or both, but information). Why is it not possible simply that Alice holds freedom as more important than life and Bob the reverse? A common argument against abortion holds that the future life of the fetus is more important than the restriction on the pregnant woman’s freedom of choice. Eliezer’s morality seems to imply that that statement must, a priori, be either true or false; it cannot be an opinion like “walnuts taste better than almonds” or even “Christmas is more important than Easter” (without the former, the latter would not be possible; without the latter, the former would not be important).
It seems to be a priori true that 2+2=4. Why does it necessarily hold that “life is more (or less) important than choice”? And, for the latter, how would we decide between “more” and “less”? What experiment would tell us the difference; how would a “more” world differ from a “less” world?
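To make my confusion concrete, here is a toy sketch (all numbers invented) of two people who agree on every fact and share the same two goals, yet reach opposite verdicts simply because they weight the goals differently; I can’t see what experiment would tell us which set of weights is “correct”:

```python
# Same facts, same goals, different weights: no missing fact or argument
# resolves the disagreement. All numbers are invented for illustration.

facts = {"life": 1.0, "choice": 1.0}   # both parties agree on the facts of the case

alice = {"life": 0.4, "choice": 0.9}   # Alice weights choice over life
bob   = {"life": 0.9, "choice": 0.4}   # Bob weights life over choice

def verdict(weights):
    life_side = weights["life"] * facts["life"]
    choice_side = weights["choice"] * facts["choice"]
    return "acceptable" if choice_side > life_side else "unacceptable"

print(verdict(alice))  # acceptable
print(verdict(bob))    # unacceptable
```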
I also have a question about the Pebble-people: how is it that humans have discovered “right” while the Pebble-people have discovered only “p-right”? Even if I grant the assertion that all humans are using the same fundamental morality, and Alice and Bob would necessarily agree if they had access to the same information, how is it that humans have discovered “right” and not “h-right”? H-morality values abstract concepts like beauty, art, love, and freedom; p-morality values concrete things like pleasingly prime piles of pebbles. P-morality doesn’t prohibit beauty or art, but doesn’t value them—it is apathetic towards them except insofar as they further proper pebble piles. Similarly, h-morality is apathetic towards prime piles of pebbles, except insofar as they are beautiful or artistic. Yet Eliezer asserts that h-morality is better, is closer to morality. I don’t see the supporting evidence, though, except that humans invented h-morality, so h-morality matches what we expect to see when we look for morality, so h-morality must be closer to morality than p-morality is. This looks like a circular trip through Cultural Relativism and Rule Utilitarianism (albeit with complex rules) with strong anthropic favoritism. I really feel like I’m missing something here.
A standard science fiction story involves two true aliens meeting for the first time, and trying to exist in the same universe without exterminating each other in the process. How would this philosophy of morality lead the aliens to conclude that humans should be allowed to flourish even if we may in principle become a threat to their existence (or vice versa)? H-morality includes freedom of individual choice as a good thing; a-morality (alien, not “amoral”) may not—perhaps they evolved something more akin to an ant colony, where only a few have any real choice. How would h-morality (being closer to morality than a-morality, of course) persuade them to stop using slave labor to power their starships or gladiatorial combat to entertain their citizens?
Fundamentally, though, I don’t see what the difference is between Eliezer’s philosophy of morality and simply saying “morality is the process by which one decides what is right, and ‘what is right’ is the answer one gets back from running one’s possible actions through morality”. It doesn’t seem to offer any insight into how to live, or how to choose between two mutually exclusive actions, both of which are in classically morally gray areas (e.g. might it be right to steal to feed one’s hungry family?); as I understand it, Eliezer’s morality simply says “do whatever the computation tells you to do” without offering any help on what that computation actually looks like (though, as I said, it looks suspiciously like Cultural [Personal?] Relativism blended with Rule Utilitarianism).
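The circularity I am worried about, caricatured in code (my own caricature, not anything Eliezer wrote): “right” is defined as whatever the morality computation outputs, while the computation itself is never actually specified.

```python
# My caricature of the apparent circularity: "right" is whatever morality()
# returns, and morality() itself is never spelled out, so the definition
# gives no guidance in the genuinely hard cases.

def morality(action):
    raise NotImplementedError("the complex system of interrelated goals")

def is_right(action):
    return morality(action)   # "what is right" = whatever the computation says
```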
As I said, I really feel like I’m missing some small, key detail or inferential step. Please, take pity on this neophyte and help me find The Way.
My first problem (which may well be a missed inferential step) is with the assumed universality, within humanity, of a system of goals.
From what I’ve seen, others have the same objection; I do as well, and I have not seen an adequate response.
how is it that humans have discovered “right” while the Pebble-people have discovered only “p-right”? Even if I grant the assertion that all humans are using the same fundamental morality, and Alice and Bob would necessarily agree if they had access to the same information, how is it that humans have discovered “right” and not “h-right”?
From what I understand, pretty much everyone except Eliezer holds the view that what he found is “h-right”, but he seems unwilling to call it that even when pressed on the matter. It’s another point on which I share your confusion.
as I understand it, Eliezer’s morality simply says “do whatever the computation tells you to do” without offering any help on what that computation actually looks like
We don’t have quite the skill to articulate it just yet, but possibly AI and neuroscience will help. If not, we might be in trouble.
As I said, I really feel like I’m missing some small, key detail or inferential step. Please, take pity on this neophyte and help me find The Way.
I assign a high probability that Eliezer is wrong, or at the least is providing a very incomplete model of metaethics. This sequence is the one I disagree with most. Personally, I think you have a good grasp of what he’s said, and of its weaknesses.
Yet Eliezer asserts that h-morality is better, is closer to morality. I don’t see the supporting evidence, though, except that humans invented h-morality, so h-morality matches what we expect to see when we look for morality, so h-morality must be closer to morality than p-morality is.
“Better” and “closer to morality” and “h-morality” refer to the same thing here. “H-morality is better” roughly means “better is better”. Seeing no evidence that h-morality is better is like seeing no evidence that 2=2.
As far as I can see, this is why Eliezer doesn’t bother calling morality “h-morality”, though I might be mistaken.