To me, it seems you are engaging with the ChatGPT summary. You can find more about it in Metaethics, e.g.:
> Eliezer Yudkowsky wrote a Sequence about metaethics, the Metaethics sequence, which Yudkowsky worried failed to convey his central point (this post by Luke tried to clarify); he approached the same problem again from a different angle in Highly Advanced Epistemology 101 for Beginners. From a standard philosophical standpoint, Yudkowsky’s philosophy is closest to Frank Jackson’s moral functionalism / analytic descriptivism. Yudkowsky could be loosely characterized as a moral cognitivist—someone who believes moral sentences are either true or false—but not a moral realist, thus denying that moral sentences refer to facts about the world. Yudkowsky believes that moral cognition in any single human is at least potentially about a subject matter that is ‘logical’ in the sense that its semantics can be pinned down by axioms, and hence that moral cognition can bear truth-values; also that human beings using similar words like “morality” can be talking about highly overlapping subject matter; but not that all possible minds would find the truths about this subject matter to be psychologically compelling.
That there is no gears-level specification is exactly the problem he points out! We don’t know how to specify human values, and I think he makes a good case for that point, and for the claim that such a specification is needed for alignment.
> That there is no gears-level specification is exactly the problem he points out! We don’t know how to specify human values
And we don’t know that human values exist as a coherent object either. So his metaethics is “ethics is X” where X is undefined and possibly nonexistent.
I am not asking because I want to know. I was asking because I wanted you to think about those sequences, and what they are actually saying, and how clear they are. Which you didn’t.
Why would it matter what an individual commenter says about the clarity of the Sequences? A better measure would be what a large number of readers think about how clear they are. We could run a poll, but I think there is already a measure: the votes. Though votes don’t measure clarity so much as how useful people found the posts, and maybe that is the better measure anyway. Another metric would be the growth in readership while the Sequences were being published. By that measure, especially given the niche subject, they seem to be of excellent quality.
But just to check, I read one high-vote (130) and one low-vote (21) post from the Metaethics sequence, and I think they are clear and readable.
I don’t think these are mutually exclusive? The Sequences are long, and some of the posts were better than others. Also, what counts as “clear” can depend on one’s background. All authors have to make some assumptions about the audience’s knowledge. (E.g., at minimum: what language do they speak?) When Eliezer guessed wrong, or was read by people outside his intended audience, those readers might not have been able to fill in the gaps, and clarity suffered for them, but not for everyone.
> > > To me, it seems you are engaging with the ChatGPT summary. You can find more about it in Metaethics, e.g.
> >
> > That there is no gears-level specification is exactly the problem he points out! We don’t know how to specify human values, and I think he makes a good case for that point, and for the claim that such a specification is needed for alignment.
>
> And we don’t know that human values exist as a coherent object either. So his metaethics is “ethics is X” where X is undefined and possibly nonexistent.
He doesn’t say “ethics is X”, and I disagree that this is a summary that advances the conversation.
For any value of X?
> > I am not asking because I want to know. I was asking because I wanted you to think about those sequences, and what they are actually saying, and how clear they are. Which you didn’t.
>
> Why would it matter what an individual commenter says about the clarity of the Sequences? A better measure would be what a large number of readers think about how clear they are. We could run a poll, but I think there is already a measure: the votes. Though votes don’t measure clarity so much as how useful people found the posts, and maybe that is the better measure anyway. Another metric would be the growth in readership while the Sequences were being published. By that measure, especially given the niche subject, they seem to be of excellent quality.
>
> But just to check, I read one high-vote (130) and one low-vote (21) post from the Metaethics sequence, and I think they are clear and readable.
Yes, lots of people think the Sequences are great. Lots of people also complain about EY’s lack of clarity. So something has to give.
The fact that it seems to be hugely difficult for even favourably inclined people to distill his arguments is evidence in favour of unclarity.
> I don’t think these are mutually exclusive? The Sequences are long, and some of the posts were better than others. Also, what counts as “clear” can depend on one’s background. All authors have to make some assumptions about the audience’s knowledge. (E.g., at minimum: what language do they speak?) When Eliezer guessed wrong, or was read by people outside his intended audience, those readers might not have been able to fill in the gaps, and clarity suffered for them, but not for everyone.
I agree that it is evidence to that end.