Can you summarise EY’s views on consciousness and on ethics?
Luckily, Eliezer has written enough about it that you can just ask ChatGPT exactly that:
EY’s Views on Consciousness and Ethics
Model: GPT-4
User
summarise EY’s views on consciousness and on ethics
ChatGPT
EY, or Eliezer Yudkowsky, is a prominent artificial intelligence (AI) researcher and writer known for his work on rationality, AI alignment, and friendly AI. While I can’t provide an exhaustive account of his views, I can give a brief summary of some of his key ideas on consciousness and ethics.
Consciousness:
Reductionism: Yudkowsky approaches consciousness from a reductionist perspective, which means he believes that consciousness can be understood and explained in terms of simpler, lower-level components.
No “magic”: He argues against the idea that there is some sort of “magic” or inherently inexplicable aspect to consciousness. Instead, he sees it as a natural phenomenon that can be understood through scientific inquiry.
Importance of AI: Yudkowsky believes that understanding consciousness is crucial for the development of AI, particularly AI alignment, as it will help us build artificial intelligences that can comprehend and interact with human values and emotions.
Ethics:
Meta-ethics: Yudkowsky’s approach to ethics is rooted in meta-ethics, which focuses on understanding the underlying nature and structure of ethics itself.
Reflective equilibrium: He advocates for the concept of “reflective equilibrium,” in which we arrive at ethical conclusions by balancing and adjusting our intuitions, principles, and beliefs to achieve a coherent and stable ethical framework.
Coherent extrapolated volition (CEV): Yudkowsky has proposed the concept of CEV as a way to align AI systems with human values. CEV involves using AI to model the values that humanity would converge upon if given enough time and resources to reflect, discuss, and reason together.
Friendly AI: Yudkowsky emphasizes the importance of developing AI that is both intelligent and aligned with human values. He believes that ensuring AI safety and value alignment is a critical challenge for humanity.
In conclusion, Eliezer Yudkowsky’s views on consciousness emphasize reductionism and the importance of understanding it for AI development. His ethical approach is grounded in meta-ethics, with a focus on reflective equilibrium and coherent extrapolated volition for developing friendly AI.
Neither of those is an actual theory. Saying consciousness is reductionistic, somehow, doesn’t tell you how it works, any more than saying it is emergent, somehow. Saying the true ethics are what you get out of CEV also doesn’t specify anything, because there is no gears-level specification of CEV.
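To make the objection concrete, here is a minimal sketch of what a gears-level specification of CEV would have to fill in. Every name and type in it is hypothetical; nothing below comes from Yudkowsky except the quoted wish. Writing the type signature is the easy part; the entire theory would have to live in the function bodies that nobody knows how to write:

```python
# Purely illustrative sketch -- every name and type here is hypothetical.
# A "gears-level" account of CEV would have to fill in these bodies,
# and none of them has ever been specified.

from dataclasses import dataclass


@dataclass
class Mind:
    """Stand-in for a full computational model of one person's cognition."""


@dataclass
class ValueFunction:
    """Stand-in for whatever formal object 'values' turn out to be."""


def extrapolate(mind: Mind) -> Mind:
    """The mind 'if we knew more, thought faster, were more the people we
    wished we were' -- stated in the CEV proposal as a wish, not a mechanism."""
    raise NotImplementedError


def read_off_values(mind: Mind) -> ValueFunction:
    """Extract a value function from a mind model. Also unspecified."""
    raise NotImplementedError


def aggregate(volitions: list[ValueFunction]) -> ValueFunction:
    """Combine billions of possibly conflicting extrapolated volitions."""
    raise NotImplementedError


def cev(humanity: list[Mind]) -> ValueFunction:
    """CEV names this function's output without giving its body."""
    return aggregate([read_off_values(extrapolate(m)) for m in humanity])
```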
To me, it seems you are engaging with the ChatGPT summary. You can find more about it in the Metaethics sequence, e.g.:
Eliezer Yudkowsky wrote a Sequence about metaethics, the Metaethics sequence, which Yudkowsky worried failed to convey his central point (this post by Luke tried to clarify); he approached the same problem again from a different angle in Highly Advanced Epistemology 101 for Beginners. From a standard philosophical standpoint, Yudkowsky’s philosophy is closest to Frank Jackson’s moral functionalism / analytic descriptivism; Yudkowsky could be loosely characterized as moral cognitivist—someone who believes moral sentences are either true or false—but not a moral realist—thus denying that moral sentences refer to facts about the world. Yudkowsky believes that moral cognition in any single human is at least potentially about a subject matter that is ‘logical’ in the sense that its semantics can be pinned down by axioms, and hence that moral cognition can bear truth-values; also that human beings both using similar words like “morality” can be talking about highly overlapping subject matter; but not that all possible minds would find the truths about this subject matter to be psychologically compelling.
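A gloss on “pinned down by axioms,” by analogy (mine, not part of the quote):

```latex
% Arithmetic is 'pinned down by axioms' in exactly this sense: the
% Peano axioms fix the subject matter, so a sentence like
\mathrm{PA} \vdash 2 + 2 = 4
% is true independently of whether any particular mind finds
% arithmetic motivating. The quoted claim is that "morality" is
% logical in the same way: fixed by (implicit, very complicated)
% axioms, hence truth-apt, yet not compelling to all possible minds.
```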
That there is no gears-level specification is exactly the problem he points out! We don’t know how to specify human values, and I think he makes a good case pointing that out—and that it is needed for alignment.
And we don’t know that human values exist as a coherent object either. So his metaethics is “ethics is X” where X is undefined and possibly non-existent.
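To make “possibly non-existent” concrete, one standard illustration (mine, not the commenter’s): even when every individual’s preferences are perfectly coherent, majority aggregation can be cyclic, so “the values of the group” need not exist as any coherent ordering. A minimal sketch:

```python
# Condorcet cycle: three voters, each with a coherent individual
# ranking, yet the majority preference is cyclic -- no coherent
# aggregate ranking of {A, B, C} exists.

voters = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]


def majority_prefers(x: str, y: str) -> bool:
    """True if a strict majority ranks x above y (lower index = preferred)."""
    wins = sum(v.index(x) < v.index(y) for v in voters)
    return wins > len(voters) / 2


for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
# Prints True for all three pairs: A beats B, B beats C, C beats A.
```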
He doesn’t say “ethics is X” and I disagree that this is a summary that advances the conversation.
For any value of X?
I am not asking because I want to know. I was asking because I wanted you to think about those sequences, and what they are actually saying, and how clear they are. Which you didn’t.
Why would it matter what an individual commenter says about the clarity of the Sequences? I think a better measure would be what a large number of readers think about how clear they are. We could do a poll, but I think there is already a measure: the votes. But these don’t measure clarity; more something like how useful people found them. And maybe that is a better measure? Another metric would be the increase in the number of readers while the Sequences were published. By that measure, especially given the niche subject, they seem to be of excellent quality.
But just to check, I read one high-vote (130) and one low-vote (21) post from the Metaethics sequence, and I think they are clear and readable.
Yes, lots of people think the Sequences are great. Lots of people also complain about EY’s lack of clarity. So something has to give.
The fact that it seems to be hugely difficult for even favourably inclined people to distill his arguments is evidence in favour of unclarity.
I don’t think these are mutually exclusive? The Sequences are long and some of the posts were better than others. Also, what is considered “clear” can depend on one’s background. All authors have to make some assumptions about the audience’s knowledge. (E.g., at minimum, what language do they speak?) When Eliezer guessed wrong, or was read by those outside his intended audience, such readers might not be able to fill in the gaps, and clarity suffers—for them, but not for everyone.
I agree that it is evidence to that end.