I think the standard LW argument for there being only one morality is based on the psychological unity of mankind. Human minds do not occupy an arbitrary or even a particularly large region of mindspace: the region they occupy is quite small for good reasons. Likewise, the moral theories that human minds adopt occupy quite a small region of moralityspace. The arguments around CEV suggest that these moral theories ought to converge if we extrapolate enough. I am not sure if this exact argument is defended in a LW post.
But essentially, what is grounding morality? Are moral facts contingent; could morality have been different? Is it possible to make it different in the future?
Because you and I are not arbitrary minds in mindspace. Viewed against the entirety of mindspace we are practically identical, and we have minds that care about that.
the region they occupy is quite small for good reasons.
The region is exactly as large as it is. The fact that it has size, and is not a single point, tells you that our moralities are different. In some things, the difference will not matter, and in some it will. It seems we don’t have any problem finding things to fight over. However small you want to say that the differences are, there’s a lot of conflict over them.
The more I look around, the more I see people with fundamentally different ways of thinking and valuing. Now I suppose they have more in common with each other than with banana slugs, and likely they would band together should the banana slugs rise up and launch a sneak attack. But these different kinds of people with different values often don’t seem to want to live in the same world.
Hitchens writes in Newsweek magazine: “Winston Churchill … found it intolerable even to breathe the same air, or share the same continent or planet, as the Nazis.”
(By the way, if anyone can find the original source from Churchill, I’d appreciate it.)
I’d also note that even having contextually identical moralities doesn’t imply a lack of conflict. We could all be psychopaths. Some percentage of us are already there.
Viewed against the entirety of mindspace we are practically identical, and we have minds that care about that.
Seems like our minds care quite a lot about the differences, however small you think they are. The differences aren’t small, by the measure of how much we care about them.
No amount of difference or disagreement makes the slightest impact on realism. Realists accept that some, many, or all people are wrong.
Of course.
No amount of reality need have the slightest impact on moral realists.
Is there any experiment that could be run that would refute moral realism?
Maybe Clippy is right, we should all be clippists, and we’re just all “wrong” to think otherwise. Clippism—the true objective morality. Clippy seems to think so. I don’t, and I don’t care what Clippy thinks in this regard.
Realism does not equate to empiricism.
It also doesn’t equate to non-empiricism, e.g. “Fish do not feel pain, so angling is not cruel.”
3. If you are like a Clippy—an entity that only uses rationality to fulfil arbitrary aims—you won’t be convinced. Guess what? That has no impact on realism whatsoever. A compelling argument is an argument capable of compelling an agent capable of understanding it, and with a commitment to rationality as an end.
Are your aims arbitrary? If not, why are Clippy’s aims arbitrary, and yours not arbitrary?
Clippy doesn’t care about having a coherent set of aims, or about revising and improving its aims.
That doesn’t answer my question.
You asked two questions. My reply was meant to indicate that arbitrariness depends on coherence and extrapolation (revision, reflection), both of which Clippy has rather less of than I do.
I think the standard LW argument for there being only one morality is based on the psychological unity of mankind.
I would very much doubt such an argument: most humans also share the same mechanisms for language learning, but still end up speaking quite different languages. (Yes, you can translate between them and learn new languages, but that doesn’t mean that all languages are the same: there are things that just don’t translate well from one language to another.) Global structure, local content.
I don’t think the analogy to languages holds water. Substituting a word for a different word doesn’t have the kind of impact on what people do that substituting a moral rule for a different moral rule does. Put another way, there are selection pressures constraining what human moralities look like that don’t constrain what human languages look like.
Substituting a word for a different word doesn’t have the kind of impact on what people do that substituting a moral rule for a different moral rule does.
This sounds like a strawman argument to me. It doesn’t refute the argument that part of morality is cultural but based on a shared morality-learning mechanism.
there are selection pressures constraining what human moralities look like that don’t constrain what human languages look like.
There are also selection pressures constraining what human languages look like that don’t constrain what human moralities look like. Or to give another example: there are selection pressures that constrain what dogs look like that don’t constrain what catfish look like, and vice-versa. That doesn’t mean that they don’t also have similarities.
I think the standard LW argument for there being only one morality is based on the psychological unity of mankind.
Having read the Meta-Ethics sequence, this is my belief too. Indeed, Eliezer takes care to index the human-evaluation and pebblesorter-evaluation algorithms, calling the first “morality” and the second “pebblesorting”, but he is careful to avoid talking about Eliezer-morality and MrMind-morality, or even Eliezer-yesterday-morality and Eliezer-today-morality. Of course his aims were different, and compared to differently evolved aliens (or AIs) our morality is truly one of a kind.
But if we magnify our view on morality-space, I think it’s impossible not to recognize that there are differences!
I think that this state of affairs can be explained in this way: while there’s a psychological unity of mankind, it concerns only very primitive aspects of our lives: the existence of joy, sadness, the importance of sex, etc. But our innermost and most basic evaluation algorithm doesn’t cover every aspect of our lives, mainly because our culture poses problems too new for a genetic solution to have spread to the whole population. Thus ad-hoc solutions, derived from culture and circumstances, step in: justice, fairness, laws, and so on. Those solutions may very well vary in time and space, and, our brains being what they are, they sometimes overwrite what should have been the most primitive output. When we talk about morality, we are usually already assuming the most primitive, basic facts about the human evaluation algorithm, and we try to argue about the finer points not covered by the genetic wiring of our brains, such as whether murder is always wrong.
In comparison with pebble-sorters or a clipping AI, humanity exhibits a very narrow way of evaluating reality, to the point that you can talk about a single human algorithm and call it “morality”. But if you zoom in, it is clear that this bedrock of morality doesn’t cover every problem that cultures naturally throw at people, and that’s why you need to invent “patches” or “add-ons” to the original algorithm, in the form of moral concepts like justice, fairness, the sacredness of life, etc. Obviously, different groups of people will come up with different patches. But there are add-ons that were invented a long time ago, and they are now so widespread and ingrained in certain groups’ education that they feel as if they are part of the original primitive morality, while in fact they are not. There are also new problems that require the (sometimes urgent) invention of new patches (e.g. nuclear proliferation, genetic manipulation, birth control), and these are even more problematic and still in a state of transition nowadays.
Is this view unitary, or even realist? In my opinion, philosophical distinctions are too crude and simplistic to correctly categorize the view of morality as “algorithm + local patches”. Maybe it needs a whole new category, something like “algorithmic theories of morality” (although the category of “synthetic ethical naturalism” comes close to capturing the concept).
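As a minimal sketch of the “shared algorithm + local patches” picture above (everything here, from the function names to the weights and patch rules, is invented purely for illustration and comes from no actual post or codebase), two groups can share the same base evaluation while their culturally local patches override it in different ways:

```python
def base_evaluation(situation):
    """The roughly shared, evolutionarily old reactions (the 'psychological unity' part)."""
    score = 0.0
    score -= 10.0 * situation.get("suffering", 0)
    score += 5.0 * situation.get("joy", 0)
    return score

# Culturally local "patches": different groups bolt different rules onto the
# same base algorithm, and a patch can override the primitive output entirely.
patches_culture_a = [
    lambda s, base: base - 100.0 if s.get("breaks_fairness_norm") else base,
]
patches_culture_b = [
    lambda s, base: base - 100.0 if s.get("violates_sacred_rule") else base,
]

def evaluate(situation, patches):
    verdict = base_evaluation(situation)
    for patch in patches:
        verdict = patch(situation, verdict)
    return verdict

situation = {"suffering": 1, "joy": 2, "breaks_fairness_norm": True}
print(evaluate(situation, patches_culture_a))  # -100.0 (the local patch dominates)
print(evaluate(situation, patches_culture_b))  #    0.0 (same base algorithm, different patch)
```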
The arguments around CEV suggest that these moral theories ought to converge.
In the practical sense, only something in particular can be done with the world, so if “morality” is taken to refer to the goal given to a world-optimizing AI, it should be something specific by construction. If we take “morality” as given by the data of individual people, we can define personal moralities for each of them that would almost certainly be somewhat different from each other. Given the task of arriving at a single goal for the world, it might prove useful to exploit the similarities between personal moralities, or to sidestep this concept altogether, but eventual “convergence” is more of a design criterion than a prediction. In a world that had both humans and pebblesorters in it, arriving at a single goal would still be an important problem, even though we wouldn’t expect these goals to “naturally” converge under reflection.
I think the standard LW argument for there being only one morality is based on the psychological unity of mankind. Human minds do not occupy an arbitrary or even a particularly large region of mindspace: the region they occupy is quite small for good reasons. Likewise, the moral theories that human minds adopt occupy quite a small region of moralityspace. The arguments around CEV suggest that these moral theories ought to converge if we extrapolate enough. I am not sure if this exact argument is defended in a LW post.
This sounds like ethical subjectivism (that ethical sentences are propositions about the attitudes of people). I’m quite amenable to ethical subjectivism but it’s an anti-realist position.
See this comment. If Omega changed the attitudes of all people, that would change what those people mean when they say morality-in-our-world, but it would not change what I mean (here, in the real world rather than the counterfactual world) when I say morality-in-the-counterfactual-world, in the same way that if Omega changed the brains of all people so that the meanings of “red” and “yellow” were switched, that would change what those people mean when they say red, but it would not change what I mean when I say red-in-the-counterfactual-world.
I deal with exactly this issue in a post I made a while back (admittedly it is too long). It’s an issue of levels of recursion in our process of modelling reality (or a counterfactual reality). Your moral judgments aren’t dependent on the attitudes of people (including yourself) that you are modeling (in this world or in a counterfactual world): they’re dependent on the cognitive algorithms in your actual brain.
In other words, the subjectivist account of morality doesn’t say that people look at the attitudes of people in the world and then conclude from that what morality says. We don’t map attitudes and then conclude from those attitudes what is and isn’t moral. Rather, we map the world, and then our brains react emotionally to facts about that world and project our attitudes onto them. So morality doesn’t change in a world where people’s attitudes change, because you’re using the same brain to make moral judgments about the counterfactual world as you use to make moral judgments about this world.
The post I linked to has some diagrams that make this clearer.
As for the linked comment, I am unsure there is a single, distinct, and unchanging logical object to define—but if there is one I agree with the comment and think that defining the algorithm that produces human attitudes is a crucial project. But clearly an anti-realist one.
Right, but that is strong evidence that morality isn’t an externally existing object.
I’m not sure what you mean by this.
Real objects are subject to counterfactual alterations.
Yes, but logical objects aren’t.
...if there is one I agree with the comment and think that defining the algorithm that produces human attitudes is a crucial project. But clearly an anti-realist one.
If I said “when we talk about Peano arithmetic, we are referring to a logical object. If counterfactually Peano had proposed a completely different set of axioms, that would change what people in the counterfactual world mean by Peano arithmetic, but it wouldn’t change what I mean by Peano-arithmetic-in-the-counterfactual-world,” would that imply that I’m not a mathematical Platonist?
I literally just edited my comment for clarity. It might make more sense now. I will edit this comment with a response to your point here.
Edit:
If I said “when we talk about Peano arithmetic, we are referring to a logical object. If counterfactually Peano had proposed a completely different set of axioms, that would change what people in the counterfactual world mean by Peano arithmetic, but it wouldn’t change what I mean by Peano-arithmetic-in-the-counterfactual-world,” would that imply that I’m not a mathematical Platonist?
Any value system is a logical object. For that matter, any model of anything is a logical object. Any false theory of physics is a logical object. Theories of morality and of physics (logical objects both) are interesting because they purport to describe something in the world. The question before us is: do normative theories purport to describe an object that is mind-independent, or an object that is subjective?
Okay. I don’t think we actually disagree about anything. I just don’t know what you mean by “realist.”
Yes, that sounds right.
So morality doesn’t change in a world where people’s attitudes change because you’re using the same brain to make moral judgments about the counterfactual world as you use to make moral judgments about this world.
This sounds like ethical subjectivism (that ethical sentences are propositions about the attitudes of people). I’m quite amenable to ethical subjectivism but it’s an anti-realist position.
OK, suppose that this is an anti-realism position. People’s attitudes exist, but this isn’t what we mean by morality existing. Is that how it follows as an anti-realist position?
I was intrigued by a comment you made some time ago that you are not a realist, so you wonder what it is that everyone is arguing about. What is your position on ethical subjectivism?
OK, suppose that this is an anti-realism position. People’s attitudes exist, but this isn’t what we mean by morality existing. Is that how it follows as an anti-realist position?
So here is a generic definition of realism (in general, not for morality in particular):
a, b, and c and so on exist, and the fact that they exist and have properties such as F-ness, G-ness, and H-ness is (apart from mundane empirical dependencies of the sort sometimes encountered in everyday life) independent of anyone’s beliefs, linguistic practices, conceptual schemes, and so on.
E.g., a realist position on ghosts doesn’t include the position that a “ghost” is a kind of hallucination people have, even though in that case there is something (the hallucination) that exists.
What is your position on ethical subjectivism?
I think it is less wrong than every variety of moral realism but I am unsure if moral claims are reports of subjective attitudes (subjectivism) or expressions of subjective attitudes (non-cognitivism). But I don’t think that distinction matters very much.
Luckily, I live in a world populated by entities who mostly concur with my attitudes regarding how the universe should be. This lets us cooperate and formalize procedures for determining outcomes that are convivial to our attitudes. But these attitudes are the result of a cognitive structure determined by natural selection and cultural transmission, altered by reason and language. As such, they contain all manner of kludgey artifacts and heuristics that respond oddly to novel circumstances. So I find it weird that anyone thinks they can be described by something like preference utilitarianism or Kantian deontology. Those are the kind of parsimonious, elegant theories that we expect to find governing natural laws, not culturally and biologically evolved structures. In fact, Kant was emulating Newton.
Attitudes produced by human brains are going to be contextually inconsistent, subject to framing effects, unable to process most novel inputs, cluttered, etc. What’s more, since our attitudes aren’t produced by a single, universal utility function but by a cluster of heuristics, most moral disagreements are going to be the result of certain heuristics being more dominant in some people than in others. That makes these grand theories about these attitudes silly to argue about: positions aren’t determined by things in the universe or by logic. They’re determined by the cognitive styles of individuals and the cultural conditioning they receive. Most of Less Wrong is robustly consequentialist because most people here share a particular cognitive style—we don’t have any grand insights into reality when it comes to normative theory.
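As a toy sketch of the “cluster of heuristics, not a single utility function” claim above (the heuristics and weights are invented for illustration only), the same act can get opposite verdicts purely because the weighting differs between evaluators, with no disagreement about any fact:

```python
# Three invented heuristics; each scores an act from a description of its facts.
HEURISTICS = {
    "harm":     lambda act: -1.0 if act["causes_harm"] else 0.0,
    "fairness": lambda act: -1.0 if act["unequal_split"] else 0.0,
    "loyalty":  lambda act: +1.0 if act["helps_own_group"] else 0.0,
}

def attitude(act, weights):
    """A kludgey weighted sum of heuristic outputs, not a universal utility function."""
    return sum(weights[name] * h(act) for name, h in HEURISTICS.items())

# Both evaluators agree on every fact about the act...
act = {"causes_harm": False, "unequal_split": True, "helps_own_group": True}

# ...but weight the heuristics differently.
alice = {"harm": 1.0, "fairness": 0.75, "loyalty": 0.25}  # fairness-dominant
bob   = {"harm": 1.0, "fairness": 0.25, "loyalty": 0.75}  # loyalty-dominant

print(attitude(act, alice))  # -0.5: disapproves
print(attitude(act, bob))    #  0.5: approves
```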
E.g., a realist position on ghosts doesn’t include the position that a “ghost” is a kind of hallucination people have, even though in that case there is something (the hallucination) that exists.
I see, thanks for that distinction! I now need to reread parts of the metaethics sequence since I believe I came away with the thesis that morality is real in this sense… That is, that morality is real because we have bits of code (evolutionary, mental, etc) that output positive or negative feelings about different states of the universe and this code is “real” even if the positive and negative doesn’t exist external to that code.
So I find it weird that anyone thinks they can be described by something like preference utilitarianism or Kantian deontology. Those are the kind of parsimonious, elegant theories that we expect to find governing natural laws, not culturally and biologically evolved structures.
I agree...
That makes these grand theories about these attitudes silly to argue about: positions aren’t determined by things in the universe or by logic. They’re determined by the cognitive styles of individuals and the cultural conditioning they receive.
and I don’t disagree with this. I do hope/half expect that there should be some patterns to our attitudes, not as simplistic as natural laws but perhaps guessable to someone who thought about it the right way.
Thanks for describing your positions in more detail.
I think the standard LW argument for there being only one morality is based on the psychological unity of mankind.
I think you’re mixing up CEV with morality. CEV is an instance of the strategy “cooperate with humans” in some sort of AI-building prisoner’s dilemma. It gives the AI some preferences, and the only guarantee that those preferences will be good is that humans are similar.
There is “only one” “morality” (kinda) because when I say “this is right” I am executing a function, and functions are unique-ish. But Me.right can be different from You.right. You just happen to be wrong sometimes, because You.right isn’t right, because when I say right I mean Me.right.
So that “good” from the first paragraph would be Me.good, not CEV.good.
You don’t think morality should just be CEV?
It is a factual statement that when I say something is “right,” I don’t mean CEV.right, I mean Me.right, and I’m not even particularly trying to approximate CEV.
A “winning” CEV should result in people with wildly divergent moralities all being deliriously happy.
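As a small sketch of the Me.right / You.right framing above (the Agent class and everything in it is hypothetical, invented only to illustrate the indexing), saying “right” runs the speaker’s own evaluation function, and pointing that function at a counterfactual, Omega-altered world does not swap in that world’s inhabitants’ function:

```python
class Agent:
    def __init__(self, name, disapproved):
        self.name = name
        self.disapproved = set(disapproved)  # this agent's actual, current criteria

    def right(self, act, world):
        # Judges acts in `world` by *this* agent's criteria. The attitudes of the
        # people inside `world` are deliberately ignored: that is the whole point
        # of saying "when I say right I mean Me.right".
        return act not in self.disapproved

me = Agent("Me", disapproved={"murder"})
you = Agent("You", disapproved=set())
omega_altered_world = {"inhabitants_approve_of": {"murder"}}

print(me.right("murder", omega_altered_world))   # False: still wrong by Me.right
print(you.right("murder", omega_altered_world))  # True:  You.right disagrees with Me.right
```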