None of this requires that you pretend to know more than you do.
I don’t have to pretend to know whether I’m in a simulation or not. I can admit my ignorance, and then act, knowing that I do not know for certain if my actions will serve.
I think of this in levels:
I can’t prove that logic is true. So I don’t claim to know it is with probability 1. I don’t pretend to.
But, IF it is true, then my reasonings are better than nothing for understanding things.
So, my statements end up looking something like: “(IF logic works) the fact that this seems logical means it’s probably true.”
But, I don’t really know if my senses are accurate messengers of knowledge (Matrix, etc). That’s on another level. But I don’t have to pretend that I know they are, and I don’t.
So my statements end up looking like: “((IF logic works) and my senses are reporting accurately) the fact that this seems logical means it’s probably true.”
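One compact way to write that nesting (my own sketch of the structure, not something from the original comments), with $L$ standing for “logic is valid” and $S$ for “my senses report accurately”:

$$P(\text{this claim is true} \mid \text{it seems logical},\ L,\ S)\ \text{is high, while}\ P(L)\ \text{and}\ P(S)\ \text{are left unassigned.}$$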
We just have to learn to act amid uncertainty. We don’t have to pretend that we know anything to do so.
Morals are not arbitrary. I’m just talking about the Sequences’ take on morality. If you care about a different set of things, then morality doesn’t follow that change, it just means that you now care about something other than morality.
If you love circles, and then start loving ovals, that doesn’t make ovals into circles, it just means you’ve stopped caring about circles and started caring about something else.
Morality is a fixed equation.
To say you “should” be moral is tautological. It’s just saying you “should” do what you “should” do.
Yes, if you can’t solve the presupposition problem, the main alternative is to carry on as before, at the object level, but with less confidence at the meta level. But who is failing to take that advice? As far as I can see, it is Yudkowsky. He makes no claim to have solved the problem of unfounded foundations, but continues putting very high probabilities on ideas he likes, and vehemently denying ones he doesn’t.
To say you “should” be moral is tautological. It’s just saying you “should” do what you “should” do.
Ok. You should be moral. But there is no strong reason why you should follow arbitrary values. Therefore, arbitrary values are not morality.
Morals are not arbitrary. I’m just talking about the Sequences’ take on morality. If you care about a different set of things, then morality doesn’t follow that change, it just means that you now care about something other than morality.
So what are the correct moral values?
Well, in the normal course of life, on the object level, some things are more probable than others.
If you push me on whether I REALLY know they’re true, then I admit that my reasoning and data could be confounded by a Matrix or whatever.
Maybe it’s clearer like so:
Colloquially, I know how to judge relative probabilities.
Philosophically (strictly), I don’t know the probability that any of my conclusions are true (because they rest on concepts I don’t pretend to know are true).
About the moral values thing, it sounds kinda like you haven’t read the sequence on metaethics. If not, then I’m glad to be the one to introduce you to the idea, and I can give you the broad strokes in a few sentences in a comment, but you might want to ponder the sequence if you want more.
Morality is a set of things humans care about. Each person has their own set, but as humans with a common psychology, those sets greatly overlap, creating a general morality.
But, humans don’t have access to our source code. We can’t see all that we care about. Figuring out the specific values, and how much to weight them against each other is just the old game of thought experiments and considering trade-offs, etcetera.
Nothing that can be reduced to some one-word or one-sentence idea that sums it all up. So we don’t know what all the values are or how they’re weighted. You might read about “Coherent Extrapolated Volition,” if you like.
Morality is not arbitrary any more than circularity is arbitrary. Both refer to a specific thing with specific qualities. If you change the qualities of the thing, that doesn’t change morality or change circularity, it just means that the thing you have no longer has morality, no longer has circularity.
A great example is Alexander Wales’ short story “The Last Christmas” (particularly chapters 2 and 3). See below.
The elves care about Christmas Spirit, not right and wrong, or morality, or fairness.
When it’s pointed out that what they’re doing isn’t fair, they don’t protest, they just say “We don’t care. Fairness isn’t part of the Christmas Spirit.”
And we might say, “Santa being fat? We don’t care, that’s not part of morality. We don’t deny that it’s part of the Christmas Spirit; we just don’t care that it is.”
If aliens care about different things, it’s not about our morality versus “their” morality. It would be about THE morality versus THE Glumpshizzle. The paper-clipper is also used as an example. It doesn’t care about morality. It cares about clippiness.
The moral thing and the clippy thing to do are both fixed calculations. Once you know the answer, it’s a feature of your mind if you happen to respond to morality, or clippiness, or Glumpshizzle, or Christmas Spirit.
If anybody thinks I’ve misunderstood part of this, please, do let me know. I’ve tried to understand, and would like to correct any mistakes if I have them.
“You wouldn’t even make any arguments for why you should live?” asked Charles.
“My life is meaningless in the face of the Christmas spirit,” said Matilda.
“But if it didn’t matter to the Christmas spirit,” said Charles, “If I just wanted to see you die for fun?”
“Allowing you to satisfy your desires is part of maintaining the Christmas spirit, Santa,” said Matilda.
“It’s unfair,” said Charles.
“Life is unfair,” said Matilda.
“Does it have to be?” asked Charles. “Is that the Christmas spirit?”
“I don’t know,” said Matilda. “Fairness doesn’t enter into it, I don’t think. Why should Christmas be fair if life isn’t fair?”
http://alexanderwales.com/the-last-christmas-chapter-1-2/
About the moral values thing, it sounds kinda like you haven’t read the sequence on metaethics
More a case of read but not believed.
Morality is a set of things humans care about. Each person has their own set, but as humans with a common psychology, those sets greatly overlap, creating a general morality
That isn’t saying anything cogent. If moral values are some specific subset of human values, you haven’t said what the criterion of inclusion in that subset is. On the other hand, if you are saying all human values are moral values, that is incredible:
Human values can conflict.
Morality is a decision theory, it tells you what you should do.
A ragbag of conflicting values cannot be used to make a definitive decision.
Therefore morality is not a ragbag of conflicting values.
Perhaps you think CEV solves the problem of value conflict. But if human morality is broadly defined, then the CEV process will be doing almost all the lifting, and CEV is almost entirely unspecified. On the other hand, if you narrow down the specification of human values, you increase the amount of arbitrariness.
Morality is not arbitrary any more than circularity is arbitrary. Both refer to a specific thing with specific qualities. If you change the qualities of the thing, that doesn’t change morality or change circularity, it just means that the thing you have no longer has morality, no longer has circularity.
Your theory of morality is arbitrary because you are not explaining why only human (twenty-first-century? Western?) values count as morality. Rather, you are using “morality” as something like a place name or personal name. No reason need be given why Istanbul is Istanbul; that’s just a label someone put on an area of Earth’s surface.
But morality cannot be a matter of arbitrary labeling, because it is about having a principled reason why you should do one thing and not another. However, no such reason could be founded on an arbitrary naming ceremony! No more than everyone should obey me just because I dub myself the King of the World! To show that human values are morality, you have to show that they should be followed, which you don’t do just by calling them morality. That doesn’t remove the arbitrariness in the right way.
Because the map is not the territory, normative force does not come from labels or naming ceremonies. You can’t change what is by relabelling it, and you can’t change what ought to be that way either.
Note how we have different rules for proper names and meaningful terms. You can name things as you wish, because nothing follows from it, because names are labels, not contentful terms. You can make inferences from contentful terms, but you should apply them carefully, since argument from tendentiously applied terms is a common form of bad argument. Follow the rules and you have no causal series going from map to territory. Choose one from column A and one from column B, and you do.
Morality is a fixed equation.
What you are describing isn’t fixed in the expected sense of being derivable from first principles.
If aliens care about different things, it’s not about our morality versus “their” morality. It would be about THE morality versus THE Glumpshizzle. The paper-clipper is also used as an example. It doesn’t care about morality. It cares about clippiness.
How does that pan out in practice? If (1) humans have the one true morality, then we should apply it, and even enforce it on others. If (2) morality is just a set of arbitrary values, there is little reason humans should follow it, and even less justification to impose it.
These are contradictory ideas, yet you are asserting both of them!
BTW, denial of your claim that morality is a unique but arbitrary thing doesn’t entail believing that clipping is morality. You can have N things that are morality, according to some criteria, without Clipping being amongst them.
Moreover, alternative theories don’t have to disclaim any connection between morality and human values.
[Disclaimer: My ethics and metaethics are not necessarily the same as those of Bound_up; in fact I think they are not. More below.]
Human values can conflict. Morality [...] tells you what you should do. A ragbag of conflicting values cannot be used to make a definitive decision. Therefore morality is not a ragbag of conflicting values.
I think this argument, in order to work, needs some further premise to the effect that a decision only counts as “definitive” if it is universal, if in some suitable sense everyone would/should arrive at the same decision; and then the second step (“Morality tells you what you should do”) needs to say explicitly that morality does this universally.
In that case, the argument works—but, I think, it works in a rather uninteresting way because the real work is being done by defining “morality” to be universal. It comes down to this: If we define “morality” to be universal, then no account of morality that doesn’t make it universal will do. Which is true enough, but doesn’t really tell us anything we didn’t already know.
I think I largely agree with what I take to be one of your main objections to Eliezer’s “metaethics sequence”. I think Eliezer’s is a nonrealist theory masquerading as a realist theory. He sketches, or at least suggests the existence of, some set of moral values broadly shared by humanity—so far, so good, though as you say there are a lot of details to be filled in and it may or may not actually be possible to do that. He then says “let us call this Morality, and let us define terms like should and good in terms of these values”—which is OK in so far as anyone can define any words however they like, I guess. And then he says “and this solves a key problem of metaethics, namely how we can see human values as non-arbitrary even though they look arbitrary: human values are non-arbitrary because they are what words like should and right and bad are about”—which is mere sophistry, because if you were worried before about human values being arbitrary then you should be equally worried after his definitional move about the definitions of terms like should being arbitrary.
But I don’t think (as, IIUC, Eliezer and Bound_up also don’t think) we need to be terribly worried about that. Supposing—and it’s a big supposition—that we are able to identify some reasonably coherent set of values as “human moral values” via CEV or anything else, I don’t think the arbitrariness of this set of values is any reason why we shouldn’t care about it, strive to live accordingly, program our superpowerful superintelligent godlike AIs to use it, etc. Yes, it’s “just a label”, but it’s a label distinguished by being (in some sense that depends on just where we get this set of values from) what we and the rest of the human race care about.
I think this argument, in order to work, needs some further premise to the effect that a decision only counts as “definitive” if it is universal,
Ok, but it would have been helpful to have argued the point.
if in some suitable sense everyone would/should arrive at the same decision; and then the second step (“Morality tells you what you should do”) needs to say explicitly that morality does this universally.
AFAICT, it is only necessary to have the same decision across a certain reference class, not universally.
In that case, the argument works—but, I think, it works in a rather uninteresting way because the real work is being done by defining “morality” to be universal. It comes down to this: If we define “morality” to be universal, then no account of morality that doesn’t make it universal will do. Which is true enough, but doesn’t really tell us anything we didn’t already know.
Who is defining morality to be universal? I don’t think it is me. I think my argument works in a fairly general sense. If morality is a ragbag of values, then in the general case it is going to contain contradictions, and that will stop you making any kind of decision based on it.
I disagree with this objection to Eliezer’s ethics because I think the distinction between “realist” and “nonrealist” theories is a confusion that needs to be done away with. The question is not whether morality (or anything else) is “something real,” but whether or not moral claims are actually true or false. Because that is all the reality that actually matters: tables and chairs are real, as far as I am concerned, because “there is a table in this room” is actually true. (This is also relevant to our previous discussion about consciousness.)
And in Eliezer’s theory, some moral claims are actually true, and some are actually false. So I agree with him that his theory is realist.
I do disagree with his theory, however, insofar as it implies that “what we care about” is essentially arbitrary, even if it is what it is.
The question is not whether morality (or anything else) is “something real”, but whether or not moral claims are actually true or false.
That (whether moral claims are actually true or false) is exactly how I distinguish moral realism from moral nonrealism, and I think this is a standard way to understand the terms.
But any nonrealist theory can be made into one in which moral claims have truth values by redefining the key words; my suggestion is that Eliezer’s theory is of this kind, that it is nearer to a straightforwardly nonrealist theory, which it becomes if e.g. you replace his use of terms like “good” with terms that are explicit about what value system they reference (“good according to human values”), than to typical more ambitious realist theories that claim that moral judgements are true or false according to some sort of moral authority that goes beyond any particular person’s or group’s or system’s values.
I agree that the typical realist theory implies more objectivity than is present in Eliezer’s theory. But in the same way, the typical non-realist theory implies less objectivity than is present there. E.g. someone who says that “this action is good” just means “I want to do this action” has less objectivity, because it will vary from person to person, which is not the case in Eliezer’s theory.
I think we are largely agreed as to facts and disagree only on whether it’s better to call Eliezer’s theory, which is intermediate between many realist theories and many non-realist theories, “realist” or “non-realist”.
I’m not sure, though, that someone who says that “this is good” = “I want to do this” is really a typical non-realist. My notion of a typical non-realist—typical, I mean, among people who’ve actually thought seriously about this stuff—is somewhat nearer to Eliezer’s position than that.
Anyway, the reason why I class Eliezer’s position as non-realist is that the distinction between Eliezer’s position and that of many (other?) non-realists is purely terminological—he agrees that there are all these various value systems, and that if ours seems special to us that’s because it’s ours rather than because of some agent-independent feature of the universe that picks ours out in preference to others, but he wants to use words like “good” to refer to one particular value system—whereas the distinction between his position and that of most (other?) realists goes beyond terminology: they say that the value system they regard as real is actually built into the fabric of reality in some way that goes beyond the mere fact that it’s our (or their) value system.
You may weight these differences differently.
I think he wants a system which works like realism, in that there are definite answers to ethical questions (“fixed”, “frozen”), but without spookiness.
Yudkowsky’s theory entails the same problem as relativism: if morality is whatever people value, and if what people happen to value is intuitively immoral (slavery, torture, whatever), then there’s no fixed standard of morality. The label “moral” has been placed on a moving target. (Standard relativism usually has this problem synchronously, i.e. different communities are said to have different but equally valid moralities at the same time, but it makes little difference if you are asserting that the global community has different but equally valid moralities at different times.)
You can avoid the problems of relativism by setting up an external standard, and there are many theories of that type, but they tend to have the problem that the external standard is not naturalistic: God’s commands, the Form of the Good, and so on. I think Yudkowsky wants a theory that is non-arbitrary and also naturalistic. I don’t think he arrives at a single theory that does both. If the Moral Equation is just a label for human intuition, then it suffers from all the vagaries of the original theory, labeling values as moral. If the Moral Equation is something ideal and abstract, why can’t aliens partake?
I agree.
Colloquially, I know how to judge relative probabilities.
Philosophically (strictly), I don’t know the probability that any of my conclusions are true (because they rest on concepts I don’t pretend to know are true).
Again, my point is that to do justice to philosophical doubt, you need to avoid high probabilities in practical reasoning, à la Taleb. But not everyone gets that. A lot of people think that using probability alone is sufficient.