Who gets to set the axioms and rules for ethicality?
Axioms are what we use to logically pinpoint what it is we are talking about. If our world and theirs have different axioms for “ethicality”, then they simply don’t have what we mean by “ethicality”, and we don’t have what they mean by the word “ethicality”.
Our two worlds would then not actually disagree about ethics the concept; they would instead disagree about “ethics” the word, much like ‘tier’ means one thing in English and another thing in German.
Unfortunately, words of natural language have the annoying property that it’s often very hard to tell if people are disagreeing about the extension or the meaning. It’s also hard to tell what disagreement about the meaning of a word actually is.
The analogy is flawed. German and English speakers don’t disagree about the word (conceived as a string of phonemes; otherwise “tier” and “Tier” are not identical), and it’s not at all clear that disagreement about the meaning of words is the same thing as speaking two different languages. It’s certainly phenomenologically pretty different.
I do agree that reducing it to speaking different languages is one way to dissolve disagreement about meaning. But I’m not convinced that this is the right approach. Some words are in acute danger of being dissolved along with the question: it may turn out that almost everyone has their own meaning for the word, and everybody is talking past each other. It also leaves you needing to explain where the persistent illusion of disagreement comes from. The illusion persists even when you explain to people that they’re just speaking two different languages; they’ll often say no, they’re not, they’re speaking the same language, but the other person is using the word wrongly.
Of course, all of this is connected to the problem that nobody seems to know what kind of thing a meaning is.
So there is an objective measure of what’s “right” and “wrong” regardless of the frame of reference (there is such a thing as correct, individual-independent ethics), but other people may just decide not to give a hoot and use some other definition of ethics?
Well, let’s define a series of ethics, from ethics1 to ethicsn. Let’s call your system of ethics, the one which contains a “correct” conclusion such as “murder is WRONG”, say, ethics211412312312.
Why should anyone care about ethics211412312312?
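To make the indexing device concrete, here is a minimal sketch in Python (purely illustrative; the entries, verdicts, and index numbers are made up for this example): each system in the series is just a verdict function, and the numbering itself carries no normative force.

```python
# Toy model: treat each "ethics" in the series as a function from
# actions to verdicts. All entries below are illustrative placeholders.
ethics = {
    1: lambda action: "wrong" if action == "murder" else "permissible",
    2: lambda action: "permissible",  # a system indifferent to murder
}

# Both systems return an answer; nothing in the list itself supplies
# the "should" that would tell an agent which entry to consult.
print(ethics[1]("murder"))  # -> wrong
print(ethics[2]("murder"))  # -> permissible
```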
(If you don’t mind, let’s consolidate this into the other sub-thread we have going.)
If what they have can’t do what ethics is supposed to do, why call it ethics?
What is ethics supposed to do?
Reconcile one’s preferences with those of others.
That’s one specific goal that you ascribe to your ethics-subroutine; the definition entails no such ready answer.
Ethics:
“Moral principles that govern a person’s or group’s behavior”
“The moral correctness of specified conduct”
Moral:
“of or relating to principles of right and wrong in behavior”
What about Ferengi ethics?
I don’t know what you mean. Your dictionary definitions are typically useless for philosophical purposes.
ETA:
Well...what?
You are saying “the (true, objective, actual) purpose of ethics is to reconcile one’s preferences with those of others”.
Where do you get that from, and what makes it right?
I got it from thinking and reading. It might not be right. It’s a philosophical claim. Feel free to counterargue.
“Should” is an ethical word. To use your (rather misleading) naming convention, it refers to a component of ethics211412312312.
Of course one should not confuse this with “would”. There’s no reason to expect an arbitrary mind to be compelled by ethics.
No, it’s much wider than that. There are rational and instrumental shoulds.
ETA: “There’s no reason to expect an arbitrary mind to be compelled by ethics.”
Depends how arbitrary. Many philosophers think a rational mind could be compelled by ethical arguments...that ethical-should can be built out of rational-should.
Likewise, one should not expect an arbitrary mind with its own notions of “right” or “wrong” to yield to any human’s proselytizing about objectively correct ethics (“murder is bad”), or to any human trying to provide a “correct” solution for that mind to adopt.
The ethics as defined by China, or by an arbitrary mind, have as much claim to being correct as ours. There is no axiom-free metaethical framework which would provide the “should” in “you should choose ethics211412312312”; that was my point. Calling some church’s (or other group’s) ethical doctrine objectively correct for all minds doesn’t make a whit of difference, and doesn’t go beyond “my ethics are right! no, mine are!”
But humans can proselytise each other, despite their different notions of right and wrong. You seem to be assuming that morally-right and morally-wrong are fundamentals. But if they are outcomes of reasoning and facts, then they can be changed by the presentation of better reasoning and previously unknown facts, as happens when one person morally exhorts another. I think you need to assume that your arbitrary mind has nothing in common with a human one, not even rationality.
Does that mean that, in your opinion, if we constructed an AI mind that uses a rational reasoning mechanism (such as Bayesian inference), we wouldn’t need to worry, since we could persuade it to act in a morally correct way?
I’m not sure that is necessarily true, or even highly likely. But it is a possibility which is extensively discussed in non-LW philosophy and standardly ignored or bypassed on LW for some reason, as per my original comment. Is moral relativism really just obviously true?
Depends on how you define “moral relativism”. Kawoomba thinks a particularly strong version is obviously true, but I think the LW consensus is that a weak version is.
I don’t think there is a consensus, just a belief in a consensus. EY seems unable or unwilling to clarify his position even when asked directly.
“The ethics as defined by China, or by an arbitrary mind, have as much claim to being correct as ours.”
If someone defines ethics differently, then WHAT are the common characteristics that make you call them both “ethics”? You surely don’t mean that they just happened to use the same sound or the same letters, and that they might mean basketball instead? So there must already exist some common elements you are thinking of that make both versions logically categorizable as “ethics”.
What are those common elements?
What would it mean for an alien to e.g. define “tetration” differently than we do? Either they define it in the same way, or they haven’t defined it at all. To define it differently means that they’re not describing what we mean by tetration at all.
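For contrast with the ethics case, tetration really is pinned down by its defining clauses; here is a minimal sketch in Python (the function name is arbitrary):

```python
def tetration(a: int, n: int) -> int:
    """a^^n: a power tower of n copies of a.

    Pinpointed by two clauses:
        a^^1     = a
        a^^(n+1) = a ** (a^^n)
    Anything satisfying different clauses is simply not what
    we mean by tetration.
    """
    if n == 1:
        return a
    return a ** tetration(a, n - 1)

print(tetration(2, 4))  # 2 ** (2 ** (2 ** 2)) = 65536
```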
Cannot upvote enough.
Also, pretty sure I’ve made this exact argument to Kawoomba before, but I didn’t phrase it as well, so good luck!
Axioms have a lot to do with truth, and little to do with meaning.
Would that make the Euclidean axioms just “false” according to you, instead of meaningfully defining the concept of a Euclidean space, a concept that turned out not to correspond exactly to reality, but is still both quite useful and certainly meaningful?
I first read about the concept of axioms as a means of logical pinpointing in this, and it struck me as a brilliant insight which may dissolve a lot of confusions.
Corresponding to reality is physical truth, not mathematical truth.
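One way to write out that distinction (a gloss in standard notation, not a claim from the thread):

```latex
\[
\text{mathematical truth:}\quad \mathrm{Ax}_{\mathrm{Euclid}} \vdash \varphi
\qquad\qquad
\text{physical truth:}\quad \text{physical space} \models \mathrm{Ax}_{\mathrm{Euclid}}\;?
\]
% The axioms meaningfully define Euclidean space either way; only the
% second, empirical question turned out to have the answer "no".
```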