So there is an objective measure for what’s “right” and “wrong” regardless of the frame of reference, there is such a thing as correct, individual independent ethics, but other people may just decide not to give a hoot, using some other definition of ethics?
Well, let’s define a series of ethics, from ethics_1 to ethics_n. Let’s call your system of ethics which contains a “correct” conclusion such as “murder is WRONG”, say, ethics_211412312312.
Why should anyone care about ethics_211412312312?
(If you don’t mind, let’s consolidate this into the other sub-thread we have going.)
If what they have can’t do what ethics is supposed to do, why call it ethics?
What is ethics supposed to do?
Reconcile one’s preferences with those of others.
That’s one specific goal you ascribe to your ethics-subroutine; the definition entails no such ready answer.
Ethics:
“Moral principles that govern a person’s or group’s behavior”
“The moral correctness of specified conduct”
Moral:
“of or relating to principles of right and wrong in behavior”
What about Ferengi ethics?
I don’t know what you mean. Your dictionary definitions are typically useless for philosophical purposes.
ETA
Well...what?
You are saying “the (true, objective, actual) purpose of ethics is to reconcile one’s preferences with those of others”.
Where do you take that from, and what makes it right?
I got it from thinking and reading. It might not be right. It’s a philosophical claim. Feel free to counterargue.
“Should” is an ethical word. To use your (rather misleading) naming convention, it refers to a component of ethics_211412312312.
Of course one should not confuse this with “would”. There’s no reason to expect an arbitrary mind to be compelled by ethics.
No, it’s much wider than that. There are rational and instrumental shoulds.
ETA:
Depends how arbitrary. Many philosophers think a rational mind could be compelled by ethical arguments...that ethical-should can be built out of rational-should.
Just as one should not expect an arbitrary mind, with its own notions of “right” and “wrong”, to yield to any human’s proselytizing about objectively correct ethics (“murder is bad”), or to any attempt at providing a “correct” solution for that arbitrary mind to adopt.
The ethics as defined by China, or by an arbitrary mind, have as much claim to being correct as ours. There is no axiom-free metaethical framework which would provide the “should” in “you should choose ethics_211412312312”; that was my point. Calling some church’s (or other group’s) ethical doctrine objectively correct for all minds doesn’t make a whit of difference, and doesn’t go beyond “my ethics are right! no, mine are!”
But humans can proselytise each other, despite their different notions of right and wrong. You seem to be assuming that morally-right and morally-wrong are fundamentals. But if they are outcomes of reasoning and facts, then they can be changed by the presentation of better reasoning and previously unknown facts, as happens when one person morally exhorts another. I think you need to assume that your arbitrary mind has nothing in common with a human one, not even rationality.
Does that mean that, in your opinion, if we constructed an AI mind that uses a rational reasoning mechanism (such as Bayes), we wouldn’t need to worry, since we could persuade it to act in a morally correct way?
I’m not sure if that is necessarily true, or even highly likely. But it is a possibility which is extensively discussed in non-LW philosophy that is standardly ignored or bypassed on LW for some reason. As per my original comment. Is moral relativism really just obviously true?
Depends on how you define “moral relativism”. Kawomba thinks a particularly strong version is obviously true, but I think the LW consensus is that a weak version is.
I don’t think there is a consensus, just a belief in a consensus. EY seems unable or unwilling to clarify his position even when asked directly.
If someone defines ethics differently, then WHAT are the common characteristics that make you call them both “ethics”? You surely don’t mean that they just happened to use the same sound or the same letters and that they may be meaning basketball instead? So there must already exist some common elements you are thinking of that make both versions logically categorizable as “ethics”.
What are those common elements?
What would it mean for an alien to e.g. define “tetration” differently than we do? Either they define it in the same way, or they haven’t defined it at all. To define it differently means that they’re not describing what we mean by tetration at all.
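For concreteness: tetration has exactly one standard definition, iterated exponentiation, so any “different definition” is simply a definition of something else. A minimal sketch (the function name and signature here are just illustrative):

```python
def tetration(a: int, n: int) -> int:
    """a tetrated n times: a ** (a ** (... ** a)) with n copies of a,
    evaluated right-to-left, as exponent towers are."""
    result = 1
    for _ in range(n):
        result = a ** result
    return result

print(tetration(2, 3))  # 2 ** (2 ** 2) = 16
print(tetration(3, 2))  # 3 ** 3 = 27
```

An alien who computed anything other than 16 for “2 tetrated 3 times” would not be defining tetration differently; they would be talking about a different operation altogether.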