Out of curiosity, take the original, basic A case on its own. What rational argument, metaethical or otherwise, can you see for making A act in a consequentialist manner? Assume for the sake of argument that while getting A to act in such a manner is realistic, making him not feel guilty isn't, so you're trying to get him to act despite his guilt.
A- The fact that different human beings have different values at a core level implies that a single, unified human ethical theory is impossible. Eliezer, at the very least, has provided no reasonable argument for why the differences between human feelings should be skipped over. Even if he has, I don’t see how he can reconcile this with “The Moral Void”.
I’ll repeat again- Eliezer has a problem with any case where he wants a person to act against their moral feelings of right and wrong.
B- The wireheading argument is a refutation of your argument that humans should self-modify to eliminate what you see as problematic parts of their metaethics. I am comparing such things to wireheading.
As for your claims on the metaethics: Point 1 is so obvious that, although some people don't know it, it is hardly worth mentioning in serious philosophical discussion. Point 3 I agree with. Point 4 is technically correct, but if we define "choose" in the sense used on LessWrong, rather than the sense used by those who believe in free will, then it is safe to say that humans can choose to a small extent. Point 2 is correct in that we don't spontaneously decide to have an ethics and then start following it, but you may mean something else by it.
However, what you miss is that part of Eliezer's system of metaethics is the implicit assumption that "ethics" is a field in which ethics for all human beings can be talked about without trouble. Both this assumption and his basic reason to be moral must hold for his metaethics to work. I demonstrate that no deontological, consequentialist, or virtue-ethical system (virtue ethics is far more bunk than deontology or consequentialism for a variety of reasons, so I mention it only to forestall nitpicking) is compatible with the basic reason to be moral at the core of Eliezer's system- which amounts to the idea that humans WANT to be moral- if he is also going to assume that there is a universal ethics.
This is why I contend my metaethical system is better. As a description of how humans see ethical claims (which is generally a simplistic, unreflective idea of Right or Wrong) it doesn't work, but it fulfills the more important role of describing how the prescriptive parts of ethics actually work, to the extent that a prescriptive theory of ethics as a question of fact can be coherent.
We seem to be talking past each other. I'm not entirely sure where the misunderstanding lies, but I'll give it one more shot.
Nobody’s arguing for consequentialism. Nobody’s saying that agent A “should” do the thing that makes A feel guilty. Nobody’s saying that A should self-modify to remove the guilt.
You seem to have misconstrued my claim that rational agents should strive to be self-modifying. I made no claim that agents “should” self-modify to eliminate “problematic parts of their metaethics”. Rather, I point out that many agents will find themselves inconsistent and can often benefit from making themselves consistent. Note that I explicitly acknowledge the existence of agents whose values prevent them from making themselves consistent, and acknowledge that such agents will be frustrated.
All of this seems obvious. Nobody’s trying to convince you otherwise. It’s still not metaethics.
what you miss is that part of Eliezer’s system of metaethics is the implicit assumption that “ethics” is a field in which ethics for all human beings can be talked about without trouble
Perhaps this is the root of the misunderstanding. I posit that the metaethics sequence makes no such assumption, and that you are fighting a phantom.
Other people on this website seem to think I'm not fighting a phantom, and that the Metaethics sequence really does talk of an ethics universal to almost all humans, with psychopaths being a rare exception.
One of my attacks on Eliezer is for inconsistency- he argues for consequentialism and for a metaethics whose logical conclusion is deontology.
How can you describe somebody as "benefiting" unless you define the set of values from whose perspective they benefit? If it is their own, this is probably not correct. Besides, inconsistency is itself a kind of problematic metaethics.
Other people on this website seem to think I’m not fighting a phantom
Feel free to take it up with them :-)
And how is it not metaethics?
Metaethics is about the status of moral claims. It’s about where “should”, “good”, “right”, “wrong” etc. come from, their validity, and so on. What a person should do in any given scenario (as in your questions above) is pure ethics.