Why? That’s an assertion—it won’t convince anyone who doesn’t already agree with you. And you’re using two meanings of the word “bad”—an unpleasant subjective experience, and badness according to a moral system. Minds in general need not have moral systems, or conversely may lack hedonistic feelings, making the argument incomprehensible to them.
I slightly disagree with that on a personal moral level, and entirely disagree with the assertion that it’s a logical transition.
Could you explain that at greater length for me?
I have a personal moral system that isn’t too far removed from the one you’re espousing (with a bit more emphasis on preference). However, I do not assume that this moral system can be deduced from universal or logical principles, for the reasons stated above. Most humans will have moral systems not too far removed from ours (in the sense of Kolmogorov complexity: there are many human cultural universals, and our moral instincts are generally similar), but this isn’t a logical argument for their correctness.
Why? That’s an assertion—it won’t convince anyone who doesn’t already agree with you. And you’re using two meanings of the word “bad”—an unpleasant subjective experience, and badness according to a moral system.
If it is a bad occurrence, then it falls under the definition of ethics, at least as I see it (or as this dictionary has it, although a dictionary meaning is not authoritative): normative ethics defines what is good and bad (values), and applied ethics is about bringing about the good and avoiding the bad. It seems to be a matter of including something in a verbal definition, so it seems correct. Moral realism would follow. That is not undesirable but helpful, since anti-realism implies that our values are not really valuable, but just a fiction.
Minds in general need not have moral systems, or conversely may lack hedonistic feelings, making the argument incomprehensible to them.
I agree; this would be a special case of incomplete knowledge about conscious animals. It would be possible, for instance, in some artificial intelligences, but they might learn about it indirectly by observing animals and humans, and by coming into contact with human culture in various forms. Otherwise, they might become moral anti-realists.
I have a personal moral system that isn’t too far removed from the one you’re espousing (with a bit more emphasis on preference).
Could you explain this emphasis on preference a bit?
If it is a bad occurrence, then it falls under the definition of ethics, at least as I see it (or as this dictionary has it, although a dictionary meaning is not authoritative): normative ethics defines what is good and bad (values), and applied ethics is about bringing about the good and avoiding the bad.
Which is exactly why I critiqued using the word “bad” for the conscious experiences, and used “negative” or “unpleasant” instead: words which describe the conscious experience in a similar way without sneaking in normative claims.
I have a personal moral system that isn’t too far removed from the one you’re espousing (with a bit more emphasis on preference).
Could you explain this emphasis on preference a bit?
Er, nothing complex—in my ethics, there are cases where preferences trump feelings (eg experience machines) and cases where feelings trump preferences (eg drug users who are very unhappy). That’s all I’m saying.
“Bad”, “negative”, and “unpleasant” all possess a partial semantic correspondence, which justifies their being value terms.
The normative claims in this case need not be definitive and overriding. Perhaps that is where your resistance to accepting them comes from. In moral realism, a justified preference or an instrumental/indirect value that weighs more can overpower a direct feeling as well. That justified preference will ultimately be reducible to direct feelings in the present or in the future, for oneself or for others, though.
Could you give me examples of any reasonable preferences that could not be reduced to good and bad feelings in that sense?
Anyway, there is also the argument from personal identity, which calls for equalizing values across all subjects (valued equally, ceteris paribus) and across their reasoning, where contextually equivalent. This could in itself be a partial refutation of the orthogonality thesis: a refutation in theory, and for autonomous and free general superintelligent agents, but not necessarily for imprisoned and tampered-with ones.
I think that this is an important point: the previously argued normative badness of directly accessible bad conscious experiences is not absolute and definitive, even in terms of justifying actions. It should weigh on the scale with all the other factors involved, even indirect and instrumental ones that could affect intrinsic goodness or badness only in a distant and unclear way.
“Bad”, “negative”, and “unpleasant” all possess a partial semantic correspondence, which justifies their being value terms.
Then they are no longer purely descriptive, and I can’t agree that they are logically or empirically true.
Apart from that, what do you think of the other points? If you wish, we could continue the conversation in another online medium.
Certainly, but I don’t have much time for the next few weeks :-(
Send me a message in mid-April if you’re still interested!