While the physical world has no ethical value apart from conscious perceptions, conscious perceptions can be ethical value, and only by being good or bad conscious perceptions, or feelings. This seems to be so by definition, because ethical value is what is good or bad.
That a conscious experience can be a good or bad physical occurrence is also a reality which can be felt and known with the highest possible certainty.
A bit unclear, but I’m assuming you mean something like “we have good or bad (technically, pleasant or unpleasant) conscious experiences, and we know this with great certainty”. That seems fine.
This makes it rational, and an imperative, to follow it and care about it, to act in order to foster good conscious feelings and to prevent bad conscious feelings, because it is logical that this will make the universe better.
Why? This is the whole core of the disagreement, and you’re zooming over it way too fast. Even for ourselves, our wanting systems and our liking systems are not well aligned—we want things we don’t like, and vice-versa. A preference utilitarian would say our wants are the most important; you seem to disagree, focusing on the good/bad aspect instead. But what logical reason would there be to follow one or the other?
You seem to be making words do too much of the work. We have innate senses of positivity and negativity for certain experiences; we also have an innate sense that morality exists. But those together do not make positive experiences good “by definition” (nor does calling them “good” rather than “positive”).
But those are relatively minor points—if there was a single consciousness in the universe, then maybe your argument could get off the ground. But we have many current and potential consciousnesses, with competing values and conscious experiences. You seem to be saying that we should logically be altruists, because we have conscious experiences. I agree we should be altruists; but that’s a personal preference, and there’s no logic to it. Following your argument (consciousness before physics), one could perfectly well become a solipsist, believing only one’s own mind exists, and ignoring others. Or you could be a racist altruist, preferring certain individuals or conscious experiences. Or you could put all experiences together on an infinite number of comparative scales (there is no intrinsic measure to compare the quality of two positive experiences in different people).
But in a way, that’s entirely a moot point. Your claim is that a certain ethics logically follows from our conscious reality. There I must ask you to prove it. State your assumptions, show your claims, present the deductions. You’ll need to do that, before we can start critiquing your position properly.
Why? This is the whole core of the disagreement, and you’re zooming over it way too fast. Even for ourselves, our wanting systems and our liking systems are not well aligned—we want things we don’t like, and vice-versa. A preference utilitarian would say our wants are the most important; you seem to disagree, focusing on the good/bad aspect instead. But what logical reason would there be to follow one or the other?
Hi Stuart,
Indeed, wanting and liking do not always correspond, even from a neurological perspective. Wanting involves planning, and planning often involves error. We often want things mistakenly, whether for evolutionarily selected reasons, cultural reasons, or simply bad planning. Liking is what matters, because it can be immediately and directly determined to be good, with the highest certainty. This is an empirical confirmation of its value, while wanting is like an empty promise.
We have good and bad feelings associated with certain evolutionarily or culturally determined things. Theoretically, good and bad feelings could be associated with any inputs. The inputs don’t matter, nor does wanting necessarily matter, nor do innate intuitions of morality. The only thing that has direct value, which is empirically confirmed, is good and bad feelings.
if there was a single consciousness in the universe, then maybe your argument could get off the ground. But we have many current and potential consciousnesses, with competing values and conscious experiences.
Well noticed. That comment was not well elaborated and is not a complete explanation. Addressing the point you mention also requires considering the philosophy of personal identity, which I examine in my more complete essay on Less Wrong, and also in my essay Universal Identity.
But in a way, that’s entirely a moot point. Your claim is that a certain ethics logically follows from our conscious reality. There I must ask you to prove it. State your assumptions, show your claims, present the deductions. You’ll need to do that, before we can start critiquing your position properly.
I have a small essay written on ethics, but it’s a detailed topic, and my article may be too concise, assuming much previous reading on the subject. It is here. I propose that we instead focus on questions as they come up.
Liking is what matters, because it can be immediately and directly determined to be good, with the highest certainty.
That is your opinion. Others believe wanting is fundamental and rational, something that can be checked and explained and shared—while liking is a misleading emotional response (one that probably shows much less consistency, too).
How would you resolve the difference? They say something is more important, you say something else is. Neither of you disagrees about the facts of the world, just about what is important and what isn’t. What can you point to that makes this into a logical disagreement?
One argument is the one from empiricism, or verification. Wanting can be and often is wrong. Simple examples can show this, but I assume they won’t be needed, because you understand. Liking can be misleading in terms of motivation or in terms of the external object which is liked, but it cannot be misleading or wrong in itself, in that it is a good feeling. For instance, a person could like to use cocaine, and this might be misleading in terms of being a wrong motivation, one that in the long term would prove destructive and dislikable. However, immediately, in terms of the sensation of liking itself, and all else being equal, it is certainly good, and this is directly verifiable by consciousness.
Taking this into account, some would argue for wanting values X, Y, or Z, but not values A, B, or C. That is another matter. I’m arguing that good and bad feelings are the direct values that have validity and should be wanted. Other valid values are those that are instrumentally reducible to these, which are very many and account for most of what we do.
Liking can be misleading in terms of motivation or in terms of the external object which is liked, but it cannot be misleading or wrong in itself, in that it is a good feeling.
“Wanting can be misleading in terms of the long term or in terms of the internal emotional state with which it is connected, but it cannot be misleading or wrong in itself, in that it is a clear preference.”
Indeed, but what separates wanting and liking is that preferences can be wrong, since they require no empirical basis, while liking in itself cannot be wrong, and it has an empirical basis.
When something is rightfully wanted, that something has a justification. Liking, understood as good feelings, is one such justification; avoiding bad feelings is another; and this can be causally extended to include instrumental actions that will cause these in indirect ways.
Then how can wanting be wrong? They’re there, they’re conscious preferences (you can introspect and get them, just as liking), and they have as much empirical basis as liking.
And wanting can be seen as more fundamental—they are your preferences, and inform your actions (along with your world model), whereas using liking to take action involves having a (potentially flawed) mental model of what will increase your good experiences and diminish bad ones.
The game can be continued endlessly—what you’re saying is that your moral system revolves around liking, and that the arguments that this should be so are convincing to you. But you can’t convince wanters with the same argument—their convictions are different, and neither set of arguments is “logical”. It becomes a taste-based debate.
Sorry, I thought you already understood why wanting can be wrong.
Example 1: imagine a person named Eliezer walks to an ice cream stand, and picks a new flavor X. Eliezer wants to try the flavor X of ice cream. Eliezer buys it and eats it. The taste is awful and Eliezer vomits it. Eliezer concludes that wanting can be wrong and that it is different from liking in this sense.
Example 2: imagine Eliezer watches a movie in which some homophobic gangsters go about killing homosexuals. Eliezer gets inspired and wants to kill homosexuals too, so he picks up a knife, finds a nice-looking young man, and prepares to torture and kill him. Eliezer looks at the muscular body of the young man and starts to feel homosexual urges and desires, and instead he makes love with the homosexual young man. Eliezer concludes that he wanted something wrong and that he had been a bigot and a homosexual all along, liking men, but not wanting to kill them.
I understand why those examples are wrong. Because I have certain beliefs (broadly, but not universally, shared). But I don’t see how any of those beliefs can be logically deduced.
Quite a lot follows from “positive conscious experiences are intrinsically valuable”, but that axiom won’t be accepted unless you already partially agree with it anyway.
I don’t think that someone can disagree with it (good conscious feelings are intrinsically good; bad conscious feelings are intrinsically bad), because it would be akin to disagreeing that, for instance, the color green feels greenish. Do you disagree with it?
Because I have certain beliefs (broadly, but not universally, shared). But I don’t see how any of those beliefs can be logically deduced.
Can you elaborate? I don’t understand… Many valid wants or beliefs can ultimately be reduced to good and bad feelings, in the present or future, for oneself or for others, as instrumental values, such as peace, learning, curiosity, love, security, longevity, health, science...
I don’t think that someone can disagree with it (good conscious feelings are intrinsically good; bad conscious feelings are intrinsically bad), because it would be akin to disagreeing that, for instance, the color green feels greenish. Do you disagree with it?
I do disagree with it! :-) Here is what I agree with:
That humans have positive and negative conscious experiences.
That humans have an innate sense that morality exists: that good and bad mean something.
That humans have preferences.
I’ll also agree that preferences often (but not always) track the positive or negative conscious experiences of that human. That human impressions of good and bad sometimes (but not always) track positive or negative conscious experiences of humans in general, at least approximately.
But I don’t see any grounds for saying “positive conscious experiences are intrinsically (or logically) good”. That seems to be putting in far too many extra connotations, and moving far beyond the facts we know.
I agree with what you agree with.
I think that the argument for the intrinsic value (goodness or badness) of conscious feelings goes like this:
1. Conscious experiences are real, and are the most certain data about the world, because they are directly accessible and don’t depend on inference, unlike the external world as we perceive it. It would not be possible to dismiss conscious experiences as unreal, inferring that they are not part of the external world, since they are more certain than the external world is. The external world could be an illusion, and we could be living inside a simulated virtual world, in an underlying universe that is alien and has different physical laws.
2. Even though conscious experiences are representations (sometimes of external physical states, sometimes of abstract internal states), apart from what they represent, they do exist in themselves as real phenomena (likely physical).
3. Conscious experiences can be felt as intrinsically neutral, good, or bad in value, sometimes intensely so. For example, the bad value of having deep surgery without anesthesia is felt as intrinsically and intensely bad, and this badness is a real occurrence in the world. Likewise, an experience of extreme success or pleasure is intrinsically felt as good, and this goodness is a real occurrence in the world.
4. Ethical value is, by definition, what is good and what is bad. We have directly accessible data of occurrences of intrinsic goodness and badness. They are ethical value.
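Put schematically, the four points might be reconstructed as premises and a conclusion (a rough sketch of my own, not a formal proof; the labels P1–P4 and C are shorthand I am adding here):

```latex
% A rough premise-and-conclusion reconstruction of the four points above.
% Requires \usepackage{amsmath}. The labels P1--P4 and C are shorthand
% added for this sketch, not part of the original argument's wording.
\begin{align*}
&\text{P1: Conscious experiences are real, and are the most certain data we have.}\\
&\text{P2: Apart from what they represent, they exist in themselves as real occurrences.}\\
&\text{P3: Some conscious experiences are felt as intrinsically good or bad,}\\
&\qquad\text{so this felt goodness or badness is itself a real occurrence (from P1, P2).}\\
&\text{P4: Ethical value is, by definition, what is good and what is bad.}\\
&\text{C: Felt goodness and badness are real occurrences of ethical value (from P3, P4).}
\end{align*}
```

The disputed step, as the rest of this exchange shows, is whether C follows from P3 and P4, or whether P4 quietly shifts from the felt sense of “good” to the normative one.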
Did you read my article Arguments against the Orthogonality Thesis?
Of course!
Likewise, an experience of extreme success or pleasure is intrinsically felt as good, and this goodness is a real occurrence in the world.
-> Likewise, an experience of extreme success or pleasure is often intrinsically felt as good, and this feeling of goodness is a real occurrence in the world.
And that renders the 4th point moot—your extra axiom (the one that goes from “is” to “ought”) is “feelings of goodness are actually goodness”. I slightly disagree with that on a personal moral level, and entirely disagree with the assertion that it’s a logical transition.
This is a relevant discussion in another thread, by the way:
http://lesswrong.com/lw/gu1/decision_theory_faq/8lt9?context=3
I slightly disagree with that on a personal moral level, and entirely disagree with the assertion that it’s a logical transition.
Could you explain more at length for me?
The feeling of badness is something bad (imagine yourself or someone being tortured and tell me it’s not bad), and it is a real occurrence, because conscious contents are real occurrences. It is then a bad occurrence. A bad occurrence must be a bad ethical value. All this is data: since conscious perceptions have a directly accessible nature, they are “is”, and the “ought” is part of the definition of ethical value, namely that what is good ought to be promoted, and what is bad ought to be avoided.
This does not mean that we should seek direct good and avoid direct bad only in the immediate present, such as partying without end; it means that we should seek it in the present and the future, pursuing indirect values such as working, learning, and promoting peace and equality, so that the future, even in the longest term, will have direct value.
(To the anonymous users who down-voted this: do me the favor of posting a comment saying why you disagree, if you are sure that you are right and I am wrong; otherwise it’s just rudeness. The down-vote should be used as a censoring mechanism for inappropriate posts rather than to express disagreement with a reasonable point of view. I’m using my time to freely explain this as a favor to whoever is reading, and it’s a bit insulting and bad-mannered to down-vote it.)
Why? That’s an assertion—it won’t convince anyone who doesn’t already agree with you. And you’re using two meanings of the word “bad”—an unpleasant subjective experience, and badness according to a moral system. Minds in general need not have moral systems, or conversely may lack hedonistic feelings, making the argument incomprehensible to them.
I slightly disagree with that on a personal moral level, and entirely disagree with the assertion that it’s a logical transition.
Could you explain more at length for me?
I have a personal moral system that isn’t too far removed from the one you’re espousing (a bit more emphasis on preference). However, I do not assume that this moral system can be deduced from universal or logical principles, for the reasons stated above. Most humans will have moral systems not too far removed from ours (in the sense of Kolmogorov complexity—there are many human cultural universals, and our moral instincts are generally similar), but this isn’t a logical argument for the correctness of anything.
Why? That’s an assertion—it won’t convince anyone who doesn’t already agree with you. And you’re using two meanings of the word “bad”—an unpleasant subjective experience, and badness according to a moral system.
If it is a bad occurrence, then the definition of ethics, at least as I see it (or as this dictionary has it, although meaning is not authoritative), is to define what is good and bad (values), as normative ethics, and to bring about good and avoid bad, as applied ethics. It seems to be a matter of including something in a verbal definition, so it seems to be correct. Moral realism would follow. This is not undesirable but helpful, since anti-realism implies that our values are not really valuable, but just fiction.
Minds in general need not have moral systems, or conversely may lack hedonistic feelings, making the argument incomprehensible to them.
I agree; this would be a special case of incomplete knowledge about conscious animals. It would be possible, for instance, in some artificial intelligences, but they might learn about it indirectly by observing animals and humans, and by coming into contact with human culture in various forms. Otherwise, they might become morally anti-realist.
I have a personal moral system that isn’t too far removed from the one you’re espousing (a bit more emphasis on preference).
Could you explain this emphasis on preference a bit?
If it is a bad occurrence, then the definition of ethics, at least as I see it (or as this dictionary has it, although meaning is not authoritative), is to define what is good and bad (values), as normative ethics, and to bring about good and avoid bad, as applied ethics.
Which is exactly why I critiqued using the word “bad” for the conscious experiences, preferring “negative” or “unpleasant”, words which describe the conscious experience in a similar way without sneaking in normative claims.
I have a personal moral system that isn’t too far removed from the one you’re espousing (a bit more emphasis on preference).
Could you explain this emphasis on preference a bit?
Er, nothing complex—in my ethics, there are cases where preferences trump feelings (eg experience machines) and cases where feelings trump preferences (eg drug users who are very unhappy). That’s all I’m saying.
“Bad”, “negative”, and “unpleasant” all possess partial semantic correspondence, which justifies their being a value.
The normative claims need not be definitive and overriding in this case. Perhaps that is where your resistance to accepting them comes from. In moral realism, a justified preference or instrumental / indirect value that weighs more can overpower a direct feeling as well. This justified preference will ultimately be reducible to direct feelings in the present or in the future, for oneself or for others, though.
Could you give me examples of any reasonable preferences that could not be reducible to good and bad feelings in that sense?
Anyway, there is also the argument from personal identity, which calls for an equalization of values taking into account all subjects (equally valued, ceteris paribus) and their reasoning, if contextually equivalent. This could in itself be a partial refutation of the orthogonality thesis: a refutation in theory and for autonomous and free general superintelligent agents, but not necessarily for imprisoned and tampered-with ones.
Then they are no longer purely descriptive, and I can’t agree that they are logically or empirically true.
Apart from that, what do you think of the other points? If you wish, we could continue a conversation on another online medium.
Certainly, but I don’t have much time for the next few weeks :-( Send me a message in mid-April if you’re still interested!
I think that this is an important point: the previously argued normative badness of directly accessible bad conscious experiences is not absolute and definitive, including in terms of justifying actions. It should weigh on the scale with all other factors involved, even indirect and instrumental ones that could only affect intrinsic goodness or badness in a distant and unclear way.