I will answer by explaining my view of morally realist ethics.
Conscious experiences and their content are physical occurrences and real. They can differ from the world they represent, but they are still real occurrences. Their reality can be known with the highest possible certainty, above all else, including physics, because they are immediately and directly accessible, while the external world is accessible only indirectly.
Unlike the physical world, it seems that physical conscious perceptions can theoretically be anything. The content of conscious perceptions could, with the right technology, be controlled, as in a virtual world, and made to be anything, even things that differ from the external physical world. While the physical world has no ethical value except from conscious perceptions, conscious perceptions can be ethical value, and only by being good or bad conscious perceptions, or feelings. This seems to be so by definition, because ethical value is being good or bad.
That a conscious experience can be a good or bad physical occurrence is also a reality which can be felt and known with the highest possible certainty. This makes it rational, and an imperative, to follow it and care about it, to act in order to foster good conscious feelings and to prevent bad conscious feelings, because it is logical that this will make the universe better. This is acting ethically. Not acting accordingly is irrational and mistaken. Ethics is about realizing valuable states.
Human beings have primitive emotional and instinctive motivations that are not guided by intelligence and rationality. These primitive motivations can take control of human minds and make them act in irrational and unintelligent ways. Although human beings may consider it good to act according to their primitive motivations in cases in which they conflict with acting ethically, this would be an irrational and mistaken decision.
When primitive motivations conflict with human intelligent reason, these two could be thought of as two different agents inside one mind, with differing motivations. Intelligent reason does not always prevail, because primitive motivations have strong control of behavior. However, it would be rational and intelligent for intelligent reason to always take ultimate control of behavior if it could somehow suppress the power of primitive motivations. This might be done by strengthening intelligent reason and its control over motivations.
Actions which foster good conscious feelings and prevent bad conscious feelings need not do so in the short-term. Many effective actions tend to do so only in the long-term. Likewise, such actions need not do so directly; many effective actions only do so indirectly. Often it is rational to act if it is probable that it will be ethically positive eventually.
That people have personal identities is false; they are mere parts of the universe. This is clear upon advanced philosophical analysis, but can be hard to understand for those who haven’t thought much about it. An objective and impersonal perspective is called for. For this reason it is rational for all beings to ‘act ethically’ not only for themselves but also for all other beings in the same universe. For an explanation of why personal identities don’t exist, which is relevant to the question of why one should act ethically in a collective rather than selfish sense, see this brief essay: https://www.facebook.com/notes/jonatas-müller/universal-identity/10151189314697917
That a conscious experience can be a good or bad physical occurrence is also a reality which can be felt and known with the highest possible certainty. This makes it rational, and an imperative, to follow it and care about it, to act in order to foster good conscious feelings and to prevent bad conscious feelings, because it is logical that this will make the universe better. This is acting ethically. Not acting accordingly is irrational and mistaken. Ethics is about realizing valuable states.
Why is fostering good conscious feelings and preventing bad conscious feelings necessarily correct? It is intuitive for humans to say we should maximize conscious experience, and that falls under the success theory that Peter talks about, but why is this necessarily the one true moral system?
You say
Ethics is about realizing valuable states.
But valuable to whom? If there were a person who valued others being in pain, why would this person’s views matter less?
“Why is fostering good conscious feelings and preventing bad conscious feelings necessarily correct? It is intuitive for humans to say we should maximize conscious experience, and that falls under the success theory that Peter talks about, but why is this necessarily the one true moral system?”
If we agree that good and bad feelings are good and bad, and that only conscious experiences produce direct ethical value, which lies in their good or bad quality, then theories that contradict this should not be correct, or they would need to justify their points, but it seems that they have trouble in that area.
“But valuable to whom? If there were a person who valued others being in pain, why would this person’s views matter less?”
:) That’s a beauty of personal identities not existing. It doesn’t matter who it is. In the case of valuing others being in pain, would it be generating pleasure from it? In that case, lots of things have to be considered, among which: the net balance of good and bad feelings caused by the actions; the societal effects of legalizing certain actions or not...
Unlike the physical world, it seems that physical conscious perceptions can theoretically be anything.
Not quite anything, since the size and complexity of conscious thought is bounded by the human brain. But that is not relevant to this discussion of ethics.
While the physical world has no ethical value except from conscious perceptions, conscious perceptions can be ethical value, and only by being good or bad conscious perceptions, or feelings. This seems to be so by definition, because ethical value is being good or bad.
Should I interpret this as you defining ethics as good and bad feelings?
That a conscious experience can be a good or bad physical occurrence is also a reality which can be felt and known with the highest possible certainty. This makes it rational, and an imperative, to follow it and care about it, to act in order to foster good conscious feelings and to prevent bad conscious feelings
“Not quite anything, since the size and complexity of conscious thought is bounded by the human brain. But that is not relevant to this discussion of ethics.”
Indeed, conscious experience may be bounded by the size and complexity of brains or similar machinery, whether of humans, other animals, or cyborgs. Theoretically, conscious perceptions may be able to be anything (or nearly anything), as we could theorize about brains the size of Jupiter or much larger. You get the point.
“Should I interpret this as you defining ethics as good and bad feelings?”
Almost. Not ethics, but ethical value in a direct, ultimate sense. There is also indirect value, which consists of things that can lead to direct value, which are myriad, and ethics is much more than defining value: it comprises laws, decision theory, heuristics, empirical research, and many theoretical considerations. I’m aware that Eliezer has written a post on Less Wrong saying that ethical value does not rest on happiness alone. Although happiness alone is not my proposition, I find his post on the topic quite poorly developed, and really not an advisable read.
“So, do you endorse wireheading?”
This depends very much on the context. All else being equal, wireheading could be good for some people, depending on its implications. However, all else seems hardly equal in this case. People seem to have a diverse spectrum of good feelings that may not be covered by the wireheading (such as love, some types of physical pleasure, good smell and taste, and many others), and the wireheading might prevent people from being functional and acting to increase ethical value in the long term, possibly negating its benefits. I see wireheading, in the sense of artificial paradise simulations, as a possibly desirable condition in a rather distant future of ideal development and post-scarcity, though.
While the physical world has no ethical value except from conscious perceptions, conscious perceptions can be ethical value, and only by being good or bad conscious perceptions, or feelings. This seems to be so by definition, because ethical value is being good or bad.
That a conscious experience can be a good or bad physical occurrence is also a reality which can be felt and known with the highest possible certainty.
A bit unclear, but I’m assuming you mean something like “we have good or bad (technically, pleasant or unpleasant) conscious experiences, and we know this with great certainty”. That seems fine.
This makes it rational, and an imperative, to follow it and care about it, to act in order to foster good conscious feelings and to prevent bad conscious feelings, because it is logical that this will make the universe better.
Why? This is the whole core of the disagreement, and you’re zooming over it way too fast. Even for ourselves, our wanting systems and our liking systems are not well aligned—we want things we don’t like, and vice-versa. A preference utilitarian would say our wants are the most important; you seem to disagree, focusing on the good/bad aspect instead. But what logical reason would there be to follow one or the other?
You seem to get words to do too much of the work. We have innate senses of positivity and negativity for certain experiences; we also have an innate sense that morality exists. But those together do not make positive experiences good “by definition” (nor does calling them “good” rather than “positive”).
But those are relatively minor points—if there was a single consciousness in the universe, then maybe your argument could get off the ground. But we have many current and potential consciousnesses, with competing values and conscious experiences. You seem to be saying that we should logically be altruists, because we have conscious experiences. I agree we should be altruists; but that’s a personal preference, and there’s no logic to it. Following your argument (consciousness before physics) one could perfectly well become a solipsist, believing only one’s own mind exists, and ignoring others. Or you could be a racist altruist, preferring certain individuals or conscious experiences. Or you could put all experiences together on an infinite number of comparative scales (there is no intrinsic measure to compare the quality of two positive experiences in different people).
But in a way, that’s entirely a moot point. Your claim is that a certain ethics logically follows from our conscious reality. There I must ask you to prove it. State your assumptions, show your claims, present the deductions. You’ll need to do that, before we can start critiquing your position properly.
Why? This is the whole core of the disagreement, and you’re zooming over it way too fast. Even for ourselves, our wanting systems and our liking systems are not well aligned—we want things we don’t like, and vice-versa. A preference utilitarian would say our wants are the most important; you seem to disagree, focusing on the good/bad aspect instead. But what logical reason would there be to follow one or the other?
Indeed, wanting and liking do not always correspond, even from a neurological perspective. Wanting involves planning, and planning often involves error. We often want things mistakenly, whether for evolutionarily selected reasons, cultural reasons, or just bad planning. Liking is what matters, because it can be immediately and directly determined to be good, with the highest certainty. This is an empirical confirmation of its value, while wanting is like an empty promise.
We have good and bad feelings associated with some evolutionarily or culturally determined things. Theoretically, the result of good and bad feelings could be associated with any inputs. The inputs don’t matter, nor does wanting necessarily matter, nor innate intuitions of morality. The only thing that has direct value, which is empirically confirmed, is good and bad feelings.
if there was a single consciousness in the universe, then maybe your argument could get off the ground. But we have many current and potential consciousnesses, with competing values and conscious experiences.
Well noticed. That comment was not well elaborated and is not a complete explanation. It is also necessary for that point you mentioned to consider the philosophy of personal identities, which is a point that I examine in my more complete essay on Less Wrong, and also in my essay Universal Identity.
But in a way, that’s entirely a moot point. Your claim is that a certain ethics logically follows from our conscious reality. There I must ask you to prove it. State your assumptions, show your claims, present the deductions. You’ll need to do that, before we can start critiquing your position properly.
I have a small essay written on ethics, but it’s a detailed topic, and my article may be too concise, assuming much previous reading on the subject. It is here. I propose that we instead focus on questions as they come up.
Liking is what matters, because it can be immediately and directly determined to be good, with the highest certainty.
That is your opinion. Others believe wanting is fundamental and rational, something that can be checked, explained, and shared, while liking is a misleading emotional response (one that probably shows much less consistency, too).
How would you resolve the difference? They say something is more important; you say something else is. Neither of you disagrees about the facts of the world, just about what is important and what isn’t. What can you point to that makes this into a logical disagreement?
One argument is that from empiricism or verification. Wanting can be and often is wrong. Simple examples can show this, but I assume that they won’t be needed because you understand. Liking can be misleading in terms of motivation or in terms of the external object which is liked, but it cannot be misleading or wrong in itself, in that it is a good feeling. For instance, a person could like to use cocaine, and this might be misleading in terms of being a wrong motivation, that in the long-term would prove destructive and dislikeable. However, immediately, in terms of the sensation of liking itself, and all else being equal, then it is certainly good, and this is directly verifiable by consciousness.
Taking this into account, some would argue for wanting values X, Y, or Z, but not values A, B, or C. This is another matter. I’m arguing that good and bad feelings are the direct values that have validity and should be wanted. Other valid values are those that are instrumentally reducible to these, which are very many, and most of what we do.
Liking can be misleading in terms of motivation or in terms of the external object which is liked, but it cannot be misleading or wrong in itself, in that it is a good feeling.
“Wanting can be misleading in terms of the long term or in terms of the internal emotional state with which it is connected, but it cannot be misleading or wrong in itself, in that it is a clear preference.”
Indeed, but what separates wanting and liking is that preferences can be wrong, as they require no empirical basis, while liking in itself cannot be wrong, and it has an empirical basis.
When rightfully wanting something, that something gets a justification. Liking, understood as good feelings, is one justification; avoiding bad feelings is another; and this can be causally extended to include instrumental actions that will cause these in indirect ways.
Then how can wanting be wrong? They’re there, they’re conscious preferences (you can introspect and get them, just as liking), and they have as much empirical basis as liking.
And wanting can be seen as more fundamental—wants are your preferences, and inform your actions (along with your world model), whereas using liking to take action involves having a (potentially flawed) mental model of what will increase your good experiences and diminish bad ones.
The game can be continued endlessly—what you’re saying is that your moral system revolves around liking, and that the arguments that this should be so are convincing to you. But you can’t convince wanters with the same argument—their convictions are different, and neither set of arguments is “logical”. It becomes a taste-based debate.
Sorry, I thought you already understood why wanting can be wrong.
Example 1: imagine a person named Eliezer walks to an ice cream stand, and picks a new flavor X. Eliezer wants to try the flavor X of ice cream. Eliezer buys it and eats it. The taste is awful and Eliezer vomits it. Eliezer concludes that wanting can be wrong and that it is different from liking in this sense.
Example 2: imagine Eliezer watched a movie in which some homophobic gangsters go about killing homosexuals. Eliezer gets inspired and wants to kill homosexuals too, so he picks a knife and finds a nice looking young man and prepares to torture and kill him. Eliezer looks at the muscular body of the young man, and starts to feel homosexual urges and desires, and instead he makes love with the homosexual young man. Eliezer concludes that he wanted something wrong and that he had been a bigot and homosexual all along, liking men, but not wanting to kill them.
I understand why those examples are wrong. Because I have certain beliefs (broadly, but not universally, shared). But I don’t see how any of those beliefs can be logically deduced.
Quite a lot follows from “positive conscious experiences are intrinsically valuable”, but that axiom won’t be accepted unless you already partially agree with it anyway.
I don’t think that someone can disagree with it (good conscious feelings are intrinsically good; bad conscious feelings are intrinsically bad), because it would be akin to disagreeing that, for instance, the color green feels greenish. Do you disagree with it?
Because I have certain beliefs (broadly, but not universally, shared). But I don’t see how any of those beliefs can be logically deduced.
Can you elaborate? I don’t understand… Many valid wants or beliefs can ultimately be reduced to good and bad feelings, in the present or future, for oneself or for others, as instrumental values, such as peace, learning, curiosity, love, security, longevity, health, science...
I don’t think that someone can disagree with it (good conscious feelings are intrinsically good; bad conscious feelings are intrinsically bad), because it would be akin to disagreeing that, for instance, the color green feels greenish. Do you disagree with it?
I do disagree with it! :-) Here is what I agree with:
That humans have positive and negative conscious experiences.
That humans have an innate sense that morality exists: that good and bad mean something.
That humans have preferences.
I’ll also agree that preferences often (but not always) track the positive or negative conscious experiences of that human. That human impressions of good and bad sometimes (but not always) track positive or negative conscious experiences of humans in general, at least approximately.
But I don’t see any grounds for saying “positive conscious experiences are intrinsically (or logically) good”. That seems to be putting in far too many extra connotations, and moving far beyond the facts we know.
I think that the argument for the intrinsic value (goodness or badness) of conscious feelings goes like this:
Conscious experiences are real, and are the most certain data about the world, because they are directly accessible and don’t depend on inference, unlike the external world as we perceive it. It would not be possible to dismiss conscious experiences as unreal, inferring that they are not part of the external world, since they are more certain than the external world is. The external world could be an illusion; we could be living inside a simulated virtual world, in an underlying universe that is alien and has different physical laws.
Even though conscious experiences are representations (sometimes of external physical states, sometimes of abstract internal states), apart from what they represent they do exist in themselves as real phenomena (likely physical).
Conscious experiences can be felt as intrinsically neutral, good, or bad in value, sometimes intensely so. For example, the bad value of having deep surgery without anesthesia is felt as intrinsically and intensely bad, and this badness is a real occurrence in the world. Likewise, an experience of extreme success or pleasure is intrinsically felt as good, and this goodness is a real occurrence in the world.
Ethical value is, by definition, what is good and what is bad. We have directly accessible data of occurrences of intrinsic goodness and badness. They are ethical value.
Did you read my article Arguments against the Orthogonality Thesis?
Of course!
Likewise, an experience of extreme success or pleasure is intrinsically felt as good, and this goodness is a real occurrence in the world.
-> Likewise, an experience of extreme success or pleasure is often intrinsically felt as good, and this feeling of goodness is a real occurrence in the world.
And that renders the 4th point moot—your extra axiom (the one that goes from “is” to “ought”) is “feelings of goodness are actually goodness”. I slightly disagree with that on a personal moral level, and entirely disagree with the assertion that it’s a logical transition.
I slightly disagree with that on a personal moral level, and entirely disagree with the assertion that it’s a logical transition.
Could you explain more at length for me?
The feeling of badness is something bad (imagine yourself or someone being tortured and tell me it’s not bad), and it is a real occurrence, because conscious contents are real occurrences. It is then a bad occurrence. A bad occurrence must be a bad ethical value. All this is data, since conscious perceptions have a directly accessible nature; they are “is”. The “ought” is part of the definition of ethical value: what is good ought to be promoted, and what is bad ought to be avoided.
This does not mean that we should seek direct good and avoid direct bad only in the immediate present, such as by partying without end; it means that we should seek it in the present and the future, pursuing indirect values such as working, learning, and promoting peace and equality, so that the future, even in the longest term, will have direct value.
(To the anonymous users who down-voted this: do me the favor of posting a comment saying why you disagree, if you are sure that you are right and I am wrong; otherwise it’s just rudeness. The down-vote should be used as a censoring mechanism for inappropriate posts rather than to express disagreement with a reasonable point of view. I’m using my time to freely explain this as a favor to whoever is reading, and it’s a bit insulting and bad-mannered to down-vote it.)
Why? That’s an assertion—it won’t convince anyone who doesn’t already agree with you. And you’re using two meanings of the word “bad”—an unpleasant subjective experience, and badness according to a moral system. Minds in general need not have moral systems, or conversely may lack hedonistic feelings, making the argument incomprehensible to them.
I slightly disagree with that on a personal moral level, and entirely disagree with the assertion that it’s a logical transition.
Could you explain more at length for me?
I have a personal moral system that isn’t too far removed from the one you’re espousing (a bit more emphasis on preference). However, I do not assume that this moral system can be deduced from universal or logical principles, for the reasons stated above. Most humans will have moral systems not too far removed from ours (in the sense of Kolmogorov complexity—there are many human cultural universals, and our moral instincts are generally similar), but this isn’t a logical argument for the correctness of something.
Why? That’s an assertion—it won’t convince anyone who doesn’t already agree with you. And you’re using two meanings of the word “bad”—an unpleasant subjective experience, and badness according to a moral system.
If it is a bad occurrence, then the definition of ethics, at least as I see it (or as this dictionary has it, although meaning is not authoritative), is defining what is good and bad (values), as normative ethics, and bringing about good and avoiding bad, as applied ethics. It seems to be a matter of including something in a verbal definition, so it seems to be correct. Moral realism would follow. It is not undesirable, but helpful, since anti-realism implies that our values are not really valuable, but just fiction.
Minds in general need not have moral systems, or conversely may lack hedonistic feelings, making the argument incomprehensible to them.
I agree, this would be a special case of incomplete knowledge about conscious animals. This would be possible, for instance, in some artificial intelligences, but they might learn about it indirectly by observing animals and humans, and by coming into contact with human culture in various forms. Otherwise, they might become morally anti-realist.
I have a personal moral system that isn’t too far removed from the one you’re espousing (a bit more emphasis on preference).
Could you explain a bit this emphasis on preference?
If it is a bad occurrence, then the definition of ethics, at least as I see it (or this dictionary, although meaning is not authoritative), is defining what is good and bad (values), as normative ethics, and bringing about good and avoiding bad, as applied ethics.
Which is exactly why I critiqued using the word “bad” for the conscious experiences, favoring instead “negative” or “unpleasant”, words which describe the conscious experience in a similar way without sneaking in normative claims.
I have a personal moral system that isn’t too far removed from the one you’re espousing (a bit more emphasis on preference).
Could you explain a bit this emphasis on preference?
Er, nothing complex—in my ethics, there are cases where preferences trump feelings (eg experience machines) and cases where feelings trump preferences (eg drug users who are very unhappy). That’s all I’m saying.
Bad, negative, unpleasant, all possess partial semantic correspondence, which justifies their being a value.
The normative claims in this case need not be definitive and overruling. Perhaps that is where your resistance to accepting it comes from. In moral realism, a justified preference or instrumental/indirect value that weighs more can overpower a direct feeling as well. This justified preference will ultimately be reducible to direct feelings in the present or in the future, for oneself or for others, though.
Could you give me examples of any reasonable preferences that could not be reduced to good and bad feelings in that sense?
Anyway, there is also the argument from personal identity, which calls for an equalization of values taking into account all subjects (equally valued, ceteris paribus), and their reasoning, if contextually equivalent. This could be in itself a partial refutation of the orthogonality thesis: a refutation in theory and for autonomous and free general superintelligent agents, but not necessarily for imprisoned and tampered-with ones.
I think that this is an important point: the previously argued normative badness of directly accessible bad conscious experiences is not absolute and definitive in justifying actions. It should weigh on the scale with all other factors involved, even indirect and instrumental ones that could only affect intrinsic goodness or badness in a distant and unclear way.
That people have personal identities is false; they are mere parts of the universe. This is clear upon advanced philosophical analysis, but can be hard to understand for those who haven’t thought much about it. An objective and impersonal perspective is called for. For this reason it is rational for all beings to ‘act ethically’ not only for themselves but also for all other beings in the same universe. For an explanation of why personal identities don’t exist, which is relevant to the question of why one should act ethically in a collective rather than selfish sense, see this brief essay: https://www.facebook.com/notes/jonatas-müller/universal-identity/10151189314697917
Right now I see this as perhaps the most challenging and serious form of moral realism, so I definitely intend to take time and care to study it. I’ll have to get back to you, as I think I said I would before.
I think that it is a worthy use of time, and I applaud your rational attitude of looking to refute one’s theories. I also like to do that in order to evolve them and discard wrong parts.
Don’t hesitate to bring up specific parts for debate.
I don’t directly apprehend anything as the being “good” or the “bad” in the moral realist sense and I don’t count other peoples’ accounts of directly apprehending such things as evidence (especially since schizophrenics and theists exist).
Conscious perceptions are quite direct and simple. Do you feel, for example, a bad feeling like intense pain as being a bad occurrence (which, like all occurrences in the universe, is physical), and likewise, for example, a good feeling like a delicious taste as being a good occurrence?
I argue that these are perceived with the highest degree of certainty of all things and are the only things that can be ultimately linked to direct good and bad value.
No, though I admit it has felt like that for me at some points in my life. Even if I did, there are a bunch of reasons why I would not trust that intuition.
I like certain things and dislike certain things, and in a certain sense I would be mistaken if I were doing things that reliably caused me pain. That certain sense is that if I were better informed I would not take that action. If, however, I liked pain, I would still take that action, and so I would not be mistaken. I could go through the same process to explain why a sadist is not mistaken.
I do not know what else to say except that this is just an appeal to intuition, and that specific intuitions are worthless unless they are proven to reliably point towards the truth.
Liking pain seems impossible, as it is an aversive feeling. However, for some people, some types of pain or self-harm cause a distraction from underlying emotional pain, which is felt as good or relieving, or they may provide some thrill. In these cases, though, it seems that it is always pain plus some associated good feeling, or some relief of an underlying bad feeling, and it is for the good feeling or relief that they want pain, rather than pain for itself.
Conscious perceptions in themselves seem to be what is most certain in terms of truth. The things they represent, such as the physical world, may be illusions, but one cannot doubt feeling the illusions themselves.
Let’s play the Monday-Tuesday game. On Monday I like pain. On Tuesday I like some associated good feeling that pain provides. What’s the difference between Monday and Tuesday?
The idea that one can like pain in itself is not substantiated by evidence. Masochists or self-harmers seek some pleasure or relief they get from pain or humiliation, not pain for itself. They won’t stick their hands in a pot with boiling water.
To follow that line of reasoning, please provide evidence that there exists anyone that enjoys pain in itself. I find that unbelievable, as pain is aversive by nature.
This is not how you play the Monday-Tuesday game! Also, a request to play the Monday-Tuesday game isn’t an argument, it’s a request for clarification. Specifically, I’m asking you to clarify what the difference between two statements is. Maybe we should try a simpler example:
On Monday I like ice cream. On Tuesday I like some associated good feeling that ice cream provides. What’s the difference between Monday and Tuesday?
Who cares about that silly game. Accepting to play it or not is my choice.
You can only validly like ice cream by way of feelings, because all that you have direct access to in this universe is consciousness. The difference between Monday and Tuesday in your example is only in the nature of the feelings involved. In the pain example, it is liked by virtue of the association with other good feelings, not pain in itself. If a person somehow loses the associated good feelings, certain painful stimuli cease to be desirable.
If a person somehow loses the associated good feelings, ice cream also ceases to be desirable. I still don’t see the difference between Monday and Tuesday.
I think I might have some idea what you mean about masochists not liking pain. Let me tell a different story, and you can tell me whether you agree...
Masochists like pain, but only in very specific environments, such as roleplaying fantasies. Within that environment, masochists like pain because of how it affects the overall experience of the fantasy. Outside that environment, masochists are just as pain-averse as the rest of the world.
Yes, in the same way that explaining your ideas well or poorly is your choice, but I don’t see what this has to do with explaining the difference between liking X and liking associated good feelings that X provides.
Actions which foster good conscious feelings and prevent bad conscious feelings need not do so in the short term. Many effective actions tend to do so only in the long term. Likewise, such actions need not do so directly; many effective actions only do so indirectly. Often it is rational to act if the action will probably be ethically positive eventually.
That people have personal identities is false; they are mere parts of the universe. This is clear upon advanced philosophical analysis, but can be hard to grasp for those who haven’t thought much about it. An objective and impersonal perspective is called for. For this reason it is rational for all beings to ‘act ethically’ not only for themselves but also for all other beings in the same universe. For an explanation of why personal identities don’t exist, which is relevant to the question of why one should act ethically in a collective rather than selfish sense, see this brief essay:
https://www.facebook.com/notes/jonatas-müller/universal-identity/10151189314697917
Why is fostering good conscious feelings and preventing bad conscious feelings necessarily correct? It is intuitive for humans to say we should maximize conscious experience, and that falls under the success theory that Peter talks about, but why is this necessarily the one true moral system?
You say
But valuable to who? If there were a person who valued others being in pain, why would this person’s views matter less?
“Why is fostering good conscious feelings and preventing bad conscious feelings necessarily correct? It is intuitive for humans to say we should maximize conscious experience, and that falls under the success theory that Peter talks about, but why is this necessarily the one true moral system?”
If we agree that good and bad feelings are good and bad, and that only conscious experiences produce direct ethical value, which lies in their good or bad quality, then theories that contradict this should not be correct; they would need to justify their points, but it seems that they have trouble in that area.
“But valuable to who? If there were a person who valued others being in pain, why would this person’s views matter less?”
:) That’s the beauty of personal identities not existing: it doesn’t matter who it is. In the case of valuing others being in pain, would it be generating pleasure from it? In that case, lots of things have to be considered, among them: the net balance of good and bad feelings caused by the actions; the societal effects of legalizing or not legalizing certain actions...
Not quite anything, since the size and complexity of conscious thought is bounded by the human brain. But that is not relevant to this discussion of ethics.
Should I interpret this as you defining ethics as good and bad feelings?
So, do you endorse wireheading?
“Not quite anything, since the size and complexity of conscious thought is bounded by the human brain. But that is not relevant to this discussion of ethics.”
Indeed, conscious experience may be bounded by the size and complexity of brains or similar machinery, whether of humans, other animals, or cyborgs. Theoretically, conscious perceptions may be able to be anything (or nearly anything), as we could theorize about brains the size of Jupiter or much larger. You get the point.
“Should I interpret this as you defining ethics as good and bad feelings?”
Almost. Not ethics, but ethical value in a direct, ultimate sense. There is also indirect value, which consists of things that can lead to direct value, and these are myriad. Ethics is much more than defining value: it comprises laws, decision theory, heuristics, empirical research, and many theoretical considerations. I’m aware that Eliezer has written a post on Less Wrong saying that ethical value does not rest on happiness alone. Although happiness alone is not my proposition, I find his post on the topic quite poorly developed, and really not an advisable read.
“So, do you endorse wireheading?”
This depends very much on the context. All else being equal, wireheading could be good for some people, depending on its implications. However, all else seems hardly equal in this case. People seem to have a diverse spectrum of good feelings that may not be covered by the wireheading (such as love, some types of physical pleasure, good smell and taste, and many others), and the wireheading might prevent people from being functional and acting to increase ethical value in the long term, possibly negating its benefits. I see wireheading, in the sense of artificial paradise simulations, as a possibly desirable condition in a rather distant future of ideal development and post-scarcity, though.
A bit unclear, but I’m assuming you mean something like “we have good or bad (technically, pleasant or unpleasant) conscious experiences, and we know this with great certainty”. That seems fine.
Why? This is the whole core of the disagreement, and you’re zooming over it way too fast. Even for ourselves, our wanting systems and our liking systems are not well aligned—we want things we don’t like, and vice-versa. A preference utilitarian would say our wants are the most important; you seem to disagree, focusing on the good/bad aspect instead. But what logical reason would there be to follow one or the other?
You seem to be making words do too much of the work. We have innate senses of positivity and negativity for certain experiences; we also have an innate sense that morality exists. But those together do not make positive experiences good “by definition” (nor does calling them “good” rather than “positive”).
But those are relatively minor points—if there were a single consciousness in the universe, then maybe your argument could get off the ground. But we have many current and potential consciousnesses, with competing values and conscious experiences. You seem to be saying that we should logically be altruists, because we have conscious experiences. I agree we should be altruists; but that’s a personal preference, and there’s no logic to it. Following your argument (consciousness before physics) one could perfectly well become a solipsist, believing only one’s own mind exists, and ignoring others. Or you could be a racist altruist, preferring certain individuals or conscious experiences. Or you could put all experiences together on an infinite number of comparative scales (there is no intrinsic measure to compare the quality of two positive experiences in different people).
But in a way, that’s entirely a moot point. Your claim is that a certain ethics logically follows from our conscious reality. There I must ask you to prove it. State your assumptions, show your claims, present the deductions. You’ll need to do that, before we can start critiquing your position properly.
Hi Stuart,
Indeed, wanting and liking do not always correspond, also from a neurological perspective. Wanting involves planning, and planning often involves error. We often want things mistakenly, be it for evolutionarily selected reasons, cultural reasons, or just bad planning. Liking is what matters, because it can be immediately and directly determined to be good, with the highest certainty. This is an empirical confirmation of its value, while wanting is like an empty promise.
We have good and bad feelings associated with some evolutionarily or culturally determined things. Theoretically, the result of good and bad feelings could be associated with any inputs. The inputs don’t matter, nor does wanting necessarily matter, nor innate intuitions of morality. The only thing that has direct value, which is empirically confirmed, is good and bad feelings.
Well noticed. That comment was not well elaborated and is not a complete explanation. It is also necessary for that point you mentioned to consider the philosophy of personal identities, which is a point that I examine in my more complete essay on Less Wrong, and also in my essay Universal Identity.
I have a small essay written on ethics, but it’s a detailed topic, and my article may be too concise, assuming much previous reading on the subject. It is here. I propose that we instead focus on questions as they come up.
That is your opinion. Others believe wanting is fundamental and rational, something that can be checked and explained and shared—while liking is a misleading emotional response (one that probably shows much less consistency, too).
How would you resolve the difference? They say something is more important, you say something else is. Neither of you disagrees about the facts of the world, just about what is important and what isn’t. What can you point to that makes this into a logical disagreement?
One argument is that from empiricism or verification. Wanting can be and often is wrong. Simple examples can show this, but I assume that they won’t be needed because you understand. Liking can be misleading in terms of motivation or in terms of the external object which is liked, but it cannot be misleading or wrong in itself, in that it is a good feeling. For instance, a person could like to use cocaine, and this might be misleading in terms of being a wrong motivation, one that in the long term would prove destructive and dislikeable. However, immediately, in terms of the sensation of liking itself, and all else being equal, it is certainly good, and this is directly verifiable by consciousness.
Taking this into account, some would argue for wanting values X, Y, or Z, but not values A, B, or C. This is another matter. I’m arguing that good and bad feelings are the direct values that have validity and should be wanted. Other valid values are those that are instrumentally reducible to these, which are very many, and most of what we do.
“Wanting can be misleading in terms of the long term or in terms of the internal emotional state with which it is connected, but it cannot be misleading or wrong in itself, in that it is a clear preference.”
Indeed, but what separates wanting and liking is that preferences can be wrong, they require no empirical basis, while liking in itself cannot be wrong, and it has an empirical basis.
When rightfully wanting something, that something gets a justification. Liking, understood as good feelings, is a justification, while another is avoiding bad feelings, and this can be causally extended to include instrumental actions that will cause this in indirect ways.
Then how can wanting be wrong? They’re there, they’re conscious preferences (you can introspect and get them, just as liking), and they have as much empirical basis as liking.
And wanting can be seen as more fundamental—wants are your preferences, and inform your actions (along with your world model), whereas using liking to take action involves having a (potentially flawed) mental model of what will increase your good experiences and diminish bad ones.
The game can be continued endlessly—what you’re saying is that your moral system revolves around liking, and that the arguments that this should be so are convincing to you. But you can’t convince wanters with the same argument—their convictions are different, and neither set of arguments are “logical”. It becomes a taste-based debate.
Sorry, I thought you already understood why wanting can be wrong.
Example 1: imagine a person named Eliezer walks to an ice cream stand, and picks a new flavor X. Eliezer wants to try the flavor X of ice cream. Eliezer buys it and eats it. The taste is awful and Eliezer vomits it. Eliezer concludes that wanting can be wrong and that it is different from liking in this sense.
Example 2: imagine Eliezer watched a movie in which some homophobic gangsters go about killing homosexuals. Eliezer gets inspired and wants to kill homosexuals too, so he picks up a knife, finds a nice-looking young man, and prepares to torture and kill him. Eliezer looks at the muscular body of the young man and starts to feel homosexual urges and desires, and instead he makes love with the homosexual young man. Eliezer concludes that he wanted something wrong and that he had been a bigot and homosexual all along, liking men, but not wanting to kill them.
I understand why those examples are wrong. Because I have certain beliefs (broadly, but not universally, shared). But I don’t see how any of those beliefs can be logically deduced.
Quite a lot follows from “positive conscious experiences are intrinsically valuable”, but that axiom won’t be accepted unless you already partially agree with it anyway.
I don’t think that someone can disagree with it (good conscious feelings are intrinsically good; bad conscious feelings are intrinsically bad), because it would be akin to disagreeing that, for instance, the color green feels greenish. Do you disagree with it?
Can you elaborate? I don’t understand… Many valid wants or beliefs can ultimately be reduced to good and bad feelings, in the present or future, for oneself or for others, as instrumental values, such as peace, learning, curiosity, love, security, longevity, health, science...
I do disagree with it! :-) Here is what I agree with:
That humans have positive and negative conscious experiences.
That humans have an innate sense that morality exists: that good and bad mean something.
That humans have preferences.
I’ll also agree that preferences often (but not always) track the positive or negative conscious experiences of that human. That human impressions of good and bad sometimes (but not always) track positive or negative conscious experiences of humans in general, at least approximately.
But I don’t see any grounds for saying “positive conscious experiences are intrinsically (or logically) good”. That seems to be putting in far too many extra connotations, and moving far beyond the facts we know.
I agree with what you agree with.
Did you read my article Arguments against the Orthogonality Thesis?
I think that the argument for the intrinsic value (goodness or badness) of conscious feelings goes like this:
Conscious experiences are real, and are the most certain data about the world, because they are directly accessible and don’t depend on inference, unlike the external world as we perceive it. It would not be possible to dismiss conscious experiences as unreal by inferring that they are not part of the external world, since they are more certain than the external world is. The external world could be an illusion; we could be living inside a simulated virtual world, in an underlying universe that is alien and has different physical laws.
Even though conscious experiences are representations (sometimes of external physical states, sometimes of abstract internal states), apart from what they represent they do exist in themselves as real phenomena (likely physical).
Conscious experiences can be felt as intrinsically neutral, good, or bad in value, sometimes intensely so. For example, the bad value of having deep surgery without anesthesia is felt as intrinsically and intensely bad, and this badness is a real occurrence in the world. Likewise, an experience of extreme success or pleasure is intrinsically felt as good, and this goodness is a real occurrence in the world.
Ethical value is, by definition, what is good and what is bad. We have directly accessible data of occurrences of intrinsic goodness and badness. They are ethical value.
Of course!
-> Likewise, an experience of extreme success or pleasure is often intrinsically felt as good, and this feeling of goodness is a real occurrence in the world.
And that renders the 4th point moot—your extra axiom (the one that goes from “is” to “ought”) is “feelings of goodness are actually goodness”. I slightly disagree with that on a personal moral level, and entirely disagree with the assertion that it’s a logical transition.
This is a relevant discussion in another thread, by the way:
http://lesswrong.com/lw/gu1/decision_theory_faq/8lt9?context=3
Could you explain more at length for me?
The feeling of badness is something bad (imagine yourself or someone being tortured and tell me it’s not bad), and it is a real occurrence, because conscious contents are real occurrences. It is then a bad occurrence. A bad occurrence must be a bad ethical value. All this is data, since conscious perceptions have a directly accessible nature; they are “is”, and the “ought” is part of the definition of ethical value: what is good ought to be promoted, and what is bad ought to be avoided.
This does not mean that we should seek direct good and avoid direct bad only in the immediate present, such as partying without end; it means that we should seek it in the present and the future, pursuing indirect values such as working, learning, and promoting peace and equality, so that the future, even in the longest term, will have direct value.
(To the anonymous users who down-voted this: do me the favor of posting a comment saying why you disagree, if you are sure that you are right and I am wrong; otherwise it’s just rudeness. The down-vote should be used as a censoring mechanism for inappropriate posts rather than to express disagreement with a reasonable point of view. I’m using my time to freely explain this as a favor to whoever is reading, and it’s a bit insulting and bad-mannered to down-vote it.)
Why? That’s an assertion—it won’t convince anyone who doesn’t already agree with you. And you’re using two meanings of the word “bad”—an unpleasant subjective experience, and badness according to a moral system. Minds in general need not have moral systems, or conversely may lack hedonistic feelings, making the argument incomprehensible to them.
I have a personal moral system that isn’t too far removed from the one you’re espousing (a bit more emphasis on preference). However, I do not assume that this moral system can be deduced from universal or logical principles, for the reasons stated above. Most humans will have moral systems not too far removed from ours (in the sense of Kolmogorov complexity—there are many human cultural universals, and our moral instincts are generally similar), but this isn’t a logical argument for the correctness of something.
If it is a bad occurrence, then ethics, at least as I see it (or as this dictionary has it, although dictionary meanings are not authoritative), consists of defining what is good and bad (values), as normative ethics, and of bringing about good and avoiding bad, as applied ethics. It seems to be a matter of including something in a verbal definition, so it seems to be correct. Moral realism would follow. It is not undesirable but helpful, since anti-realism implies that our values are not really valuable, but just fiction.
I agree, this would be a special case of incomplete knowledge about conscious animals. This would be possible, for instance, in some artificial intelligences, but they might learn about it indirectly by observing animals and humans and by coming into contact with human culture in various forms. Otherwise, they might become morally anti-realist.
Could you explain a bit this emphasis on preference?
Which is exactly why I critiqued using the word “bad” for the conscious experiences, preferring “negative” or “unpleasant”, words which describe the conscious experience in a similar way without sneaking in normative claims.
Er, nothing complex—in my ethics, there are cases where preferences trump feelings (eg experience machines) and cases where feelings trump preferences (eg drug users who are very unhappy). That’s all I’m saying.
Bad, negative, and unpleasant all possess partial semantic correspondence, which justifies their being a value.
The normative claims need not be definitive and overruling in that case. Perhaps that is where your resistance to accepting it comes from. In moral realism, a justified preference or instrumental / indirect value that weighs more can overpower a direct feeling as well. This justified preference will ultimately be reducible to direct feelings in the present or in the future, for oneself or for others, though.
Could you give me examples of any reasonable preferences that could not be reducible to good and bad feelings in that sense?
Anyway, there is also the argument from personal identity, which calls for equalizing values across all subjects (valued equally, ceteris paribus), and their reasoning, if contextually equivalent. This could be in itself a partial refutation of the orthogonality thesis: a refutation in theory and for autonomous and free general superintelligent agents, but not necessarily for imprisoned and tampered-with ones.
Then they are no longer purely descriptive, and I can’t agree that they are logically or empirically true.
Apart from that, what do you think of the other points? If you wish, we could continue a conversation on another online medium.
Certainly, but I don’t have much time for the next few weeks :-(
Send me a message in mid-April if you’re still interested!
I think that this is an important point: the previously argued normative badness of directly accessible bad conscious experiences is not absolute and definitive in terms of justifying actions. It should weigh on the scale with all the other factors involved, even indirect and instrumental ones that could affect intrinsic goodness or badness only in a distant and unclear way.
Right now I see this as perhaps the most challenging and serious form of moral realism, so I definitely intend to take time and care to study it. I’ll have to get back to you, as I think I said I would before.
I think that it is a worthy use of time, and I applaud your rational attitude of trying to refute your own theories. I also like to do that, in order to evolve them and discard the wrong parts.
Don’t hesitate to bring up specific parts for debate.
I don’t directly apprehend anything as being “good” or “bad” in the moral realist sense, and I don’t count other people’s accounts of directly apprehending such things as evidence (especially since schizophrenics and theists exist).
Conscious perceptions are quite direct and simple. Do you feel, for example, a bad feeling like intense pain as being a bad occurrence (which, like all occurrences in the universe, is physical), and likewise, for example, a good feeling like a delicious taste as being a good occurrence?
I argue that these are perceived with the highest degree of certainty of all things and are the only things that can be ultimately linked to direct good and bad value.
No, though I admit it has felt like that for me at some points in my life. Even if I did, there are a bunch of reasons why I would not trust that intuition.
I like certain things and dislike certain things, and in a certain sense I would be mistaken if I were doing things that reliably caused me pain. That certain sense is that if I were better informed I would not take that action. If, however, I liked pain, I would still take that action, and so I would not be mistaken. I could go through the same process to explain why a sadist is not mistaken.
I do not know what else to say except that this is just an appeal to intuition, and that specific intuitions are worthless unless they are proven to reliably point towards the truth.
Liking pain seems impossible, as it is an aversive feeling. However, for some people, some types of pain or self-harm provide a distraction from underlying emotional pain, which is felt as good or relieving, or they may provide some thrill. In these cases it seems that it is always pain plus some associated good feeling, or some relief of an underlying bad feeling, and it is for the good feeling or relief that they want the pain, rather than the pain for itself.
Conscious perceptions in themselves seem to be what is most certain in terms of truth. The things they represent, such as the physical world, may be illusions, but one cannot doubt feeling the illusions themselves.
Let’s play the Monday-Tuesday game. On Monday I like pain. On Tuesday I like some associated good feeling that pain provides. What’s the difference between Monday and Tuesday?
The idea that one can like pain in itself is not substantiated by evidence. Masochists and self-harmers seek some pleasure or relief they get from pain or humiliation, not pain for itself. They won’t stick their hands in a pot of boiling water.
http://en.wikipedia.org/wiki/Sadomasochism http://en.wikipedia.org/wiki/Self-harm
To follow that line of reasoning, please provide evidence that anyone exists who enjoys pain in itself. I find that unbelievable, as pain is aversive by nature.
This is not how you play the Monday-Tuesday game! Also, a request to play the Monday-Tuesday game isn’t an argument, it’s a request for clarification. Specifically, I’m asking you to clarify what the difference between two statements is. Maybe we should try a simpler example:
On Monday I like ice cream. On Tuesday I like some associated good feeling that ice cream provides. What’s the difference between Monday and Tuesday?
Who cares about that silly game. Whether to play it or not is my choice.
You can only validly like ice cream by way of feelings, because all that you have direct access to in this universe is consciousness. The difference between Monday and Tuesday in your example is only in the nature of the feelings involved. In the pain example, it is liked by virtue of the association with other good feelings, not pain in itself. If a person somehow loses the associated good feelings, certain painful stimuli cease to be desirable.
If a person somehow loses the associated good feelings, ice cream also ceases to be desirable. I still don’t see the difference between Monday and Tuesday.
I think I might have some idea what you mean about masochists not liking pain. Let me tell a different story, and you can tell me whether you agree...
Masochists like pain, but only in very specific environments, such as roleplaying fantasies. Within that environment, masochists like pain because of how it affects the overall experience of the fantasy. Outside that environment, masochists are just as pain-averse as the rest of the world.
Does that story jibe with your understanding?
Yes, that is correct. I’m glad a Less Wronger finally understood.
Yes, in the same way that explaining your ideas well or poorly is your choice, but I don’t see what this has to do with explaining the difference between liking X and liking associated good feelings that X provides.