At the point where you called some values “errors” without defining their truth conditions, I assumed this wasn’t going to be any good and stopped reading.
Being open to criticism is very important, and the bias to dismiss it should be resisted. Perhaps I defined the truth conditions later on (see below).
“There is a difference between valid and invalid human values, which is the ground of justification for moral realism: valid values have an epistemological justification, while invalid ones are based on arbitrary choice or intuition. The epistemological justification of valid values occurs by that part of our experiences which has a direct certainty, as opposed to indirect: conscious experiences in themselves.”
I find your texts here on ethics incomplete and poor (for instance, this one shows a lack of understanding of the topic and is naive). I dare you to defend and justify a value that cannot be reduced to good and bad feelings.
See here.
I read that and similar articles. I deliberately didn’t say pleasure or happiness, but “reduced to good and bad feelings”, including other feelings that might be deemed good, such as love, curiosity, self-esteem, meaningfulness..., and covering both the present and the future. The part about the future includes any instrumental actions taken in the present with the intention of obtaining good feelings in the future, for oneself or for others.
This should cover visiting Costa Rica, having good sex, and helping loved ones succeed, which are the examples given in that essay against the simple example of Nozick’s experience machine. The experience machine is intuitively deemed bad because it precludes acting to instrumentally increase good feelings and prevent bad feelings in the future, for oneself or for others, and because good feelings are not all about pleasure. Pleasure is only a narrow part of the whole spectrum of good experiences one can have; by precluding the many others mentioned, the machine becomes aversive.
The part about wanting and liking is of neurological interest and has been well researched. It is not relevant to this question, because values need not correspond with wanting; they can correspond with liking alone. Immediate liking is value; wanting is often mistaken. We want things for evolutionary or cultural reasons, even when they are not good for us. Wanting is like an empty promise, while liking can be empirically and directly verified to be good.
Any valid values reduce to good and bad feelings, for oneself or for others, in the present or in the future. This can be said of survival, learning, working, loving, protecting, sight-seeing, etc.
I say it again: I dare Eliezer (or others) to defend and justify a value that cannot be reduced to good and bad feelings.
I want to know more about the future. I do not expect to make much use of the information, and the tiny good feeling I expect to get when I am proven right is far smaller than the good feelings I could get from other uses of my time. My defence of this value as legitimate is that I am quite capable of rational reasoning and of hearing out any and all of your arguments, and yet I am also quite certain that neither you nor others will be able to persuade me to abandon it. No further justification or defence beyond that is necessary or possible, in my opinion.
Is the value found in the conscious experiences, which happen to correlate with the activities mentioned, or are the activities themselves valuable because we happen to like them? If the former, Jonatas’ point should apply. If the latter, then anything can be a value; you just need to design a mind that likes it. Am I the only one who is bothered by the fact that we could find value in anything if we follow the procedure outlined above?
How about we play a different “game”? Instead of starting with the arbitrary likings evolution has equipped us with, we could just ask what action-guiding principles produce a state of the world that is optimal for conscious beings, as beings with a first-person perspective are the only entities for which states can objectively be good or bad. If we accept this axiom (or if we presuppose, even within error theory, a fundamental meta utility function stating something like “I terminally care about others”), we can reason about ethics in a much more elegant and non-arbitrary way.
I don’t know whether not experiencing joys in Brazil (or whatever activities humans tend to favor) is bad for a being blissed out in the experience machine; at least it doesn’t seem so to me! What I do know for sure is that there’s something bad, i.e. worth preventing, in a consciousness-moment that wants its experiential content to be different.