Is the value found in the conscious experiences that happen to correlate with the activities mentioned, or are the activities themselves valuable because we happen to like them? If the former, Jonatas’ point should apply. If the latter, then anything can be a value: you just need to design a mind to like it. Am I the only one who is bothered by the fact that we could find value in anything if we follow the procedure outlined above?
How about we play a different “game”? Instead of starting with the arbitrary likings evolution has equipped us with, we could simply ask what action-guiding principles produce a state of the world that is optimal for conscious beings, since beings with a first-person perspective are the only entities for which states can objectively be good or bad. If we accept this axiom (or if we presuppose, even within error theory, a fundamental meta-utility function stating something like “I terminally care about others”), we can reason about ethics in a much more elegant and non-arbitrary way.
I don’t know whether not experiencing joys in Brazil (or whatever activities humans tend to favor) is bad for a being blissed out in the experience machine; at least it doesn’t seem so to me! What I do know for sure is that there is something bad, i.e. worth preventing, in a consciousness-moment that wants its experiential content to be different.