I just thought of another, larger and more unsettling problem. It’s kind of hard for me to explain, but I’ll try.
If the following statements are true:
The only reason we need pain is to notify us of damage to ourselves or to things that matter to us.
The only reason we need fear is to motivate us to avoid things that could cause damage to ourselves or things that matter to us.
The only reason we need happiness or pleasure is so that we are motivated to seek out things that would help us or things that matter to us.
The only reason we need beliefs is to predict reality.
Then I am extremely concerned that the answers to the following questions might, by their very nature, doom the continued, dynamic existence of sentient life:
1. What would life be like for sentient beings such as ourselves if we either eliminated damage to ourselves and the things that matter to us, or minimized that damage to the point where it was insignificant to our overall well-being, and could therefore be mostly ignored if we so chose, dealt with only enough to keep it from becoming significant? In other words, what if we eliminated the need for pain? This was the question discussed in the article above.
2. What would life be like for sentient beings such as ourselves if we neutralized all threats to our survival and health, and also eliminated all of the reasons we might misjudge something as such a threat? Or at least minimized these threats and misjudgements to the point where they were insignificant to our overall well-being and could be mostly ignored if we chose, dealt with only enough to keep them from becoming significant? In other words, what if we eliminated the need for fear?
3. What would life be like for sentient beings such as ourselves if the health and safety of every individual member of sentient species such as ourselves, and the sustainability of that health and safety, were maximized, to the point that we never needed to seek out things that help us or the things that matter to us? Or at least that the need for such help was minimized to the point of insignificance to our overall well-being, and could therefore be mostly ignored if we so chose, dealt with only enough to keep it from becoming significant? In other words, what if we eliminated the need for happiness?
Note: I did notice that our very definition of “human health” and “overall well-being” includes happiness, or perhaps average happiness. If you can’t feel happiness, then we say you’re not mentally healthy. I think this neglects the problem that we need happiness for a reason; it exists in the context of an environment where we need to seek out stimuli that help us, or at least stimuli that would probably have helped us in the ancestral environment. If we improve the capabilities of our own brains and bodies enough, eventually we will no longer need to rely on each other or on tools outside our own bodies and brains to compensate for our individual weaknesses. Which brings me to the fourth question.
4. What if our mental models of reality became so accurate that they were identical, or nearly identical, to reality, to the point where the only difference between reality and our models of it was just barely more than the time it takes us to receive sensory information? Could a human mind become a highly realistic simulation of the universe merely by learning how to increase its own mental capacity enough and systematically eliminating all false models of the universe? And in that case, how can we know that our own universe is not such a simulation? If it is, if our universe is a map of another universe, is it a perfect map? Or is there a small amount of error, even inconsistency, in our own universe which would not exist in the original?
I recently learned in a neuroscience class that thinking is by definition a problem-solving tool: a means to identify a path of causality from a current, less desirable state to a more desirable goal state. At least that’s what I think it said. If we reached all possible goals, and ran out of possible goals to strive for, what do we do then? Generate a new virtual reality in which there are more possible goals to reach? Or stop thinking altogether? Something about both of those options doesn’t sound right for some reason.
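To make that definition concrete, here is a minimal sketch (the states and transitions are invented for the example, not taken from the class or the article) of “identifying a path of causality from a current, less desirable state to a more desirable goal state” as a simple search problem, and of what happens once the goal state is already reached:

```python
from collections import deque

# Hypothetical toy model of "paths of causality": each state maps to the
# states you could reach from it by taking some action. These states are
# invented purely for illustration.
TRANSITIONS = {
    "hungry": ["has ingredients", "at restaurant"],
    "has ingredients": ["meal cooked"],
    "at restaurant": ["meal ordered"],
    "meal cooked": ["fed"],
    "meal ordered": ["fed"],
}

def find_path(current, goal):
    """Breadth-first search for a chain of states leading from `current` to `goal`."""
    frontier = deque([[current]])  # each entry is a candidate path starting at the current state
    visited = {current}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path  # a "path of causality" to the goal state
        for nxt in TRANSITIONS.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # the goal state is unreachable from here

print(find_path("hungry", "fed"))
# -> ['hungry', 'has ingredients', 'meal cooked', 'fed']

# If the current state already is the goal state, the search returns at once:
# the analogue of having nothing left to think about.
print(find_path("fed", "fed"))
# -> ['fed']
```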
I know it says on this very site that perfectionism is one of the twelve virtues of rationality, but then it says that the goal of perfection is impossible to reach. That doesn’t make sense to me. If the goal you are trying to reach is unattainable, then why attempt to attain it? Because the amount of effort you expend towards the unattainable goal of perfection lets you reach better goal states than you would otherwise reach if you did not expend that much effort? But what if we found a way to make the amount of effort spent equal to, or at least roughly proportional to, the actual desirability of the goal state that effort lets you reach?

These questions are really bothering me.
The only reason we need happiness or pleasure is so that we are motivated to seek out things that would help us or things that matter to us.
That may be the only reason we evolved happiness or pleasure, but we don’t have to care about what evolution optimized for when designing a utopia. We’re allowed to value happiness for its own sake. See Adaptation-Executers, not Fitness-Maximizers (http://lesswrong.com/lw/l0/adaptationexecuters_not_fitnessmaximizers/).
If we reached all possible goals, and ran out of possible goals to strive for, what do we do then?
Worthwhile goals are finite, so it’s true we might run out of goals someday, and from then on be bored. But it doesn’t frighten me too much because:
We’re not going to run out of goals as soon as we create an AI that can achieve them for us; we can always tell it to let us solve some things on our own, if it’s more fun that way.
The space of worthwhile goals is still ridiculously big. To live a life where I accomplish literally everything I want to accomplish is good enough for me, even if that life can’t be literally infinite.* Plus, I’m somewhat open to the idea of deleting memories/experience in order to experience the same thing again.
There are other fun things to do that don’t involve achieving goals, and that aren’t used up when you do them.
*Actually, I am a little worried about a situation where the stronger and more competent I get, the quicker I run out of life to live… but I’m sure we’ll work that out somehow.
I know it says on this very site that perfectionism is one of the twelve virtues of rationality, but then it says that the goal of perfection is impossible to reach. That doesn’t make sense to me. If the goal you are trying to reach is unattainable, then why attempt to attain it?
I guess technically the real goal is to be “close to perfection”, as close as possible. We pretend that the goal is “perfection” for ease of communication, and because (as imperfect humans) we can sometimes trick ourselves into achieving more by setting our goals higher than what’s really possible.