Man is a rational animal—he doesn’t use claws or poison to survive
Why is survival one of your goals (“I want it” is an acceptable answer, but you have to accept that you might only want it due to being misinformed; even if it is probably correct, it is extremely unlikely that all your desires would be retained in your reflective equilibrium)? Is it your only goal? Why?
he uses his brain.
Intelligence may be our comparative advantage over other animals, but we’re not trading with them. Brains are useful because they solve problems, not because they happen to be our species’ specialty.
In order /to/ use his brain most effectively, he has to be able to do certain things—most fundamentally, he has to stay alive
If survival infringes on your other desires it becomes counterproductive. Beware lost purposes. Even if this doesn’t hold, maximizing your probability of survival is not the same as maximizing whatever you actually prefer to maximize. If you only focus on survival, you risk giving up everything (or everything else if you value survival in itself—I don’t think I do, but I’m very uncertain) for a slightly increased lifespan.
Why is survival one of your goals (“I want it” is an acceptable answer, but you have to accept that you might only want it due to being misinformed; even if it is probably correct, it is extremely unlikely that all your desires would be retained in your reflective equilibrium)? Is it your only goal? Why?
At the moment, my primary goal is the continued existence of sapience. Partly it is because purpose and meaning aren’t inherent qualities of anything, but are projected onto things by sapient minds; since I want my existence to have had some meaning, sapients have to continue to exist in order for that to be possible. Or, put another way, for just about /any/ goal I can seriously imagine myself wanting, the continued existence of sapience is a necessary prerequisite.
If survival infringes on your other desires it becomes counterproductive. Beware lost purposes. Even if this doesn’t hold, maximizing your probability of survival is not the same as maximizing whatever you actually prefer to maximize. If you only focus on survival, you risk giving up everything (or everything else if you value survival in itself—I don’t think I do, but I’m very uncertain) for a slightly increased lifespan.
If I seriously come to the conclusion that my continued life has a measurable impact that /reduces/ the probability that sapience will continue to exist in the universe… then I honestly don’t know whether I’d choose personal death. For example, one of the goals I’ve imagined myself working for is “Live forever or die trying”, which, as usual, requires at least some sapience in the universe (if only myself), but… well, it’s a problem I hope never to have to encounter… and, fortunately, at present, I’m trying to use my existence to /increase/ the probability that sapience will continue to exist, so it’s unlikely I’ll ever encounter that particular problem.
Partly it is because purpose and meaning aren’t inherent qualities of anything, but are projected onto things by sapient minds; since I want my existence to have had some meaning, sapients have to continue to exist in order for that to be possible. Or, put another way, for just about /any/ goal I can seriously imagine myself wanting, the continued existence of sapience is a necessary prerequisite.
The two ways of putting it are not equivalent; it is possible for a sapient mind to decide that its purpose is to maximize the number of paperclips in the universe, which can be achieved without its continued existence. You probably realize this already though; the last quoted sentence makes sense.
I’m trying to use my existence to /increase/ the probability that sapience will continue to exist, so it’s unlikely I’ll ever encounter that particular problem.
If you had a chance to perform an action that led to a slight risk to your life but increased the chance of sapience continuing to exist (in such a way as to lower your overall chance of living forever), would you do so? It is usually impossible to perfectly optimize for two different things at once; even if they are mostly unopposed, near the maxima there will be tradeoffs.
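To sketch why (this is just the standard first-order argument, assuming for illustration that the two things being traded off are smooth functions f and g of your choices, with different maximizers; nothing here is specific to your situation): at the point x_f that exactly maximizes f,

\[ \nabla f(x_f) = 0, \qquad \text{while generically } \nabla g(x_f) \neq 0, \]

so a small step of size \(\epsilon\) in the direction of \(\nabla g(x_f)\) changes the two objectives by

\[ f \approx f(x_f) + O(\epsilon^2), \qquad g \approx g(x_f) + \epsilon\,\lVert \nabla g(x_f) \rVert^2. \]

Near either maximum you can buy a first-order gain in the other objective for only a second-order loss in the one you are maximizing exactly, which is where the tradeoffs bite.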
If you had a chance to perform an action that led to a slight risk to your life but increased the chance of sapience continuing to exist (in such a way as to lower your overall chance of living forever), would you do so?
A good question.
I have at least one datum suggesting that the answer, for me in particular, is ‘yes’. I currently believe that what’s generally called ‘free speech’ is a strong supporting factor, if not a necessary prerequisite, for developing the science we need to ensure sapience’s survival. Last year there was an event, ‘Draw Muhammad Day’, to promote free speech; before it actually happened, there was a non-zero probability that anyone participating would receive threats and potentially even face violence from certain extremists. While that was still the calculation, I joined in. (I did get my very first death threats in response, but nothing came of them.)
You have evidence that you do, in fact, take such risks, but, unless you have considered the issue very carefully, you don’t know whether you really want to do so. Section 1 of Yvain’s consequentialism FAQ covers the concept of not knowing, and then determining, what you really want. (The rest of the FAQ is also good but not directly relevant to this discussion, and I think you might disagree with much of it.)