It must become so important to you that if you fail or die in the service of that cause, it is okay, so long as it advances that cause.
Why?
I feel like if I had to choose between

1. treating my foundational emotional needs as intrinsically important and living a happy and long life, versus
2. picking a cause that might get me killed and treating my foundational emotional needs as things that unfortunately need to be satisfied so that they don’t get in the way,

then #1 seems greatly preferable.
I’m not saying that I’d never be willing to die for a cause. Sure, if my death could somehow make the difference between us getting an AI that creates a utopia vs. an AI that just kills everyone, then I’ll consent to being killed. But even then I wouldn’t say that it’s “okay”, I’d say that it’s a terrible tragedy that someone needs to die for it and an option that I’d only pick because it was the least terrible option.
If one is actively looking for a cause that would make them willing to die, then that seems worse to me than looking for things that make them love life so much that they’ll want to live forever.
Generally, the more basal purposes override the more recently evolved purposes, which means if your biology wants you to do something and your logic wants you to do something else, while your logic may win in the short term by tapping willpower, biology will eventually wear your logic out and in the long run find a way to win.
Note that this kind of “logic vs. biology” view is generally rejected by evolutionary biologists. Logic is a part of biology, and it’s not a question of our more “primitive” impulses overriding our more recent purposes. Rather both logical and emotional purposes have co-evolved, so that the brain will end up emphasizing logic-based decision-making in situations where it predicts that to be appropriate and non-logic-based decision-making in situations where it predicts that to be appropriate. E.g. Cesario et al. 2020:
The final—and most important—problem with this mistaken view is the implication that anatomical evolution proceeds in the same fashion as geological strata, with new layers added over existing ones. Instead, much evolutionary change consists of transforming existing parts. Bats’ wings are not new appendages; their forelimbs were transformed into wings through several intermediate steps. In the same way, the cortex is not an evolutionary novelty unique to humans, primates, or mammals; all vertebrates possess structures evolutionarily related to our cortex (Fig. 1d). In fact, the cortex may even predate vertebrates (Dugas-Ford, Rowell, & Ragsdale, 2012; Tomer, Denes, Tessmar-Raible, & Arendt, 2010). Researchers studying the evolution of vertebrate brains do debate which parts of the forebrain correspond to which others across vertebrates, but all operate from the premise that all vertebrates possess the same basic brain—and forebrain—regions.
Neurobiologists do not debate whether any cortical regions are evolutionarily newer in some mammals than others. To be clear, even the prefrontal cortex, a region associated with reason and action planning, is not a uniquely human structure. Although there is debate concerning the relative size of the prefrontal cortex in humans compared with nonhuman animals (Passingham & Smaers, 2014; Sherwood, Bauernfeind, Bianchi, Raghanti, & Hof, 2012; Teffer & Semendeferi, 2012), all mammals have a prefrontal cortex.
The notion of layers added to existing structures across evolutionary time as species became more complex is simply incorrect. The misconception stems from the work of Paul MacLean, who in the 1940s began to study the brain region he called the limbic system (MacLean, 1949). MacLean later proposed that humans possess a triune brain consisting of three large divisions that evolved sequentially: The oldest, the “reptilian complex,” controls basic functions such as movement and breathing; next, the limbic system controls emotional responses; and finally, the cerebral cortex controls language and reasoning (MacLean, 1973). MacLean’s ideas were already understood to be incorrect by the time he published his 1990 book (see Reiner, 1990, for a critique of MacLean, 1990). [...]
Framing willpower as long-term planning versus animalistic desires leads to the questionable conclusion that delaying gratification is not something other animals are capable of if other animals lack the evolutionarily newer neural structures required for rational long-term planning. Although certain aspects of willpower may be unique to humans, this framing misses the connection between willpower in humans and decision-making in nonhuman animals. All animals make decisions between actions that involve trade-offs in opportunity costs. In this way, the question of willpower is not “Why do people act sometimes like hedonic animals and sometimes like rational humans?” but instead, “What are the general principles by which animals make decisions about opportunity costs?” (Gintis, 2007; Kurzban, Duckworth, Kable, & Myers, 2013; Monterosso & Luo, 2010).
In evolutionary biology and psychology, life-history theory describes broad principles concerning how all organisms make decisions about trade-offs that are consistent with reproductive success as the sole driver of evolutionary change (Daly & Wilson, 2005; Draper & Harpending, 1982). This approach asks how recurrent challenges adaptively shape decisions regarding opportunity trade-offs. For example, in reliable environments, waiting to eat a second marshmallow is likely to be beneficial. However, in environments in which rewards are uncertain, such as when experimenters are unreliable, eating the single marshmallow right away may be beneficial (Kidd, Palmeri, & Aslin, 2013). Thus, impulsivity can be understood as an adaptive response to the contingencies present in an unstable environment rather than a moral failure in which animalistic drives overwhelm human rationality.
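To make the quoted point about opportunity-cost trade-offs concrete, here is a minimal expected-value sketch of the marshmallow decision. This is my own illustration rather than anything from Cesario et al.; the function name and the single reliability parameter `p_delivery` are assumptions made purely for the example:

```python
# A toy expected-value model of the marshmallow trade-off described above.
# Assumption (mine, not Cesario et al.'s): environmental reliability can be
# summarized as one number, the probability that the promised second
# marshmallow actually arrives.

def best_choice(p_delivery: float) -> str:
    """Return the higher-expected-value action for a given level of reliability."""
    eat_now_value = 1.0            # one marshmallow, received for sure
    wait_value = 2.0 * p_delivery  # two marshmallows, but only if the promise is kept
    return "wait" if wait_value > eat_now_value else "eat now"

print(best_choice(0.9))  # reliable environment  -> "wait"
print(best_choice(0.3))  # unreliable environment -> "eat now"
```

On this toy model, waiting only pays off when the promise is kept more than half the time, which captures the sense in which impulsivity in an unstable environment can be an adaptive response rather than animalistic drives overwhelming rationality.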