I think that this post conflates two issues, and is an example of a flaw of reasoning that goes like this:
Alice: It would be good if we could change [thing X].
Bob: Ah, but if we changed X, then problems A, B, and C would ensue! Therefore, it would not be good if we could change X.
Bob is confusing the desirability of the change with the prudence of the change. Alice isn’t necessarily saying that we should make the change she’s proposing. She’s saying it would be good if we could do so. But Bob immediately jumps to examining what problems would ensue if we changed X, decides that changing X would be imprudent, and concludes from this that it would also be undesirable.
But that last step is entirely groundless. Something could be a bad idea in practice due to implementation difficulties, but very desirable. These are orthogonal considerations. (Another way to think about it is: the consequences of making a change, vs. the consequences of the means used to implement said change.)
I think that Bob’s mistake is rooted in the fact that he is treating Alice’s proposal as, essentially, a wish made to a genie. “Oh great genie,” says Alice, “please make it so that death is no more!” Bob, horrified, stops Alice before she can finish speaking, and shouts “No! Think of all the ways the words of your wish can be twisted! Think of the unintended consequences! You haven’t considered the implications! No, Alice, you must not make such grand wishes of a genie, for they will inevitably go awry.”
The view here on LessWrong, on the other hand, treats Alice’s proposal as an engineering challenge. The conversation in that style goes like this:
Alice: It would be good if we could change [thing X].
Chris: Hm, I concur that this would be good if we could do it. However, consider problem A, which would arise as a result.
Alice: I think that solution J would handle that acceptably.
Chris: That seems reasonable. But, there is also problem B to deal with.
Alice: It may seem like that at first, but actually that won’t be a problem because [reason K].
Chris: Ah, I see. It occurs to me that C will also be problematic.
Alice: Hmm, you’re right. That will be a challenge; I will have to give that some serious thought.
Chris: Please do! It would be very nice if you could solve it, because then we could make change X, which we both agree would be great.
Once you properly distinguish the concepts of desirability and prudence, you can treat problems with your proposal as obstacles to overcome, not reasons not to do it. So a real “defense of deathism” would have to argue that death is desirable; that immortality is not something we would or should want, even if we solved all the auxiliary problems. Otherwise, it fails to engage with the core of the anti-death position.
Exactly. A confusion between terminal values and instrumental values.
Death is a trade-off we are sometimes willing to make, despite it being emotionally difficult. For example, we accept the risk of death in space exploration.
If instead we valued Death as an intrinsic good, we would be making the hard trade-offs in the opposite direction. For example, we would stop the space program, saying: “Sure, never learning about space sucks, and as curious people we are naturally disappointed, but on the other hand, without a space program we are all guaranteed to die when the Sun explodes, which is a great thing! We don’t want to risk anyone surviving (oh, the horror!) just because we were once unable to restrain our curiosity.”
Or, as a more personal example, people would be willing to sell their cars and houses just to increase the hope that their loved ones won’t survive an illness and return to normal life. Doctors would be considered the greatest villains, and hanging someone would be considered a reward for their noble acts.
Not necessarily. We could value both life and death; we might then want to live some number of years (or live long enough to accomplish some amount of “living”, i.e. derive the desired amount of benefit from life), and then die. Dying before getting all the life we want out of life would leave our desire for life unsatisfied, while continuing to live longer than that would stop us from satisfying our desire for death.
A (possibly equivalent) formulation might be to say that we derive diminishing marginal utility from life, such that at some point it is outweighed by the utility we derive from death.
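To make that formulation concrete (a sketch in my own notation, nothing more): let $u_L(t)$ be the marginal utility of the $t$-th year of life, assumed to diminish over time, and let $u_D$ be the marginal utility assigned to dying. The preferred lifespan under this view is simply

$$T^* = \min \{\, t : u_L(t) < u_D \,\}$$

so that before $T^*$ another year of life is worth more than death, and after $T^*$ the desire for death dominates.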
I’m not sure why you think that the post confuses these points.
What I see from these first comments is that a definition of deathism that could be defended is needed. Obvious in retrospect.
One proposal (with hindsight) would be
What does it mean for something to be “beneficial to a society”, apart from any benefits it has to the individuals in that society?
My death would be very bad for me, personally. How does the “benefit to society” of my death cash out as benefit to me?
What does it mean to benefit a person, apart from benefits to the individual cells in that person’s body? I don’t think it’s unreasonable to think of society as having emergent goals, and fulfilling those goals would benefit it.
I actually do think it is unreasonable to take any but the physical stance toward society; the predictive power of taking the intentional stance (or the design stance, for that matter) is just less.
But! We might assume, for the sake of argument, that we can think of society as having emergent goals, goals that do not benefit its members (or do not benefit a majority of its members, or something). In that case, however, my question is:
Why should I care?
Society’s emergent goals can go take a flying leap, as can evolution’s goals, the goals of my genes, the goals of the human species, and any other goals of any other entity that is not me or the people I care about.
Hmm, I’ll have to look into the predictive power thing, and the tradeoff between predictive power and efficiency. I figured viewing society as an organism would drastically improve computational efficiency over trying to reason about and then aggregate individual people’s preferences, so that any drop in predictive power might be worth it. But I’m not sure I’ve seen evidence in either direction; I just assumed it based on analogy and priors.
As for why you should care, I don’t think you should, necessarily, if you don’t already. But I think for a lot of people, serving some kind of emergent structure or higher ideal is an important source of existential fulfillment.
Sorry, when I said “predictive power”, I was actually assuming normalization for efficiency. That is, my claim is that the total predictive capacity you get for your available computational resources is greatest by taking the physical stance in this case.
Oh, and:
It is almost trivial to imagine such a thing. For example, my body may be destroyed utterly in the process of transferring my mind, unharmed, into a new, artificial body, better in every way than my old one. This would be great for me (assuming the new body suited my wants and needs), but bad for the cells making up my existing body.
The core idea here is that I am not my body. I am currently instantiated in my body, but that’s not the same thing. I care about my current instantiation only to the degree that doing so is necessary for me to survive and prosper.
Ah. I’m not sure I agree with you on the nature of the self. What evidence do you have that your mind could be instantiated in a different medium and still lead to the same subjective experience? (Or is subjective experience irrelevant to your definition of self?)
I mean, I don’t necessarily disagree with this kind of dualism; it seems possible, even given what I know about embodied cognition. I just am not sure how it could be tested scientifically.
No direct evidence, just the totality of what we currently know about the mind (i.e. cognitive science). Subjective experience is not irrelevant, though I am still confused about its nature. I don’t, however, have any reason to believe that it’s tied to any particular instantiation.
I don’t think my view can properly be characterized as dualism. I don’t posit any sort of nonmaterial properties of mind, for instance, nor that the mind itself is some nonmaterial substance. Computationalism merely says, essentially, that “the mind is what the brain does”, and that other physical substrates can perform the same computation.
Everything that I know about the idea of embodied cognition leads me to conclude that it is a brand of mysticism. I’ve never heard a cogent argument for why embodiment can’t be simulated on some suitable level.
Hmm, I can see arguments for and against calling computationalism a form of dualism. I don’t think it matters much, so I’ll accept your claim that it’s not.
As for embodied cognition, most of what I know about it comes from reading Lawrence Shapiro’s book Embodied Cognition. I was much less impressed with the field after reading that book, but I do think the general idea is important: that it’s a mistake to think of the mind and body as separate things, and that in order to study cognition we have to take the body into consideration.
I agree that embodiment could be simulated. But I don’t like to make assumptions about how subjective experience works, and for all I know, it arises from some substrates of cognition but not others. Since I think of my subjective experience as an essential part of my self, this seems important.
I agree. I don’t think embodiment is irrelevant; my own field (human-computer interaction) takes embodiment quite seriously — it’s an absolutely integral factor in natural user interface design, for example.
I just don’t think embodiment is in any way magic, the way that the embodied cognition people seem to think and imply. If you can simulate a human and their environment on any level you like, then embodiment stops being an issue. It seems like we don’t actually disagree on this.
This is certainly not impossible, but it’s not clear to me why you couldn’t then simulate the substrate at a sufficiently low level to capture whatever aspect of it is responsible for enabling cognition. After all, we could in principle simulate the entire universe down to quantum configuration distributions, right?
If you wanted to make a weaker claim based on computational tractability, then that would of course be another thing.
I concur with this. To the extent that I have any kind of a handle on what subjective experience even is, it does seem quite important.
P.S.
Yeah, this is probably a question of preferred terminology and I am not inclined to argue about it too much; I just wanted to clarify my actual views.
Sure you can (and have to) take the body with you into the simulation—but then the (biological) rules governing the body still apply. You may have more control over them though.
It doesn’t.
Except in the same way as altruism does.
Ok. So, my death benefits me insofar as I care about other people (altruism) and my death benefits those people.
Obvious next question: how does my death benefit the people I care about?
Roughly delineated, I care about my family and friends immensely; casual acquaintances / colleagues / peers / etc. a good amount; and other people roughly to the degree that they share my culture (variously defined).
Thus, for death to benefit me via altruism, it would have to benefit my family and friends a lot (to offset the great loss they would feel at my death); and/or benefit casual acquaintances / colleagues / peers a pretty large amount (to compensate for the discount factor of how much I care about them); and/or benefit various other people in the world an almost astronomical total amount (ditto).
Does my death in fact do this? I can imagine certain specific scenarios where one or more of these things is the case, such as if my death serves to save my family, or save the world from destruction. However, death in common circumstances does not seem to fit the aforementioned criteria for me to judge it a net positive.
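To spell out the weighing above as a rough inequality (my notation, purely illustrative): if $w_g$ is how much I care about group $g$, $B_g$ the benefit my death would confer on that group, and $L$ the weighted loss it imposes (on me and on those who would grieve), then death comes out as a net positive via altruism only if

$$\sum_g w_g B_g > L.$$

Since $w_g$ is tiny for distant strangers, their $B_g$ would have to be nearly astronomical for the sum to clear that bar, which is the point made above.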
If you have children who in turn plan on having children, then you might be inclined to die eventually in order to secure more resources for your progeny. Similarly if you have close friends with children.
Of course, I’d prefer to incentivize having fewer children and make it harder to have children as a means to control population. If death were necessary, I’d recommend a lottery (assuming there are insufficient volunteers).
Directly? It doesn’t. It only benefits society at large, and even then only insofar as it confers a fitness advantage, the same way sexual reproduction does. The obvious difference is that sexual mechanisms are visible as motivations at the conscious level, whereas aging mechanisms stop at the biological level. But denying its operation will not work, just as an AI couldn’t deny the operation of its hardware (at least not without incurring comparable disadvantages).
Once again, what does it mean for something to benefit society, apart from any benefit to individuals in that society?
If something doesn’t benefit any of the people I care about directly, at all, then how can it benefit society, which is made up of those people?
Are you familiar with the concept of the selfish gene?
The problem of “old people will be close-minded and it will be harder for new ideas to gain a foothold” seems pretty inherent in abolishing death, and not just an implementation detail we can work around.
I think that the closed-mindedness of elderly people is more likely cultural than a biological fact of humanity. While the cliche runs that science progresses one death at a time, in my experience, old scientists usually have discarded and continue to discard great numbers of once-popular ideas. Science as a process gives people a mechanism for rejecting old ideas, and on the whole it’s pretty effective.
Lacking effective mechanisms for changing their mind, people in general do not need to get old in order to become closed-minded.
Really? It doesn’t seem to you like the program of studying cognitive biases, and finding ways to overcome them, can have any impact on this? What about the whole “modifying our minds” bit — enhancing our intelligence, and fixing cognitive glitches, in assorted biological and technological ways? That seems like it might have some effect, no?
I think the issue isn’t so much that old people are closed-minded compared to young and middle-aged people, as that young people are very open-minded compared to middle-aged and old people.
Also, an advantage of aging and death is that the people at the top of hierarchies get changed.
Indeed. It seems like our goal should be to optimize the level of open-mindedness in everyone, appropriately for their status in society and other considerations.
Also true; however, this is only an advantage insofar as, without death, the people at the top of hierarchies would otherwise not change. It seems to me that our goal should be to avoid having hierarchies that work in such a fashion. Of course, that is a difficult project, but it’s not obvious to me that it’s an impossible one.
It has an impact. But it doesn’t invalidate the argument; it just shifts the balance a bit.