What is meant by closure? Is it just about being concerned as to what happens to them while they’re frozen? Or does it mean that people want their loved ones to die at some point?
The average human requires some form of mental “Ending” to their inner narrative’s “Story Of My Relationship With This Person”, AFAICT. Without some mental marker that can be officially labeled in their mind as the “End” of things, the person and their relationship with that person will linger in their subconscious, and they will worry, stress, and otherwise continue agonizing over the subject, much as when waiting to see whether a missing or kidnapped person will eventually come back or turn up dead.
From my viewpoint, the need for “closure” is an extremely selfish desire for some external sign that they are allowed to stop worrying about it. However, for most people, it is a “natural” part of their life and the need for closure is socially accepted and often socially expected. Outside of LW, I would expect that qualifying a need for closure as “selfish” would earn me substantial scorn and negative judgment.
What’s wrong with selfish desires?
After several attempts at providing a direct answer to this, I find that I am currently unable to.
The term “wrong” here confuses me more than anything. What’s the point of the question?
My comment is about how the need for closure is suboptimal for both the individual and the society, and how reality doesn’t necessarily fit with a human’s inner narrative expectations on this subject.
If you’re asking about why generic society would scorn and negatively perceive the notion of it being selfish, it’s because it’s socially accepted and socially expected in most cultures that individuals must not be selfish. The details of that probably belong in a different discussion.
Let me rephrase the question in your terms, then. Why is the need for closure suboptimal? What are you optimizing for?
Consider hunger—the desire to eat. Is it “extremely selfish” and “suboptimal for both the individual and the society”?
Consider the need for solitude. Consider the desire to look pretty. Consider the yearning to be loved. Are they all “extremely selfish” and “suboptimal for both the individual and the society”?
As a desire that causes us to fulfill a necessary condition for survival, as per physics, no. That survival is beneficial, while perhaps not always optimal, is currently the best general rule that I can think of.
The other examples, modulo some signalling and escalation subtleties regarding the “look pretty” case that would require a separate and lengthy discussion, are similar cases in that the desires lead individuals to take actions that are, ceteris paribus, overall beneficial given the current human condition.
Now change a variable: Food is no longer necessary for humans to live. All humans function perfectly well, as if they were eating optimally, without food (maybe they now take energy from waste heat or something, in an entropy-optimal kind of way). In this hypothetical, I would consider the desire to eat very selfish and suboptimal—it consumes resources of all kinds, including time that the individual could be spending on other things!
My assertion is that, on average, the desire for closure is more similar to the second, hypothetical case than to the first.
Corollaries / secondary assertions: the desire is purely emotional; individuals without it usually function better than their counterparts in situations where it is relevant (or at least would in the hypothetical case where there is no social expectation of it); and an individual who does not value conformity to an inner narrative that generates the need for closure is, ceteris paribus, happier and obtains higher expected utility than their counterparts.
You haven’t answered an important question: what are you optimizing for?
In your hypothetical, eating (for pure hedonics) does consume resources, including time, but you have neglected to show that this is not a good use of those resources. Yes, they could be spent on other things, but why are those other things more valuable than the hedonics of eating?
What is the yardstick that you apply to outcomes to determine whether they are suboptimal or not?
8-0 That’s an unexpected approach. Are you pointing out the “purely emotional” part in a derogatory sense? Is having emotional desires, err… suboptimal?
What do you mean by individuals without such emotional desires functioning “better”? Are emotions a crippling disability?
I am comparing across utility systems, so my best yardsticks are intuition and a vague idea of strength of hedons + psychological utilon estimates as my best approximation of per-person-utility.
I do realize this makes little formal sense, considering that the problem of comparing different utility functions with different units is completely unresolved, but it’s not like we can’t throw balls just because we don’t understand physics.
So what I’m really optimizing for is a weighted or normalized “evaluation”, on the theoretical assumption that this is possible across all relevant variants of humans, of any given human’s utility function. Naturally, the optimization target is the highest possible value.
It’s with that in mind that I consider the case of two MWI-like branches of the same person: one where this person spontaneously develops a need for closure, and one where that doesn’t happen. Trying to visualize in as much detail as possible the actions and stream of consciousness of both, side by side, I can only imagine the person without a need for closure being “better off” in a selfish sense; and if these individuals’ utility functions care about what they do for, or cost to, society, this compounds into an even greater difference in favor of the branch without the need for closure.
This exercise can be extended (and yesterday I mentally did extend it) to the four-branch example of hunger and the need for food, covering all binary combinations. It seems to me that clearly the hungerless, food-need-less person ought to be better off and obtain higher values on their utility function, ceteris paribus.
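The four-branch exercise above can be sketched as a toy calculation. Every number here is a made-up assumption, and `branch_utility` is a hypothetical scoring function rather than a real measure of anyone’s utility; the sketch only shows why hunger comes out net-positive while food is necessary, and why both desires look like pure costs once it isn’t.

```python
from itertools import product

# Toy model of the four-branch comparison. All numbers are illustrative
# assumptions (arbitrary utility units), not measurements; only the
# structure of the ceteris-paribus comparison matters.
COST = {"hunger": 2.0, "closure": 1.5}  # ongoing cost of servicing each desire
SURVIVAL_BENEFIT = 10.0                  # value of the eating that hunger motivates

def branch_utility(has_hunger, has_closure, food_needed=True, baseline=100.0):
    u = baseline
    if has_hunger:
        u -= COST["hunger"]
        if food_needed:
            u += SURVIVAL_BENEFIT   # the desire pays for itself by driving survival
    elif food_needed:
        u -= SURVIVAL_BENEFIT       # no drive to eat while eating is still necessary
    if has_closure:
        u -= COST["closure"]        # closure never buys survival in this model
    return u

# In the food-is-unnecessary hypothetical, the branch with neither desire wins.
branches = {(h, c): branch_utility(h, c, food_needed=False)
            for h, c in product([True, False], repeat=2)}
best = max(branches, key=branches.get)
```

Under these assumed numbers, `best` is the branch with neither hunger nor the need for closure, matching the ceteris-paribus claim; flipping `food_needed` to `True` makes the hunger branch come out ahead instead.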
Um. Intuition is often used as a fancy word for “I ain’t got no arguments but I got an opinion”. Effectively you are talking about your n=1 personal likes and dislikes. This is fine, but I don’t know why you want to generalize on that basis.
Let’s extend that line of imagination a bit further. It seems to me that this leads to the claim that the fewer needs and desires you have, the more “optimal” you will be, in the sense of obtaining “higher values on [the] utility function”. In the end, someone with no needs or desires at all would score the highest utility.
That doesn’t look reasonable to me.
I would say that for practical purposes, we could distinguish “selfish” desires from simple “desires,” as being ones which place an inappropriate degree of burden on other people. After all, in general usage, we tend to use selfish to mean “privileging oneself over others to an inappropriate degree,” not “concerned with oneself at all.”
In that context, “I need you to definitely stay dead forever so I can stop worrying about it,” seems like a good example of a selfish desire, and rather more like something one would apply to a comic book archenemy than a loved one.
What? Why isn’t it more like: “It’s very probable that you’ll stay dead forever, so I’d better stop worrying about it and move on, because if I don’t, it’ll likely screw up my very probably finite, only life.”
If the person takes the burden on themselves to stop worrying about their loved ones who pursue cryonics, that would be a better description. I was trying to encapsulate the scenario under discussion of people who resist letting their loved ones pursue cryonics because it interferes with their sense of closure.
That indeed would be so incredibly selfish that I was blind to the possibility until you explicitly pointed it out now.
This definition turns on the word “inappropriate” which is a weasel word and can mean everything (and nothing) under the sun. How can one be so selfish as to order a Starbucks latte when there are hungry children in Mozambique?
Doesn’t look nice, but then most things dialed to 11 don’t look nice.
Let’s look at analogous realistic examples. Let’s say there is a couple, one spouse gets into a car accident and becomes a vegetable. He’s alive and can be kept alive (on respirators, etc.) for a long time, but his mind is either no longer there or walled off. What do you think is the properly ethical, appropriately non-selfish behavior for the other spouse?
The example I gave is not just realistic but real, if, as posited upthread, people are resisting having their loved ones pursue cryonics because it denies them a sense of closure.
What does or does not qualify as an inappropriate level of self-privilege is of course subject to debate, but when framed in those terms I think such a position would be widely agreed to be beyond it.
Well, one person. And not “resist”, but “highly uncomfortable with”. And “may (tentatively) be part of the underlying objection”. You are adding lots of certainty which is entirely absent from the OP.
I am still interested in your normative position, though. So let’s get back to cryonics. Alice and Bob are a monogamous pair. Bob dies and is cryopreserved. Alice is monogamous by nature and young; she feels it’s possible that Bob could be successfully thawed during her lifetime.
What, in your opinion, is the ethical thing for Alice to do? Is it OK for her to remarry?
Use some clever rationalization and remarry. More rationally, she should be aware that the probability of Bob being resurrected during her lifetime is pretty low.
I don’t think that’s enough information for me to return a single specific piece of advice. What does Alice think Bob would think of her getting married in his absence were he to be brought back at a later date? How likely does she think it is that he’d be brought back in her lifetime? Does she think that she’d still want to be in a relationship with him if she waited and he was brought back after, say, forty years? Etc.
There are certainly trends in relationship behavior which can constitute actionable information, but I think the solution to any specific relationship problem is likely to be idiosyncratic.
Bob also was monogamous. Alice is pretty sure Bob wouldn’t like it.
Alice is uncertain. She thinks it’s possible, she is not sure how likely it is.
She has no idea what she’ll want in 40 years.
So, are there any general guidelines, do you think? Remember, the claim we are talking about is that the desire for closure is extremely selfish and “suboptimal”.
Well, I suspect that anyone preserved with current technology is probably not coming back, though this may not be the case for people preserved in the future with different technological resources. So as a guideline, I’d suggest that both those slated to be cryonically preserved and their survivors treat the procedure as offering a low probability of suspended animation rather than permanent death.
How to deal with that situation is up to individual values, but I think that for most people, refusing to seek another partner would result in an expected decrease in future happiness.
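The “expected decrease in future happiness” above can be put in rough numbers. Every probability and utility in this sketch is an invented placeholder, and `expected_utility` is a hypothetical helper, not anything from the thread; it only illustrates how a low revival probability dominates the comparison.

```python
# Toy expected-utility framing of Alice's decision. All numbers below are
# made-up placeholders in arbitrary units.
p_revival = 0.05  # assumed chance Bob is revived within Alice's lifetime

U = {  # assumed utilities per (choice, outcome) pair
    ("remarry", "revived"): 40,   # awkward reunion, but a life lived meanwhile
    ("remarry", "not"):     80,   # a new partnership
    ("wait",    "revived"): 100,  # reunion after faithful waiting
    ("wait",    "not"):     10,   # decades alone for nothing
}

def expected_utility(choice, p=p_revival):
    # Standard expected-value calculation over the two outcomes.
    return p * U[(choice, "revived")] + (1 - p) * U[(choice, "not")]
```

With these placeholder numbers, remarrying has the higher expected utility unless the revival probability rises above roughly 0.54; the point is not the numbers but that refusing to seek another partner is a bet whose odds can at least be stated.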
Not this again… The guy’s definition of selfishness is probably narrower than yours, so start the spat from there :)
I’d search a thread where I actually defended the “selfishness is wrong” position, but I can’t find it, because the search function doesn’t seem to work anymore.
Given “the need for “closure” is an extremely selfish desire” it doesn’t look like his definition is improbably narrow :-/
Also note that we are talking about desires, not actions.
Fair enough. I suppose desires often lead to actions, so sometimes it’s good to do some pre-emptive soul searching.
I suspect selfish desire here refers to the fact that if people just move on, they’re less likely to care about longevity research, cryonics and A.I. Do you find that acceptable?
More, I find it desirable. I suspect I would find it hard to be sympathetic to a morality which would consider some increased caring for longevity research and cryonics more valuable than not screwing up the rest of your life.
I agree. Besides, I can both move on, and care about those things. Added A.I. to the bunch.