“And yet… and yet…” said I to my Teacher, when all the shapes and the singing had passed some distance away into the forest, “even now I am not quite sure. Is it really tolerable that she should be untouched by his misery, even his self-made misery?”
“Would you rather he still had the power of tormenting her? He did it many a day and many a year in their earthly life.”
“Well, no. I suppose I don’t want that.”
“What then?”
“I hardly know, Sir. What some people say on Earth is that the final loss of one’s soul gives the lie to all the joy of those who are saved.”
“Ye see it does not.”
“I feel in a way that it ought to.”
“That sounds very merciful, but see what lurks behind it.”
“What?”
“The demand of the loveless and the self-imprisoned that they should be allowed to blackmail the universe: that till they consent to be happy (on their own terms) no one else shall taste joy: that theirs should be the final power; that Hell should be able to veto Heaven.”
“I don’t know what I want, Sir.”
This dialogue follows the most compelling (to me) scene in C. S. Lewis’s “The Great Divorce”. A saved woman is trying to coax a man she knew in life to join her in Heaven while the narrator and his guide look on. She clearly acts in such a way as to reveal a preference that the man join her. But nothing he does, not even remaining in Hell for all eternity, makes a bit of difference to her emotional state.
Do I want her miserable? No. Do I think she cares, really cares about the man she’s trying to help? Well… no. I don’t think that’s what “care” means; she lacks empathy for him. I recently acted in such a way as to get myself a baked potato. I don’t really care, in the deep and meaningful way I care about other people, about having gotten a baked potato—and I’m not even devoid of potato-related emotional feelings: I would have been disappointed if it had caught fire, and I was pleased when it turned out nicely.
Do I like being sad when my friends are sad? Well, no, not really; I don’t have sadness-asymbolia. Would I rather not be sad when my friends are sad? Do I want to deny them that power, as C. S. Lewis suggests would be only just? No! I don’t want to go around helping people just because this is written somewhere on my abstract list of preferences, acting in numb glee and feeling nothing that responds to my environment.
I don’t know what I want, Sir.
Your comment has frightened me, confused me, and made me think. Thanks.
You are most welcome.
In numb glee I suspect you wouldn’t act at all, or have preferences in any meaningful sense.
From a very scattered and informal study of the modern concept of the Christian god, it seems to me that He’s up to something like this:
1) Fabricate or otherwise acquire a large batch of souls for some unknown larger purpose.
2) Realize the manufacturing process may be flawed or contaminated somehow.
3) Set up a procedurally-generated test environment (aka observable reality) for the souls, complete with self-replicating interface shells (aka human bodies).
4) Set up “good enough,” “repairable,” and “reject” bins, labeled heaven, purgatory, and hell respectively; souls in the first and third bins get put into stasis by what amounts for all practical purposes to sensory deprivation. Sit back and watch the test process run.
5) Double-check the specs for the unknown larger purpose and the pass/fail rate for the already-sorted souls; realize that tolerances have been set way too strict. Possibly also some sort of problem with other gods sneaking in and stealing the goods? Unclear.
6) Set up a temporary avatar in the test environment (aka Jesus) to announce the new, lower standard, since it’s qualitatively rather than quantitatively different, and yet-unsorted souls can partially reconfigure themselves to adapt.
7) Eventually, the full batch will have been incarnated and the test environment will go through an elaborate self-destruct sequence.
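Read as an ordinary QA process, the whole scheme is just a sorting loop whose pass criterion gets swapped mid-batch. Here is a facetious sketch in Python; every name, threshold, and number in it is invented for illustration:

```python
# Facetious sketch of the QA pipeline described above; all names,
# thresholds, and the batch size are invented for illustration.
import random

def old_spec(soul: dict) -> bool:
    # Step 5: tolerances set way too strict -- almost nothing passes.
    return soul["quality"] >= 0.99

def new_spec(soul: dict) -> bool:
    # Step 6: a qualitatively different standard, announced mid-batch,
    # that still-unsorted souls can partially adapt to.
    return soul["accepts_offer"]

# Step 1: a large batch of souls; step 2: the process is noisy.
batch = [{"quality": random.random(), "accepts_offer": random.random() < 0.5}
         for _ in range(1000)]

# Step 4: the three bins.
bins = {"heaven": [], "purgatory": [], "hell": []}

for i, soul in enumerate(batch):
    spec = old_spec if i < 500 else new_spec   # the avatar arrives mid-run
    if spec(soul):
        bins["heaven"].append(soul)            # "good enough"
    elif soul["quality"] >= 0.5:
        bins["purgatory"].append(soul)         # "repairable"
    else:
        bins["hell"].append(soul)              # "reject"

# Step 7: end of batch; the test environment would self-destruct here.
print({name: len(b) for name, b in bins.items()})
```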
Sure. “Acting in numb glee and feeling nothing that responds to one’s environment” is rather far away from what I was advocating, though. Quite the opposite: at best, this is about fully embracing pretty much all of one’s emotions. (Possibly excluding a few that seem purely harmful to me, though that’s everyone’s own decision.)
Frankly, I’ve always found this story one of Lewis’s sickest, most disgusting, and most unethical—and that’s for an author who had many moments that come across as sick, disgusting, and unethical to many readers.
When you share a bond of emotional contingency with someone, it sometimes happens that features of their style of living are so incompatible with yours as to destroy more of your own personal utility than the bond can generate. It’s a nasty situation, which we often adapt to by laboriously self-modifying the bond away. Colloquially, this is called “getting over someone”.
It’s quite a reasonable response—but it’s also a voluntary one. I’m considerably less thrilled by Lewis including it as part of the salvation package by default. That seems—well, manipulative is one word for it, but convenient might be an even better one. It’s as if he’s resolved a conflict between human emotion and his religious beliefs by declaring that the conflict magically won’t exist in any sense that matters long-term.
Of course, that’s not much comfort to the living people whose loved ones he’s implicitly condemned to Hell.
Agreed. Although it feels to me like there are other appalling things about the situation in the story; I’ll reflect some more and say what those are.
Mathematically speaking, let U1 be the woman’s utility if the man is in Hell, and U2 her utility if the man is in Heaven. What does the story tell us about the values of U1 and U2?
At first sight it says that U2 is greater than U1, because the woman really wants the man to join her; but also that U1 is not less than U2, because she is not sorry that her attempt failed. That is mathematically impossible.
I suppose a Christian reader could suggest that both values U1 and U2 are infinite, because she is in Heaven. So it’s like she was trying to increase U to U+k, because increasing U is the natural thing to do, but it does not matter that she failed, because if U is infinite, then U is not smaller than U+k.
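For what it’s worth, the arithmetic of that reading can be made concrete with floating-point infinity (a toy illustration only, not a claim about how heavenly utilities actually work):

```python
# Toy illustration of the "infinite utility" reading: an infinite utility
# absorbs any finite bonus k, so failing to gain k costs her nothing.
U1 = float("inf")   # her utility if the man stays in Hell
U2 = float("inf")   # her utility if the man joins her in Heaven
k = 1000.0          # any finite bonus for a successful rescue

assert not (U2 > U1)    # she is no worse off when he refuses...
assert U1 + k == U1     # ...because inf + k is still inf
print("With U1 = U2 = infinity, the contradiction dissolves.")
```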
Now I am not sure whether this interpretation means something or whether it is just explaining away. I can’t even imagine very large values of U, let alone infinite ones.
Another explanation could be based on “predestination” at the moment of one’s death. (The story happens in the afterlife.) It was already decided whether the man would choose Heaven or Hell, but until the moment of his choice, nobody else could know the result. So the woman comes with the hope that the man will choose Heaven, but he chooses Hell. She is a perfect rationalist, so she immediately realizes that the uncertainty existed only in her mind; she discards her mental sunk costs, accepts reality, and moves on.
This explanation suggests that she was unable to change his decision; but she still tried to convince him, so why was she trying? Maybe at that moment she wasn’t behaving as a perfect rationalist, and his decision somehow woke her up. (She is in Heaven; perhaps, between rationality and irrationality, she always chooses the variant that makes her happier at the given moment.)
Back on Earth… our empathy motivates us to help our friends. That is why we feel that empathy is morally good. When we realize it is impossible to help our friends, it would be rational to lose the empathy. This goes against our intuition because empathy does not work that way, and because in most situations there is something we can do to help. (Even if our friends have an incurable illness, we can still increase their utility by talking to them.)
I think this mostly tells us that your model doesn’t actually model humans very well.
A simple explanation is that there’s a system in her brain that guides her actions toward making the man join her, but the success or failure of this system doesn’t affect her emotional state.
Oh yes, “adaptation executers vs utility maximizers”.
So she has followed the algorithm: “if there is a chance to help, try to help / if there isn’t a chance to help, ignore”.
And the creepy part was how she knew perfectly which situation she was in, and how quickly she adapted.
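That decoupling is easy to sketch: one subsystem picks actions based on whether help is still possible, while the felt state never reads the outcome. A minimal toy version, with every name hypothetical:

```python
# Toy sketch of the decoupling: the policy tracks the situation, while the
# emotional state is a constant that never reads the policy's outcome.
class SavedSoul:
    def __init__(self) -> None:
        self.mood = "joy"               # never updated by any outcome

    def act(self, can_still_help: bool) -> str:
        # "If there is a chance to help, try to help; otherwise, ignore."
        if can_still_help:
            return "plead with him to choose Heaven"
        return "walk away singing"

    def feel(self) -> str:
        return self.mood                # the same, success or failure


soul = SavedSoul()
print(soul.act(can_still_help=True))    # while persuasion might still work
print(soul.act(can_still_help=False))   # after he has chosen Hell
print(soul.feel())                      # "joy", either way
```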
Ceteris paribus, I would prefer not to be sad when my friends are sad. But this is incompatible with empathy—I use my sadness to model theirs. I can’t imagine “loving” someone while trying not to understand them.