I was assuming that “feeling bad when bad things happen to someone” is part of the definition of what it means to care about someone. And I’m naturally reluctant to choose to not care.
You still have preferences in addition to emotions. Say you have a strong preference for bad things not to happen to someone. Then you do whatever you can to prevent bad things from happening to them, and if something bad does happen to them, you help them out to the best of your ability. In my book, that counts as caring about someone. Not caring would mean that you didn’t do anything to stop them from experiencing bad stuff, nor did you help them out if something bad did happen to them.
Now people have various definitions of caring, and some probably do think that “feeling bad if something bad happens to someone” is required for genuine caring. But I would disagree. From an evolutionary point of view, emotions exist to motivate behavior. If you behave like a caring person would but don’t feel bad, then in reality you care more than someone who feels bad but doesn’t actually do anything. And if you end up feeling bad, then that may distract you and cause you to make worse decisions or temporarily paralyze you, which reduces your ability to actually help. (Also, evolutionarily, feeling bad about someone suffering probably also acts as a costly signal: if they’re hurt, you suffer, so they have unfakeable evidence of you actually caring and not just pretending to care. But you have no need to prove to yourself that you care in such a perverse way, and you can also prove your caring to them with your actions.)
From a consequentialist point of view also, what matters is your actual behavior. The less time you spend feeling bad, the more time you can spend doing things that actually make people better off.
Also, not feeling bad doesn’t mean that you can’t express sympathy. You can still honestly say things like “I wish things got better for you”. For most people, it’s the notion that you don’t care what happens to them that is bothersome. You can show with both your words and actions that you do care, that is, have a preference that things go well for them and are prepared to spend time and effort to help them out if necessary.
An important note when it comes to sadness: to some extent, it seems like sadness is the appropriate response when e.g. someone close to you dies. “You’re never obligated to feel bad” means that you have no moral obligation to suffer, it doesn’t mean that you should try to push away or suppress negative emotions. Remember, trying to do that is exactly what causes the negative emotion-related suffering in the first place. So if you feel sad about someone dying, say, that’s perfectly appropriate! It’s what your brain needs to do in order to adapt to the loss. But even then there’s no obligation to suffer from the sadness.
oops, I just realized… if the rule is “only have emotions about situations that were within my immediate control”, and you know that the other person will feel upset if they don’t see you feeling bad about their situation, then that counts as something that’s within your immediate control… though something about this seems like it doesn’t quite fit… it feels like I’m interpreting the rule to mean something other than what was intended...
In principle, that’s correct, though you’re right that it’s a bit different from what I was intending. Here’s something closer to what I was thinking about.
Here, I’m going to use the expression “feel bad” to refer to feeling emotions that are usually considered negative. I don’t mean that one should actually find them aversive or suffer from them. More on this below.
Suppose something bad happens to person X, who you care about. The bad thing wasn’t anything you had control over, so you have no reason to feel bad about it. But now you have a chance to help X. Whether you help them or not is something you do have control over, so if you do help them, you should feel good about it.
But suppose that you fail to help them. Now it may or may not be appropriate to feel bad, depending on why you fail to help them. For instance, maybe you are driving to their home, but on the way there your car breaks down. Presuming you hadn’t ignored clear signs of an imminent breakdown or otherwise clearly neglected the maintenance of your car, then it breaking down wasn’t really under your control. This prevents you from helping them, but it still isn’t something that you should feel bad about. Feeling bad is a feedback mechanism to teach you lessons about what you did wrong, and there are no useful lessons to be learned here.
You should only feel bad if you failed because of something that was under your control. Maybe you were going to take a bus to them, but got stuck online and missed the last bus. Or maybe you drove your car carelessly and got in an accident. In that case it’s okay to feel bad, as your behavior mechanisms need feedback.
Still, even if you feel bad, ideally you shouldn’t suffer from it. Blaming yourself accomplishes nothing. Your attitude should be “okay, I just made a mistake, so I’ll gladly embrace this momentary pain and be happy over the fact that it will teach me to act better in the future”. This is always a good mindset to have, because it will increase the odds of you actually acting better in the future. Being prepared to accept any pain without needless guilt is good, as it makes it easier to internalize the actual lessons of your mistake without wasting energy on needless suffering. And if the attention-allocation theory of suffering is true, then suffering is always needless, because it means that your brain is wasting energy and resources being pulled in opposite directions.
If you screw up and feel bad, you may think something like: it’s a bad thing that I screwed up, but it’s also a good thing that this pain is teaching me not to do it anymore. Now I’m going to feel good and happy about this enjoyable pain, because it means I’ll do better in the future.
But be careful not to mix in feelings of martyrdom, self-pity or anything like that. The lesson is not “I’m a terrible person and I deserve all this suffering I got so I’m going to revel in it”. Nobody ever deserves to suffer. The lesson is “I made a mistake, but that doesn’t affect my worth as a person. Next time I’ll do better”. If you’re a utilitarian seeking to increase well-being or decrease suffering, that includes your own well-being and your own suffering.
Something also feels Wrong about enjoying sadness. If you happen to enjoy sadness, then you need to be really careful not to deliberately cause harmful things to happen to yourself or others, just for the sake of experiencing the sadness.
There is probably some risk of this, yes. But ideally, your behavior should be driven by your preferences. This becomes a lot easier once emotions stop being your enemy and you don’t need to avoid feeling any particular emotion. When all your emotions are your welcome allies, then it’s also easier to let your preferences guide your behavior in everything. That means that you’ve accepted feelings such as sadness as appropriate error messages that pop up when things haven’t gone as they should. Then you won’t be actively trying to cause those emotions, instead concentrating on seeking pleasure from doing things right.
“And yet… and yet...” said I to my Teacher, when all the shapes and the singing had passed some distance away into the forest, “even now I am not quite sure. Is it really tolerable that she should be untouched by his misery, even his self-made misery?”
“Would you rather he still had the power of tormenting her? He did it many a day and many a year in their earthly life.”
“Well, no. I suppose I don’t want that.”
“What then?”
“I hardly know, Sir. What some people say on Earth is that the final loss of one’s soul gives the lie to all the joy of those who are saved.”
“Ye see it does not.”
“I feel in a way that it ought to.”
“That sounds very merciful, but see what lurks behind it.”
“What?”
“The demand of the loveless and the self-imprisoned that they should be allowed to blackmail the universe: that till they consent to be happy (on their own terms) no one else shall taste joy: that theirs should be the final power; that Hell should be able to veto Heaven.”
“I don’t know what I want, Sir.”
This dialogue follows the most compelling (to me) scene in C. S. Lewis’s “The Great Divorce”. A saved woman is trying to coax a man she knew in life to join her in heaven while the narrator and his guide look on. She clearly acts in such a way as to reveal a preference that the man join her. But nothing he does, not even remaining in Hell for all eternity, makes a bit of difference to her emotional state.
Do I want her miserable? No. Do I think she cares, really cares about the man she’s trying to help? Well… no. I don’t think that’s what “care” means; she lacks empathy for him. I recently acted in such a way as to get myself a baked potato. I don’t really care, in the deep and meaningful way I care about other people, about having gotten a baked potato—and I’m not even devoid of potato-related emotional feelings: I would have been disappointed if it had caught fire, and I was pleased when it turned out nicely.
Do I like being sad when my friends are sad? Well, no, not really, I don’t have sadness-asymbolia. Would I rather not be sad when my friends are sad; do I want to deny them that power, as C. S. Lewis suggests would be only just? No! I don’t want to go around helping people just because this is written somewhere on my abstract list of preferences, acting in numb glee and feeling nothing that responds to my environment.
In numb glee I suspect you wouldn’t act at all, or have preferences in any meaningful sense.
From a very scattered and informal study of the modern concept of the Christian god, it seems to me that He’s up to something like this:
1) Fabricate or otherwise acquire a large batch of souls for some unknown larger purpose.
2) Realize the manufacturing process may be flawed or contaminated somehow.
3) Set up a procedurally-generated test environment (aka observable reality) for the souls, complete with self-replicating interface shells (aka human bodies).
4) Set up “good enough,” “repairable,” and “reject” bins, labeled heaven, purgatory, and hell respectively; souls in the first and third bins get put into stasis by what amounts for all practical purposes to sensory deprivation. Sit back and watch the test process run.
5) Double-check the specs for the unknown larger purpose, and pass/fail rate for the already-sorted souls, realize that tolerances have been set way too strict. Possibly also some sort of problem with other gods sneaking in and stealing the goods? Unclear.
6) Set up a temporary avatar in the test environment (aka Jesus) to announce the new, lower standard, since it’s qualitatively rather than quantitatively different, and yet-unsorted souls can partially reconfigure themselves to adapt.
7) Eventually, the full batch will be incarnated and the test environment will go through an elaborate self-destruct sequence.
acting in numb glee and feeling nothing that responds to my environment.
Sure. “Acting in numb glee and feeling nothing that responds to one’s environment” is rather far away from what I was advocating, though. Quite the opposite: at best, this is about fully embracing pretty much all of one’s emotions. (Possibly excluding a few that seem purely harmful to me, though that’s everyone’s own decision.)
Frankly, I’ve always found this story one of Lewis’ most sick, disgusting and unethical ones—and that’s saying something for an author who had many moments that come across as sick, disgusting and unethical to many.
When you share a bond of emotional contingency with someone, it sometimes happens that features of their style of living are so incompatible with yours as to destroy more of your own personal utility than the bond can generate. It’s a nasty situation, which we often adapt to by laboriously self-modifying the bond away. Colloquially, this is called “getting over someone”.
It’s quite a reasonable response—but it’s also a voluntary one. I’m considerably less thrilled by Lewis including it as part of the salvation package by default. That seems—well, manipulative is one word for it, but convenient might be an even better one. It’s as if he’s resolved a conflict between human emotion and his religious beliefs by declaring that the conflict magically won’t exist in any sense that matters long-term.
Of course, that’s not much comfort to the living people whose loved ones he’s implicitly condemned to Hell.
Mathematically speaking, let U1 be the woman’s utility value if the man is in Hell, and U2 her utility value if the man is in Heaven. What does the story tell us about the values of U1 and U2?
At first sight it says that U2 is greater than U1, because the woman really wants the man to join her, but also U1 is not less than U2, because she is not sorry that her attempt failed. This is mathematically impossible.
I suppose a Christian reader could suggest that both values U1 and U2 are infinite, because she is in Heaven. So it’s like she was trying to increase U to U+k, because increasing U is the natural thing to do, but it does not matter that she failed, because if U is infinite, then U is not smaller than U+k.
Now I am not sure whether this interpretation means something, or whether it is just explaining away. I can’t even imagine very large values of U, let alone infinite ones.
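To make that shape concrete, here is a tiny sketch (my own illustration, with made-up finite numbers; nothing here comes from the story): with ordinary finite utilities the two claims contradict each other, while with an infinite utility the failure genuinely costs her nothing, but only because the number has stopped being able to register the preference at all.

```python
# Toy illustration (my own, with hypothetical numbers) of the finite vs. infinite
# readings of U1 (the man stays in Hell) and U2 (the man joins her in Heaven).

finite_U1, finite_U2 = 10.0, 15.0        # hypothetical finite utilities
print(finite_U2 > finite_U1)             # True: she prefers that he join her
print(finite_U1 >= finite_U2)            # False: so "not sorry it failed" contradicts the preference

U = float("inf")                         # the "she is already in Heaven" reading
k = 5.0                                  # whatever extra value his joining would have added
print(U + k > U)                         # False: inf + k == inf, so the failure costs her nothing
print(U + k == U)                        # True: but then U can no longer express the preference either
```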
Another explanation could be based on “predestination” at the moment of one’s death. (The story happens in the afterlife.) It was already decided whether the man would choose Heaven or Hell, but until the moment of his choice, nobody else could know the result. So the woman comes with hope that the man will choose Heaven, but he chooses Hell. She is a perfect rationalist, so she immediately realizes that the uncertainty existed only in her mind; she discards her mental sunk costs, accepts the reality and moves on.
This explanation suggests that she was unable to change his decision, yet she still tried to convince him. So why was she trying? Maybe at that moment she wasn’t behaving as a perfect rationalist, and his decision somehow woke her up. (She is in Heaven; perhaps between rationality and irrationality she always chooses whichever makes her happier at the given moment.)
Back to Earth… Our empathy motivates us to help our friends. This is why we feel that empathy is morally good. When we realize it is impossible to help our friends, it would be rational to lose empathy. That goes against our intuition, because empathy does not work this way: in most situations there is something we can do to help our friends. (Even if they have an incurable illness, we can increase their utility by talking to them.)
Mathematically speaking, let U1 be the woman’s utility value if the man is in Hell, and U2 her utility value if the man is in Heaven. What does the story tell us about the values of U1 and U2?
At first sight it says that U2 is greater than U1, because the woman really wants the man to join her, but also U1 is not less than U2, because she is not sorry that her attempt failed. This is mathematically impossible.
I think this mostly tells us that your model doesn’t actually model humans very well.
A simple explanation is that there’s a system in her brain that guides her action towards making the man join her, but the success or failure of this system doesn’t affect her emotional state.
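As a toy way of picturing that decoupling (entirely my own sketch; the class and its names are invented for illustration), imagine an agent whose action selection consults its preferences while its emotional state simply never takes outcomes as input:

```python
# Toy sketch (my own invention) of an agent whose action-guiding system is
# decoupled from its emotional state: preferences drive behavior, but outcomes
# never feed back into mood.

class DecoupledAgent:
    def __init__(self):
        self.mood = "serene"                 # emotional state; nothing below ever updates it

    def prefers(self, outcome):
        # She acts as if "he joins her" is the better outcome.
        return outcome == "he joins her"

    def act(self, possible_outcomes):
        # Action selection is driven by the preference...
        for outcome in possible_outcomes:
            if self.prefers(outcome):
                return "try to bring about: " + outcome
        return "do nothing"

    def observe(self, actual_outcome):
        # ...but success or failure is deliberately ignored here, leaving mood untouched.
        return self.mood

agent = DecoupledAgent()
print(agent.act(["he stays in Hell", "he joins her"]))   # tries to help
print(agent.observe("he stays in Hell"))                 # still "serene"
```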
Ceteris paribus, I would prefer not to be sad when my friends are sad. But this is incompatible with empathy—I use my sadness to model theirs. I can’t imagine “loving” someone while trying not to understand them.
Suppose something bad happens to person X, who you care about. The bad thing wasn’t anything you had control over, so you have no reason to feel bad about it. But now you have a chance to help X. Whether you help them or not is something you do have control over, so if you do help them, you should feel good about it.
But suppose that you fail to help them. Now it may or may not be appropriate to feel bad, depending on why you fail to help them. For instance, maybe you are driving to their home, but on the way there your car breaks down. Presuming you hadn’t ignored clear signs of an immediate breakdown or otherwise clearly neglected the maintenance of your car, then it breaking down wasn’t really under your control. This prevents you from helping them, but it still isn’t something that you should feel bad about. Feeling bad is a feedback mechanism to teach you lessons about what you did wrong, and there are no useful lessons to be learned here.
You should only feel bad if you failed because of something that was under your control. Maybe you were going to take a bus to them, but got stuck online and missed the last bus. Or maybe you drove your car carelessly and got in an accident. In that case it’s okay to feel bad, as your behavior mechanisms need feedback.
This reminds me of a video game that I used to play. In Creatures 2, the player takes care of several artificial animal-ish creatures called norns. Interestingly, norns actually learn—they have a simulated brain with simulated reward and punishment chemicals, and whatever ‘neurons’ are firing when there are ‘reward chemicals’ fire more often in the future and whatever ‘neurons’ are firing when there are ‘punishment chemicals’ fire less often in the future, causing them to show more of certain behaviors and less of others.
Unfortunately, the game was released without adequate playtesting, and the default norns’ learning systems turned out not to be calibrated properly. Individual norns seemed to learn fine at first, but eventually turned stupid as they aged, jumping off of cliffs and refusing to eat. With some work, the player community figured out what was wrong: The default norns’ punishment and reward chemicals had too long of a half-life, and tended to stay in the norns’ systems long enough to affect several brain-states. Fortunately, once this was discovered, it was easy for some of the more advanced players to design norns without the issue (yes, the game allowed for genetic engineering!) and release them to the public, and the new norns learned just fine.
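The bug generalizes: it’s a credit-assignment failure. Here’s a rough sketch (my own toy model; the actions, numbers and decay rule are invented, not the game’s actual chemistry) of how a reward chemical with too long a half-life ends up reinforcing whatever the norn happens to be doing afterwards, not just the behavior that earned the reward:

```python
# Toy model (my own, not Creatures 2's actual implementation) of reward-chemical
# credit assignment: a reward is released after "eat", then decays with a given
# half-life, and every behavior performed while the chemical lingers gets credited.

def run(half_life_steps):
    decay = 0.5 ** (1.0 / half_life_steps)        # per-timestep decay factor
    actions = ["eat", "walk", "jump off cliff", "sleep"]
    chemical = 0.0
    credit = {a: 0.0 for a in actions}
    for action in actions:
        if action == "eat":
            chemical += 1.0                        # reward released for eating
        credit[action] += chemical                 # whatever fires now gets reinforced
        chemical *= decay                          # chemical decays between timesteps
    return {a: round(c, 2) for a, c in credit.items()}

print("short half-life:", run(half_life_steps=1))   # credit concentrates on "eat"
print("long half-life: ", run(half_life_steps=20))  # "jump off cliff" gets nearly as much credit
```

With the long half-life, the toy norn is in effect taught that cliff-jumping was part of what earned the reward, which is roughly the miscalibration the player community diagnosed.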
I don’t know what I want, Sir.
Your comment has frightened me, confused me, and made me think. Thanks.
You are most welcome.
Of course, that’s not much comfort to the living people whose loved ones he’s implicitly condemned to Hell.
Agreed. Although it feels to me like there are other appalling things about the situation in the story; I’ll reflect some more and say what those are.
A simple explanation is that there’s a system in her brain that guides her action towards making the man join her, but the success or failure of this system doesn’t affect her emotional state.
Oh yes, “adaptation executers vs utility maximizers”.
So she has followed the algorithm: “if there is a chance to help, try to help / if there isn’t a chance to help, ignore”.
And the creepy part was how she knew perfectly which situation it was, and how quickly she accommodated.