In general I think it’s fine/good to have sympathy for people who are dealing with something difficult, even if that difficult thing is part of a larger package that they signed up for voluntarily (while not forgetting that they did sign up for it, and might be able to influence or change it if they decided it was worth spending enough time/effort/points).
Edit: lest anyone mistake this for a subtweet, it’s an excerpt of a comment I left in a Slack thread, where the people I might most plausibly be construed as subtweeting are likely to have seen it. The object-level subject that inspired it was another LW shortform.
True (but obvious) taken literally. But if you also mean it’s good to show sympathy by changing your stance in the discourse, such as by reallocating private or shared attention, it’s not always true. In particular, many responses you implement could be exploited.
For example, say I’m ongoingly doing something bad, and whenever you try to talk to me about it, I “get upset”. In this case, I’m probably actually upset, probably for multiple reasons; and probably a deep, fully empathic understanding of the various things going on with me would reveal that, in some real ways, I have good reason to be upset / there’s something actually going wrong for me. But now say that your response to me “getting upset” is to allocate our shared attention away from the bad thing I’m doing. That may indeed be a suitable thing to do; e.g., maybe we can work together to understand what I’m upset about, and get the good versions of everything involved. However, hopefully it’s clear how this could be taken advantage of—sometimes even catastrophically, if, say, you are for some reason very committed to the sort of cooperativeness that keeps reallocating attention this way, even to the point of abandoning your original concern about the bad thing I was, and still am, doing. (This is a nonfictional though intentionally vague example.)
Nah, I agree that resources are limited and guarding against exploitative policies is sensible.
I agree if their decision was voluntary and is a product of at least some reflection. Sometimes you’re mad at them precisely because they signed up for it.
From my perspective, when people make decisions, the decision involves many problems at once, at different time scales: some in the present, others in the future or the far future. Even if they calculate the consequences correctly, there’s still an action potential required. From the perspective of a different person, a decision might look simple (do X vs. Y) and require little physical energy. But it’s not so simple at the level of the neurons. The circuits for making good decisions need to be sufficiently developed, and the conditions for developing them are relatively rare.
One condition might be gradual exposure to increasing complexity in the problems that you solve, so that you can draw the appropriate conclusions, extract the lesson at each level of complexity, and develop healthy habits. But most people are faced with the full force of the world’s complexity from the day they’re born: imperfect upbringings, parents, environments, social circles.
When people make irrational decisions, in many cases I don’t believe it’s because they don’t know better.
My take is that it’s important to be able to accurately answer questions about what is going on. If someone asks “is this person dealing with something stressful and emotionally and socially difficult” it’s important that if the answer is “yes” then you don’t answer “no”.
That is different from it being your responsibility to navigate their emotional or social situation in a way that doesn’t exacerbate their problem. If a moving company has some internal conflicts that mean my purchasing their service will cause someone on their team emotional distress, I don’t owe them an apology; we have a simple implicit agreement that they provide goods and services on demand, and I am free to order them. I think this is how functional economies work, and that I’m not reneging on an implicit social contract by not making it my responsibility to navigate their personal issues.
I typically take “empathy” to mean you are able to accurately simulate someone’s experience in yourself, and I take “sympathy” to mean that you express an active desire to change their situation. (For instance, ChatGPT says “Sympathy involves recognizing the emotions or hardships someone else is going through and responding with kindness, support, or comfort.”) It’s of course not always appropriate to respond to people going through hardship with support and comfort; there are 8 billion people, and I’d guess at least 10% of them are going through hardship of some sort today.
(I’ve written about this before in a discussion of wholesomeness. I believe it’s good to be aware of what is going on, even if you are going to make a decision that people dislike or disagree with or that hurts them.)
Huh, the division I usually make is that empathy is feeling their same feelings and sympathy is caring about their problems; it hadn’t occurred to me to think of either as requiring more action than the other. Not sure whether it’s your version or mine that RobertM meant, but it seems worth highlighting as a potential point of miscommunication.
This currently seems to me like a miscommunication between you and me. I write “I take ‘sympathy’ to mean that you express active desire to change their situation” and you write “sympathy is caring about their problems”. Does the latter not seem to you to imply the former? Does caring about their problems not imply that, if you saw an opportunity to help out, you would take it?
First I’d note that for many people, “care about their problems” might mean something more like “I’d prefer a world where their problems went away”, which is different from “I’d personally put effort into fixing their problems.”
I’m dissatisfied with that, and probably so are you, because it’s kinda sus – if you’re not willing to help at all, it’s a pretty shallow kind of caring that you might not care about.
My response to that is “Something something Loving Hitler.” I can care about someone’s problems (in the “would actually help” sense) but have that be pretty low on my priority queue of things to do, a queue that also includes “stop the person from hurting other people”; or, if their problems are caused by decisions they made, I can still prefer that they have to deal with the consequences of their actions so they don’t do it again.
FTR I think it is quite easy for problems to get low enough on the priority queue that I choose to literally never think about them or try to solve them.
Suppose everyone has maybe 100 problems a day that they deal with (from where to get lunch to long-term relationship conflicts), and can ruminate on and prioritize between maybe 10 problems each hour (to eventually work on 1-3 per hour).
There are ~10^10 people alive, each with ~10^2 problems per day, and you are awake for about 16 hours a day, so you can prioritize between about 160 problems in a given day, or 160/10^12 ≈ 0.000000016% of total daily problems. So you can’t even think about most problems.
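For concreteness, here’s a minimal sketch of that back-of-envelope arithmetic (every input is one of the rough guesses above, not a measured figure):

```python
# Back-of-envelope check of the numbers above; all inputs are rough guesses.
people = 10**10               # ~10^10 people alive
problems_per_person = 10**2   # ~100 problems per person per day
waking_hours = 16             # hours awake per day
weighed_per_hour = 10         # problems you can ruminate on per hour

total_daily_problems = people * problems_per_person    # 10^12
your_daily_capacity = waking_hours * weighed_per_hour  # 160

fraction = your_daily_capacity / total_daily_problems
print(f"{fraction:.1e} of all daily problems")  # 1.6e-10
print(f"= {fraction * 100:.9f}%")               # ~0.000000016%
```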
I think it may make sense to take “I don’t care about X” to mean “X isn’t rising to the level of a problem that I’m going to think about prioritizing between”, and for this to be separate from “do I have literally any preferences about X in my preference ordering over world states”.
Sure, that seems reasonable, but I think that’s not what I expect most people to mean. I expect you’ll run into a bunch of miscommunication if you’re drawing the line there. I definitely think of myself as caring about the problems of random human #4,563,215, even though I will never take any specific actions about them (and caring a bit more about them if they’re specifically brought to my attention).
Yeah, that seems right; there’s a distinction between problems I think about and problems I care about.
No? Caring is an emotion, to me; it might affect your actions but it doesn’t necessarily follow that it does.
Edit: e.g., you might emotionally care, but intellectually think changing the situation would make it worse on net; you might care about multiple conflicting things, another of which takes precedence; you might just not have much of an opportunity to do anything (e.g., they live somewhere else and you can’t do anything over the internet, their problem is something unfixable like a loved one dying, etc.); and so on. (I also wouldn’t take the expression of sympathy to require expressing a desire to change the situation? Like, you wouldn’t want to express approval while trying to be sympathetic, but you might say, like, “I’m sorry” or “that really sucks” or whatever.)