This was a great reply, very crunchy, I appreciate you spelling out your beliefs so legibly.
Do believe compassion should be the motivation behind these conversations, even if not friends, where compassion = treats people as real and relationships as real.
I’m confused here because that’s not my definition of compassion and the sentence doesn’t quite make sense to me if you plug that definition in.
But I agree those questions should be asked while treating everyone involved as real and human. I don’t believe they need to be asked out of concern for the person, or motivated by any specific concern; desire for good models is enough. It’s good if people ultimately use their models to help themselves and others, but I think it’s bad to make specific questions or models justify their usefulness before they can be asked.
Hm, neither of the motives I named includes any specific concern for the person. Or any specific concern at all. Although I do think having a specific concern is a good bonus? Somehow you interpreted what I said as though there needed to be a specific concern.
RE: The bullet point on compassion… maybe just strike that bullet point. It doesn’t really affect the rest of the points.
It’s good if people ultimately use their models to help themselves and others, but I think it’s bad to make specific questions or models justify their usefulness before they can be asked.
I think I get what you’re getting at. And I feel in agreement with this sentiment. I don’t want well-intentioned people to hamstring themselves.
I certainly am not claiming people should make a model justify its usefulness in a specific way.
I’m more saying people should be responsible for their info-gathering and treat it with a certain weight. A moral responsibility comes with information, so they shouldn’t be cavalier about it. But especially, they should not delude themselves into believing they have good intentions for info when they do not.
And so to casually ask about Alice’s sanity, without taking responsibility for the impact of speech actions or acknowledging the potential damage to relationships (Alice’s or others’), is irresponsible. Even if Alice never hears about the exchange, it can nonetheless cause a bunch of damage, and a person should speak about these things with eyes open to that.
Could you say more on what you mean by “with compassion” and “taking responsibility for the impact of speech actions”?
I’m fine with drilling deeper, but I currently don’t know where your confusion is.
I assume we exist in different frames, but it’s hard for me to locate your assumptions.
I don’t like meandering in a disagreement without very specific examples to work with. So maybe this is as far as it is reasonable to go for now.
That makes sense. Let me take a stab at clarifying, but if that doesn’t work, it seems good to stop.
You said
to casually ask about Alice’s sanity, without taking responsibility for the impact of speech actions or acknowledging the potential damage to relationships (Alice’s or others’), is irresponsible. Even if Alice never hears about the exchange, it can nonetheless cause a bunch of damage, and a person should speak about these things with eyes open to that
When I read that, my first thought is that before (most?) every question, you want people to think hard and calculate the specific consequences asking that question might have, and ask only if the math comes out strongly positive. They bear personal responsibility for anything in which their question played any causal role. I think that such a policy would be deeply harmful.
But another thing you could mean is that people who have a policy of asking questions like this should be aware and open about the consequences of their general policies on questions they ask, and have feedback loops that steer themselves towards policies that produce good results on average. That seems good to me. I’m generally in favor of openly acknowledging costs even when they’re outweighed by benefits, and I care more that people have good feedback loops than that any one action is optimal.
I would never have put it as either of these, but the second one is closer.
For me personally, I try to always have an internal sense of my inner motivation before and during doing things. I don’t expect most people do this, but I’ve developed it as a practice, and I’m guessing most people can, with some effort or practice.
I can generally tell whether my motivation has these qualities: wanting to avoid, wanting to get away with something, craving a sensation, intending to deceive or hide, etc. And when it comes to speech actions, this includes things like “I’m just saying something to say something,” “I just said something off/false/inauthentic,” or “I didn’t quite mean what I just said or am saying.”
Although, the motivations to really look out for are like “I want someone else to hurt” or “I want to hurt myself” or “I hate” or “I’m doing this out of fear” or “I covet” or “I feel entitled to this / they don’t deserve this” or a whole host of things that tend to hide from our conscious minds. Or in IFS terms, we can get ‘blended’ with these without realizing we’re blended, and then act out of them.
Sometimes, I could be in the middle of asking a question and notice that the initial motivation for asking it wasn’t noble or clean, and then by the end of asking the question, I change my inner resolve or motive to be something more noble and clean. This is NOT some kind of verbal sentence like going from “I wanted to just gossip” to “Now I want to do what I can to help.” It does not work like that. It’s more like changing a martial arts stance. And then I am more properly balanced and landed on my feet, ready to engage more appropriately in the conversation.
What does it mean to take personal responsibility?
I mean, for one example, if I later find out something I did caused harm, I would try to ‘take responsibility’ for that thing in some way. That can include a whole host of possible actions, including just resolving not to do that in the future. Or apologizing. Or fixing a broken thing.
And for another thing, I try to realize that my actions have consequences and that it’s my responsibility to improve my actions. Including getting more clear on the true motives behind my actions. And learning how to do more wholesome actions and fewer unwholesome actions, over time.
I almost never use a calculating frame to try to think about this. I think that’s inadvisable and can drive people onto a dark or deluded path 😅
I 100% agree it’s good to cultivate an internal sense of motivation, and move to act from motives more like curiosity and care, and less like prurient gossip and cruelty. I don’t necessarily think we can transition by fiat, but I share the goal.
But I strongly reject “I am responsible for mitigating all negative consequences of my actions”. If I truthfully accuse someone of a crime and it correctly gets them fired, am I responsible for feeding and housing them? If I truthfully accuse someone of a crime but people overreact, am I responsible for harm caused by overreaction? Given that the benefits of my statement accrue mostly to other people, having me bear the costs seems like a great way to reduce the supply of truthful, useful negative facts being shared in public.
I agree it’s good to acknowledge the consequences, and that this might lead to different actions on the margin. But that’s very different than making it a mandate.