Thanks for adding this. I felt really hamstrung by not knowing exactly what kind of conversation we were talking about, and this helps a lot.
I think it’s legit that this type of conversation feels shitty to the person it is about. Having people talk about you like you’re not a person feels awful. If it included someone you had a personal relationship with, I think it’s legit that this hurts the relationship. Relationships are based on viewing each other as people. And I can see how a lot of generators of this kind of conversation would be bad.
But I think it’s pretty important that people be able to do these kinds of checks, for the purpose of updating their world model, without needing to fully boot up personal caring modules as if you were a friend they had an obligation to take care of. There are wholesome generators that would lead to this kind of conversation, and having this kind of conversation is useful to a bunch of wholesome goals.
Which doesn’t make it feel any less painful. You’re absolutely entitled to feel hurt, and have this affect your relationship with the people who do it. But this isn’t (yet) a sufficient argument for “…and therefore people shouldn’t have these kinds of conversations”.
> But I think it’s pretty important that people be able to do these kinds of checks, for the purpose of updating their world model, without needing to fully boot up personal caring modules as if you were a friend they had an obligation to take care of. There are wholesome generators that would lead to this kind of conversation, and having this kind of conversation is useful to a bunch of wholesome goals.
There is a chance we don’t have a disagreement, and there is a chance we do.
In brief, to see if there’s a crux anywhere in here:
- Don’t need ppl to boot up ‘care as a friend’ module.
- Do believe compassion should be the motivation behind these conversations, even if not friends, where compassion = treats people as real and relationships as real.
So it matters if the convo is like (A) “I care about the world, and doing good in the world, and knowing about Renshin’s sanity is about that, at the base. I will use this information for good, not for evil.” Ideally the info is relevant to something they’re responsible for, so that it’s somewhat plausible the info would be useful and beneficial.
Versus (B) “I’m just idly curious about it, but I don’t need to know and if it required real effort to know, I wouldn’t bother. It doesn’t help me or anyone to know it. I just want to eat it like I crave a potato chip. I want satisfaction, stimulation, or to feel ‘I’m being productive’ even if it’s not truly so, and I am entitled to feel that just b/c I want to. I might use the info in a harmful way later, but I don’t care. I am not really responsible for info I take in or how I use info.”
And I personally think the whole endeavor of modeling the world should be for the (A) motive and not the (B) motive, and that taking in any-and-all information isn’t, like, neutral or net-positive by default. People should endeavor to use their intelligence, their models, and their knowledge for good, not for evil or selfish gain or to feed an addiction to feeling a certain way.
I used a lot of ‘should’ but that doesn’t mean I think people should be punished for going against a ‘should’. It’s more like healthy cultures, imo, reinforce such norms, and unhealthy cultures fail to see or acknowledge the difference between the two sets of actions.
This was a great reply, very crunchy, I appreciate you spelling out your beliefs so legibly.
> Do believe compassion should be the motivation behind these conversations, even if not friends, where compassion = treats people as real and relationships as real.
I’m confused here because that’s not my definition of compassion and the sentence doesn’t quite make sense to me if you plug that definition in.
But I agree those questions should be done treating everyone involved as real and human. I don’t believe they need to be done out of concern for the person. I also don’t think the question needs to be motivated by any specific concern; desire for good models is enough. It’s good if people ultimately use their models to help themselves and others, but I think it’s bad to make specific questions or models justify their usefulness before they can be asked.
Hm, neither of the motives I named includes any specific concern for the person. Or any specific concern at all. Although I do think having a specific concern is a good bonus? Somehow you interpreted what I said as though there need to be specific concerns.
RE: The bullet point on compassion… maybe just strike that bullet point. It doesn’t really affect the rest of the points.
> It’s good if people ultimately use their models to help themselves and others, but I think it’s bad to make specific questions or models justify their usefulness before they can be asked.
I think I get what you’re getting at. And I feel in agreement with this sentiment. I don’t want well-intentioned people to hamstring themselves.
I certainly am not claiming ppl should make a model justify its usefulness in a specific way.
I’m more saying ppl should be responsible for their info-gathering and treat that with a certain weight. Like a moral responsibility comes with information. So they shouldn’t be cavalier about it… but especially they should not delude themselves into believing they have good intentions for info when they do not.
And so to casually ask about Alice’s sanity, without taking responsibility for the impact of speech actions and without acknowledging the potential damage to relationships (Alice’s or others’), is irresponsible. Even if Alice never hears about this exchange, it nonetheless can cause a bunch of damage, and a person should speak about these things with eyes open to that.
Could you say more on what you mean by “with compassion” and “taking responsibility for the impact of speech actions”?
I’m fine with drilling deeper but I currently don’t know where your confusion is.
I assume we exist in different frames, but it’s hard for me to locate your assumptions.
I don’t like meandering in a disagreement without very specific examples to work with. So maybe this is as far as it is reasonable to go for now.
That makes sense. Let me take a stab at clarifying, but if that doesn’t work seems good to stop.
You said
> to casually ask about Alice’s sanity, without taking responsibility for the impact of speech actions and without acknowledging the potential damage to relationships (Alice’s or others’), is irresponsible. Even if Alice never hears about this exchange, it nonetheless can cause a bunch of damage, and a person should speak about these things with eyes open to that.
When I read that, my first thought is that before (most?) every question, you want people to think hard and calculate the specific consequences asking that question might have, and ask only if the math comes out strongly positive. They bear personal responsibility for anything in which their question played any causal role. I think that such a policy would be deeply harmful.
But another thing you could mean is that people who have a policy of asking questions like this should be aware and open about the consequences of their general policies on questions they ask, and have feedback loops that steer themselves towards policies that produce good results on average. That seems good to me. I’m generally in favor of openly acknowledging costs even when they’re outweighed by benefits, and I care more that people have good feedback loops than that any one action is optimal.
I would never have put it as either of these, but the second one is closer.
For me personally, I try to always have an internal sense of my inner motivation before/during doing things. I don’t expect most people do, but I’ve developed this as a practice, and I am guessing most people can, with some effort or practice.
I can pretty much generally tell whether my motivation has these qualities: wanting to avoid, wanting to get away with something, craving a sensation, intention to deceive or hide, etc. And when it comes to speech actions, this includes things like “I’m just saying something to say something” or “I just said something off/false/inauthentic” or “I didn’t quite mean what I just said or am saying”.
Although, the motivations to really look out for are like “I want someone else to hurt” or “I want to hurt myself” or “I hate” or “I’m doing this out of fear” or “I covet” or “I feel entitled to this / they don’t deserve this” or a whole host of things that tend to hide from our conscious minds. Or in IFS terms, we can get ‘blended’ with these without realizing we’re blended, and then act out of them.
Sometimes, I could be in the middle of asking a question and notice that the initial motivation for asking it wasn’t noble or clean, and then by the end of asking the question, I change my inner resolve or motive to be something more noble and clean. This is NOT some kind of verbal sentence like going from “I wanted to just gossip” to “Now I want to do what I can to help.” It does not work like that. It’s more like changing a martial arts stance. And then I am more properly balanced and landed on my feet, ready to engage more appropriately in the conversation.
What does it mean to take personal responsibility?
I mean, for one example, if I later find out something I did caused harm, I would try to ‘take responsibility’ for that thing in some way. That can include a whole host of possible actions, including just resolving not to do that in the future. Or apologizing. Or fixing a broken thing.
And for another thing, I try to realize that my actions have consequences and that it’s my responsibility to improve my actions. Including getting more clear on the true motives behind my actions. And learning how to do more wholesome actions and fewer unwholesome actions, over time.
I almost never use a calculating frame to try to think about this. I think that’s inadvisable and can drive people onto a dark or deluded path 😅
I 100% agree it’s good to cultivate an internal sense of motivation, and move to act from motives more like curiosity and care, and less like prurient gossip and cruelty. I don’t necessarily think we can transition by fiat, but I share the goal.
But I strongly reject “I am responsible for mitigating all negative consequences of my actions”. If I truthfully accuse someone of a crime and it correctly gets them fired, am I responsible for feeding and housing them? If I truthfully accuse someone of a crime but people overreact, am I responsible for harm caused by overreaction? Given that the benefits of my statement accrue mostly to other people, having me bear the costs seems like a great way to reduce the supply of truthful, useful negative facts being shared in public.
I agree it’s good to acknowledge the consequences, and that this might lead to different actions on the margin. But that’s very different than making it a mandate.