IIUC, I agree that your vision is desirable. (And, IDK, it’s sort of plausible that you can basically do it with a good toolbox that could be developed straightforwardly-ish.)
But there might be a gnarly, fundamental-ish “levers problem” here:
It’s often hard to do [the sort of empathy whereby you see into your blindspot that they can see]
without also doing [the sort of empathy that leads to you adopting some of their values, or even blindspots].
(A levers problem is analogous to a buckets problem, but with actions instead of beliefs. You have an available action VW which does both V and W, but you don’t have V and W available as separate actions. V seems good to do and W seems bad to do, so you’re conflicted, aahh.)
I would guess that what we call empathy isn’t exactly well-described as “a mental motion whereby one tracks and/or mirrors the emotions and belief-perspective of another”. The primordial thing—the thing that comes first evolutionarily and developmentally, and that is simpler—is more like “a mental motion whereby one adopts whatever aspects of another’s mind are available for adoption”. Think of all the mysterious bonding that happens when people hang out, and copying mannerisms, and getting a shoulder-person, and gaining loyalty. This is also far from exactly right. Obviously you don’t just copy everything, it matters what you pay attention to and care about, and there’s probably more prior structure, e.g. an emphasis on copying aspects that are important for coordinating / synching up values. IDK the real shape of primordial empathy.
But my point is just: Maybe, if you deeply empathize with someone, then by default, you’ll also adopt value-laden mental stances from them. If you’re in a conflict with someone, adopting value-laden mental stances from them feels and/or is dangerous.
To say it another way, you want to entertain propositions from another person. But your brain doesn’t neatly separate propositions from values and plans. So entertaining a proposition is also sort of questioning your plans, which bleeds into changing your values. Empathy good enough to show you blindspots involves entertaining propositions that you care about and that you disagree with.
Or anyway, this was my experience of things, back when I tried stuff like this.
Thanks; I love this description of the primordial thing, had not noticed it this clearly/articulately before, it is helpful.
Re: why I’m hopeful about the available levers here:
I’m hoping that, instead of Susan putting primary focal attention on Robert (“how can he vote this way, what is he thinking?”), Susan might be able to put primary focal attention on the process generating the scissors statements: “how is this thing trying to trick me and Robert, how does it work?”
A bit like how a person watching a commercial for sugary snacks, instead of putting primary focal attention on the smiling person on the screen who seems to desire the snacks, might put primary focal attention on “this is trying to trick me.”
(My hope is that this can become more feasible if we can provide accurate patterns for how the scissors-generating-process is trying to trick Susan(/Robert). And that if Susan is trying to figure out how she and Robert were tricked, by modeling the tricking process, this can somehow help undo the trick, without needing to empathize at any point with “what if candidate X is great.”)
This is clarifying...
Does it actually have much to do with Robert? Maybe it would be more helpful to talk with Tusan and Vusan, who are also A-blind, B-seeing, candidate Y supporters. They’re the ones who would punish non-punishers of supporting candidate X / talking about A. (Which Susan would become, if she were talking to an A-seer without pushing back, let alone if she could see into her A-blindspot.) You could talk to Robert about how he’s embedded in threats of punishment for non-punishment of supporting candidate Y / talking about B, but that seems more confusing? IDK.
You raise a good point that Susan’s relationship to Tusan and Vusan is part of what keeps her opinions stuck/stable.
But I’m hopeful that if Susan tries to “put primary focal attention on where the scissors comes from, and how it is working to trick Susan and Robert at once”, this’ll help with her stuckness re: Tusan and Vusan. Like, it’ll still be hard, but it’ll be less hard than “what if Robert is right” would be.
Reasons I’m hopeful:
I’m partly working from a toy model in which (Susan and Tusan and Vusan) and (Robert and Sobert and Tobert) all used to be members of a common moral community, before it got scissored. And the norms and memories of that community haven’t faded all the way.
Also, in my model, Susan’s fear of Tusan’s and Vusan’s punishment isn’t mostly fear of e.g. losing her income or other material-world costs. It is mostly fear of not having a moral community she can be part of. Like, of there being nobody who upholds norms that make sense to her and sees her as a member-in-good-standing of that group of people-with-sensible-norms.
Contemplating the scissoring process… does risk her fellowship with Tusan and Vusan, and that is scary and costly for Susan.
But:
a) Tusan and Vusan are not *as* threatened by it as if Susan had e.g. been considering more directly whether Candidate X was good. I think.
b) Susan is at least partially compensated for her partial-risk-of-losing-Tusan-and-Vusan by the hope/memory of the previous society that (Susan and Tusan and Vusan) and (Robert and Sobert and Tobert) all shared, which she has some hope of reaccessing here.
b2) Tusan and Vusan are maybe also a bit tempted by this, which on their simpler models (since they’re engaging with Susan’s thoughts only very loosely / from a distance, as they complain about Susan) renders as “maybe she can change some of the candidate X supporters, since she’s discussing how they got tricked.”
c) There are maybe some remnant-norms within the larger (pre-scissored) community that can appreciate/welcome Susan and her efforts.
I’m not sure I’m thinking about this well, or explicating it well. But I feel there should be some unscissoring process?
I think you might have been responding to
Susan could try to put focal attention on the scissor origins; but one way that would be difficult is that she’d get pushback from her community.
which I did say in a parenthetical, but I was mainly instead saying
Susan’s community is a key substrate for the scissor origins, maybe more than Susan’s interaction with Robert. Therefore, to put focal attention on the scissor origins, a good first step might be looking at her community—how it plays the role of one half of a scissor statement.
Your reasons for hope make sense.
hope/memory of the previous society that (Susan and Tusan and Vusan) and (Robert and Sobert and Tobert) all shared, which she has some hope of reaccessing here
Anecdata: In my case it would be mostly a hope, not a memory. E.g. I don’t remember a time when “I understand what you’re saying, but...” was a credible statement… Maybe it never was? E.g. I don’t remember a time when I would expect people to be sufficiently committed to computing “what would work for everyone to live together” that they kept doing so in political contexts.
In my experience, the first step in reconciling conflict is to understand one’s own values, before listening to those of others. There are multiple reasons for this step, but the one relevant to your point is that by reflecting on the tradeoffs that I accept or reject and why, I can feel secure in listening to someone else’s point of view. If their approach addresses my own concerns, then I can recognize it and that dissolves the disagreement. If it doesn’t, then I know enough about what I really want to suggest modifications to their approach that would address my concerns. Either way, it keeps me safe from value-drift, especially on important principles like ethics.
Just because someone else has valid concerns doesn’t mean I have to give up any of my own, but it doesn’t mean we’re at an impasse either. Humans have a habit of turning disagreements into false dichotomies. When they listen to each other, the conversation becomes, “alright, I understand your concerns, but you understand why mine are more important, right?” They are so quick to ask other people to sacrifice their values that they don’t think of exploring alternative approaches, ones that can change the situation to fulfill the values of all the stakeholders. That’s what I’m working on changing.
Does that all make sense?
I think I agree, but
It’s hard to get clear enough on your values. In practice (and maybe also in theory) it’s an ongoing process.
Values aren’t the only thing going on. There are stances that aren’t even close to being either a value, a plan, or a belief. An example is a person who thinks/acts in terms of who they trust and who seems good; if a lot of people they know who seem good also think some other person seems good, then they’ll adopt the stance that that person seems good.