Tactical vs. Strategic Cooperation
How important is local & tactical vs. global & strategic cooperation? Is it more important to get alignment with people on fundamental beliefs, epistemology, and world-models? Or is it more important to cooperate on a local task in the present?
Company domain
From the outside, a “company” is an example of a group of people cooperating on a local task. They don’t have to see eye-to-eye about everything in the world, so long as they can work out their disagreements about the task at hand.
From the inside of a company, it's standard advice that the most successful teams tend to share idealistic visions and to get along with each other as friends outside of work. Purely transactional, working-for-a-paycheck arrangements don't tend to inspire excellent work.
Work domain
I expect getting to alignment on principles to be really hard, expensive, and unlikely to work for me. (It would be valuable to ask myself why that is, but not right now.) I expect to get a lot of mileage out of relatively transactional or local cooperation: for example, donors to my organisation who don't buy into all of my ideals, but are willing to donate anyway.
I used to approach my interpersonal "wants" by "debating around" the issue.
For example I might have said:
“If we don’t do what I want, horrible things A, B, and C will happen!”
This degenerates into a miserable argument over how likely A, B, and C are, or a referendum on how neurotic or pessimistic I am.
“You’re such an awful person for not having done [thing I want]!”
This degenerates into a miserable argument about each other's general worth.
“Authority Figure Bob will disapprove if we don’t do [thing I want]!”
This degenerates into a miserable argument about whether we should respect Bob’s authority.
Back at MetaMed, I had a coworker who believed in alternative medicine. I didn't, and I was very frustrated: I didn't want any harm to come to patients from misinformation, and I couldn't see how to prevent it. This caused a lot of conflict, both spoken and unspoken. There were global values issues under debate: reason vs. emotion, logic vs. social charisma, whose perspective on life was good or bad. I'm embarrassed now to say I came across as rude and inappropriate, even though I was coming from a well-meaning place.
Finally, at my wit’s end, I blurted out what I wanted: I wanted to have veto power over information we sent to patients, to make sure it didn’t contain any factual inaccuracies.
She agreed instantly.
Turns out, people respond better if I just say, "I really want [thing I want]. Can we do that?"
This phrase doesn't guarantee I'll get my way. It does mean that when I do get into a negotiation, the discussion stays focused on the decision we disagree about, not on the justifications for it.
I was aiming for global alignment on local issues when I should have aimed for simple agreement.
Personal domain
Narrowing the scope of the request and being clear about what I want resolves debates with my husband too. If I catch myself quoting an expert nutritionist to argue that we should have home-cooked family dinners, my motivation isn't curiosity about the long-term health dangers of not eating as a family; rather, I want family dinners and I'm throwing spaghetti at a wall hoping some pro-dinner argument will stick. The "empirical" or "intellectual" debate is rhetorical window dressing for my underlying request. When I notice that's going on, it's better to redirect to the actual underlying desire.
Then I can get to the actual negotiation. What makes family dinners undesirable to you? How could we balance those needs? What alternatives would work for both of us?
Alternative strategy
My friend Michael Vassar prefers getting to alignment with people on fundamentals first. Epistemics, beliefs, values. He believes there’s a lot more value to be gained from someone who keeps actively pursuing goals aligned with yours, even when they’re far away and you haven’t spoken in a long time, than from someone you can persuade to do a specific action you want right now, but who will need further persuading in the future.
----------
Assuming humans divide into two rough categories, "the great people who have value alignment with me" and "the terrible people who don't have value alignment with me", I think both Michael and I would agree it's unwise to benefit in the short term from allying with terrible people. The question is, who counts as terrible? Which value misalignments are acceptable human fallibility, and which lapses in rigorous thinking make a human seriously untrustworthy?
I'd be interested to read some discussion about when, and how much, it makes sense to prioritise long-term versus short-term alliances.
Upvoted for the effort, and because it helped me get value out of the post, but I do think many authors can feel violated when they see other people significantly edit their writing and then publish it like this. I have some underlying models here that I might be able to explicate, but for now I just wanted to state the high-level outputs of my thinking.
Seconding this (I, personally, would feel something along those lines).
Also, there are issues of copyright to be aware of.