Engaging with people in ways such that they often feel heard/seen/understood
This is not a reasonable norm. In some circumstances (including, it sounds like, some of the conversations under discussion) meeting this standard would require a large amount of additional effort, not related to the ostensible reason for talking in the first place.
Engaging with people in ways such that they rarely feel dismissed/disrespected
Again, a pretty unreasonable norm. For some topics, such as “is what you’re doing actually making progress towards that thing you’ve arranged your life (including social context) around making progress on?”, it’s very easy for people to feel this way, even if they are being told true, useful, relevant things.
Something fuzzy that lots of people would call “kindness” or “typical levels of warmth”
Ditto, though significantly less strongly; I do think there are ways to do this that stay honest and on-mission without much tradeoff.
I think it’s not a reasonable norm to make sure your interlocutors never e.g. feel dismissed/disrespected, but it is reasonable to take some measures to avoid having someone consistently feel dismissed/disrespected if you spend over 200 hours talking with their team and loosely mentoring them (which, to be clear, Nate did; it’s just difficult in his position, so he was only mildly successful).
I’m not sure kindness/warmth should even be a norm because it’s pretty difficult to define.
The details matter here; I don’t feel I can guess from what you’ve said whether we’d agree or not.
For example:
Tam: says some idea about alignment
Newt: points out some particular flaw “...and this is an instance of a general problem, which you’ll have to address if you want to make progress...” gestures a bit at the general problem
Tam: makes a tweak to the proposal that locally addresses the particular flaw
Newt: “This still doesn’t address the problem.”
Tam: “But it seems to solve the concrete problem, at least as you stated it. It’s not obvious to me that there’s a general problem here; if we can solve instances of it case-by-case, that seems like a lot of progress.”
Newt: “Look, we could play this game for some more rounds, where you add more gears and boxes to make it harder to see that there’s a problem that isn’t being addressed at all, and maybe after a few rounds you’ll get the point. But can we just skip ahead to you generalizing to the class of problem, or at least trying to do that on your own?”
Tam: feels dismissed/disrespected
I think Newt could have been more graceful and more helpful, e.g. explicitly stating that he’s had a history of conversations like this, setting boundaries about how much effort he feels excited about putting in, and using non-conflictual body language… But even if he doesn’t do that, I don’t really think he’s violating a norm here. And depending on context, this sort of behavior might be about the best Newt can do for now.
You can choose to ignore all these “unreasonable norms”, but they still have consequences. Such as people thinking you are an asshole. Or leaving the organization because of you. It is easy to underestimate these costs, because most of the time people won’t tell you (or they will, but you will ignore them and quickly forget).
This is a cost that people working with Nate should not ignore, even if Nate does.
I see three options:
try making Nate change—this may not be possible, but I think it’s worth trying;
isolate Nate from… well, everyone else, except for volunteers who were explicitly warned;
hire a separate person whose full time job will be to make Nate happy.
Anything else, I am afraid, will mean paying the costs and most likely being in denial about them.
I see at least two other options (which, ideally, should be used in tandem):
don’t hire people who are so terribly sensitive to above-average bluntness
hire managers who will take care of ops/personnel problems more effectively, thus reducing the necessity for researchers to navigate interpersonal situations that arise from such problems
don’t hire people who are so terribly sensitive to above-average bluntness
If I translate it mentally to “don’t hire people from the bottom 99% of thick skin”, I actually agree. Though they may be difficult to find, especially in combination with other requirements.
Do you really think it’d take 99th percentile skin-thickness to deal with this sort of thing without having some sort of emotional breakdown? This seems to me to be an extraordinary claim.
Are you available for the job? ;-)
While I probably qualify in this regard, I don’t think that I have any other relevant qualifications.
My experience is that people who I think of as having at least 90th-percentile (and probably 99th-percentile, if I think about it harder) thick skin have been brought to tears by an intense conversation with Nate.
My guess is that this wouldn’t happen for a lot of possible employees from the broader economy, and this isn’t because they’ve got thicker skin, but it’s because they’re not very emotionally invested in the organization’s work, and generally don’t bring themselves to their work enough to risk this level of emotion/hurt.
My experience is that people who I think of as having at least 90th-percentile (and probably 99th-percentile, if I think about it harder) thick skin have been brought to tears by an intense conversation with Nate.
This is a truly extraordinary claim! I don’t know what evidence I’d need to see in order to believe it, but whatever that evidence is, I sure haven’t seen it yet.
My guess is that this wouldn’t happen for a lot of possible employees from the broader economy, and this isn’t because they’ve got thicker skin, but it’s because they’re not very emotionally invested in the organization’s work, and generally don’t bring themselves to their work enough to risk this level of emotion/hurt.
This just can’t be right. I’ve met a decent number of people who are very invested in their work and the mission of whatever organization they’re part of, and I can’t imagine them being brought to tears by “an intense conversation” with one of their co-workers (nor have I heard of such a thing happening to the people I have in mind).
Something else is going on here, it seems to me; and the most obvious candidate for what that “something else” might be is simply that your view of the distribution of “thick-skinned-ness” is very miscalibrated.
(Don’t know why some folks have downvoted the above comment, seems like a totally normal epistemic state for Person A not to believe what Person B believes about something after simply learning that Person B believes it, and to think Person B is likely miscalibrated. I have strong upvoted the comment back to clearly positive.)
To me the obvious candidate is that people are orienting around Nate in particular in an especially weird way.