An important factor that pushes further towards this dynamic is that often the previous evidence gets integrated into a feeling, and the specifics are forgotten. I had this with someone once where a lot of things they did made me feel unhappy, but I brushed the individual events off as small, until at the end I was in a mental state of “I don’t like being around this person and am angry at them internally, but not sure why”, and I was frustrated at myself for having such irrational emotions. This did not seem a productive thing to bring up in a conversation with them, especially when the original reasons had been forgotten.
I observe that the things I did / have done since include:
In the particular case: Introspect until I did find examples of the problems, and then discuss those with my friends and eventually the person.
In the general case: Trust my emotions more to be guides to evidence, rather than irritating things getting in the way of my plans. For example, if I feel like I like / dislike a person, I’m likely to ask questions like “I wonder what experiences I’ve had that lead me to this conclusion” rather than “This emotion is getting in the way of my ability to think straight! I should suppress it.”
On the fundamental level (aka on the other end of the pipeline): be much more honest about how I feel about something. In general I have a prohibition against saying false things, and now in this sort of situation I will still try to comfort you if you feel awkward, say, but I won’t say false things like “it wasn’t a problem”. I hadn’t noticed I was saying false things, but I’ve learned to notice it more strongly, and I will lean only on true things for support.
This kind of inconsistency in myself bothers me a lot (although, I think it doesn’t especially bother me in others).
There’s something interesting going on here with the courtroom analogy. The first problem you mention, before the dialogue, involves setting a precedent. Then, after the dialogue, you talk about ruling evidence inadmissible and then wanting it back.
We don’t want our epistemology to act like a courtroom, but, that doesn’t mean these ideas are purely dysfunctional. Setting a precedent has relatively little to do with perfectly rational agents, especially with single agents who don’t have to interact with others. Boundedly rational agents, or agents interacting with others, have a use for them.
So, I have in mind two different ways of trying to resolve the inconsistency.
One involves trying to ditch the courtroom model. Forget about social precedent: you do what you do, and you can try to help others understand what to expect. Forget about ruling evidence inadmissible: that’s not a Bayesian mental movement; you may be sorry if you misled them about the importance of their infractions, and you may try to be better calibrated about it in the future, but these things happen and should by no means influence the way you reason about the degree of the problem now.
The second way involves being more charitable toward legalistic due process. Social precedents are in fact quite important for smooth interaction; someone who changes what is acceptable/preferred all the time can be unpleasant to be around. Sticking to your word is similarly valuable; if you say “don’t worry about it”, then it becomes your responsibility if you blow up later. If there is in fact a repeated problem, you can stop saying “don’t worry about it” and start issuing warnings; it may take longer to build up the evidence to support your case, now, but that is the price you pay for the (socially useful) ability to say “don’t worry about it” and mean what you say.
I’m not sure which of those answers I like better.
When you find yourself reassuring someone that it’s fine, it’s fine, take a moment to check if it’s really fine, and if it’s not (or if you know you’re bad at predicting your future emotions generally), try to warn them, or give them a better model of how the dynamic is costly for you and maybe get them to mirror it back in a way that lets you confirm that they really understood.
I particularly liked this suggestion, in that it introduces a sort of indirectness (“It’s fine, but notice that as a point of policy, …”) which can reduce the social instincts involved. This is a case where framing things abstractly in a way that doesn’t connect with gut intuitions can be good. But, I wanted to emphasize your parenthetical clause here. I think treating these kinds of assertions as “modeling my future self” rather than “making commitments”, and therefore injecting the epistemic humility fit for that task, is a good move here.
I adamantly do not mean that every statement I make is an estimate rather than a promise; I think that’s a problematic perspective as well. But, it seems helpful to separate estimation from commitment-making, and default to estimates before making commitments (even small internal commitments, like whether you are “rounding this social infraction down to zero”).
For example, I have a stereotype that a savvy businessperson will never sign a contract in the same meeting in which the initial offer was made; at least one night should be taken to think it over, no matter how positive the feelings about it are during the initial meeting. I don’t know where I got that from, but that kind of thinking seems very useful. It allows you to correct for bias due to social pressures in the conversation. The “five-second version” is called the pause by the Focusing Institute: you simply ask to think for a moment during a conversation.
I have in mind a certain way of thinking, in which you treat yourself as not having direct access to your true reactions. It’s fine to express estimates of your disposition, but they should be marked as such, at least in your head and often out loud. “I think I’m happy about that. Let me think. Yes, I’m happy about it.” This may not be the best way to accomplish the change in thinking—maybe it leaves you more disconnected from yourself than is desirable. But, perhaps it is more honest.
Edit: I kept reading; you say this same thing in the paragraph following the one I quoted. Kept for posterity.
treating these kinds of assertions as “modeling my future self” rather than “making commitments”
I like this, but I want to mention—I consider those to be separate actions. Making commitments is what I do afterwards: once I’ve considered my future self, I can decide to impose a cost on that future self to give a promise of higher reliability.
The first time that Maximilien reassures Balthazar, telling him that it’s no big deal that he postponed their meeting, I think Maximilien wants to communicate something like:
There’s no need for you to worry about me in particular. I trust that you already have a good model of the costs of things like postponing meetings, and a system for trying to avoid those costs. You can just treat this as an instance within your existing model. I’m not going to update more than one should on a single data point—I get that these sorts of things happen sometimes, and my feelings about people don’t get too swayed by variance. And I am not unusually bothered by it—you don’t need a “Maximilien exception” in your model of how costly it is to postpone a meeting.
One thing to note: By the 7th time, much of that paragraph no longer holds. And that is because of the combined weight of all 7 data points.
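As a toy Bayesian sketch of that combined weight (all numbers made up): suppose each postponement is only weak evidence that the meetings are being deprioritized. One such data point barely moves the needle, but seven of them compound multiplicatively.

```python
# Toy sketch, hypothetical numbers: each postponement is weak evidence
# (likelihood ratio 2:1) for "Balthazar deprioritizes these meetings".
# Prior odds of 1:9 reflect starting out charitable.
prior_odds = 1 / 9

def posterior_prob(n_events, likelihood_ratio=2.0):
    """Posterior probability after n independent events of equal weight."""
    odds = prior_odds * likelihood_ratio ** n_events
    return odds / (1 + odds)

for n in (1, 4, 7):
    print(f"after {n} postponement(s): P = {posterior_prob(n):.2f}")
# after 1 postponement(s): P = 0.18   <- 'no big deal' still holds
# after 4 postponement(s): P = 0.64
# after 7 postponement(s): P = 0.93   <- the reassuring paragraph no longer does
```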
Another thing to note: Maximilien is choosing not to make the negative feedback explicit because he assumes that Balthazar is already tracking it. That makes it hard for Balthazar to notice that he’s missing something, in the cases where he is missing something. He could easily hear “basically no cost” when Maximilien means “just the ordinary/obvious costs”. This is one of those situations where Maximilien is assuming that someone else has internalized the same cultural norms and models as him and is trying to communicate in a way that depends on that.
I think this is basically right. The feedback Max gives in the moment is a combination of “I do not especially hold such things against people” with “This particular instance has low costs, so given you are considering doing the other thing you should do the other thing” with “This particular instance has low costs so you don’t have anything you need to fix or make up for” with “You have enough reserve goodwill that doing this doesn’t push things over the edge” with “I have not yet observed so much flaking that I need to strike back at that pattern.”
That does not mean that the data point that Balthazar flaked isn’t being tracked. Those points have been spent, the models have been updated. I consider flaking on someone to always be paying a cost, and it always being on me to track when I’m doing that too often to the same person (or in general) and make sure that doesn’t happen. It’s not on them, it’s on me. It’s great if I get a warning shot of “you’ve flaked a lot recently” in some form before the real consequences or blowups happen, but that’s supererogatory.
It also might be because Max wants to gather the data. This is totally a thing. After the 4th time (or whatnot) Max suspects that Balthazar is not prioritizing the group. Finding out if this is true matters more to Max than getting Balthazar to show up on time on Monday, perhaps a lot more. So he doesn’t tell Balthazar, in favor of instead gathering more evidence. If he said something, Balthazar would (at least for a bit) make more of an effort, but that would make it unclear what Balthazar cares about. This is the silent test model.
There’s also the basic model that confrontation has high costs. If someone flakes on me once, I might or might not be upset (depends on a combination of things including costs, expectations for the person and the context, history of other reliability, whether they apologize slash explain slash warn, and so forth). If it happens several times, I might be upset, but not upset enough to spend the points required to confront them. If I’m not going to confront them, perhaps it makes sense to (for now) forgive and reassure them, so everything keeps going smoothly, even if I know that about two more of these and I’ll have to Have The Talk with them about it. It isn’t always easy/cheap to convey that you’re annoyed, especially while also saying that things are still fine, plus people are often quite dense about it when you try to do that.
Thus, you can have a situation in which someone suddenly runs out of Slack on the issue, but didn’t realize they were close to the edge.
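A minimal sketch of that dynamic, with entirely hypothetical point values: the internal Slack buffer depletes with each flake, but the spoken response stays flat right up until the line is crossed, so the flaker never sees a gradient.

```python
# Minimal sketch (hypothetical numbers): goodwill is spent silently while
# the out-loud response stays constant, until it suddenly flips.
SLACK_START = 10   # goodwill buffer at the outset
FLAKE_COST = 2     # points spent per flake (assumed constant)
TALK_LINE = 0      # at or below this, it's time to Have The Talk

slack = SLACK_START
for flake in range(1, 6):
    slack -= FLAKE_COST
    response = "Don't worry about it!" if slack > TALK_LINE else "We need to talk."
    print(f"flake {flake}: slack={slack:2d} -> {response!r}")
# Flakes 1-4 all get the same cheerful reassurance; flake 5 then looks
# like it came out of nowhere.
```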
This comes back to the question of whether the justification for the first six instances matters for possible future requests. It doesn’t matter in some important sense, but in another important sense it matters a lot. If previous actions were justified, you’ve been docking them too many points and you’ve been updating your model of them too much, and they should be cut a lot more Slack than you’re giving them. That matters. Also, finding out what logic they use to decide what is and isn’t morally justified matters for future requests. If they use the excuse “I had the option to do something really cool” as if it’s a good justification, they’re saying they think that’s a good justification, and they’ll use it again. Even if they agree not to, they’ll still think it’s a good justification. You should update accordingly. Similarly, if their response is to accept that they were not morally justified, that also updates you.
I think that a large part of this problem stems from how people think changing your opinion of someone works. An implicit belief that seems to exist in a lot of people’s minds is that when you break a commitment with someone, they can either decide to “hold it against you” or “let it go”. While there is a conscious part of your friend that is deciding whether or not your transgression was worth making a fuss over, I think that the more important change is that their mental model of you has been ever so slightly adjusted.
If you frequently show up late to meetings, even if your friends say it’s “okay”, they are still unconsciously updating their model of you to someone who isn’t reliably on time. This happens bit by bit, and is adjusted slightly each time you’re late or on time.
If your friend has slowly started blowing you off more often, and you keep saying it’s fine, you’re going to be slowly adjusting your model. At some point, the model of your friend that you use to control your anticipation will be at odds with your belief in the belief that your friend and you are “totally chill”. Then there will be one blow-off too many, snapping your belief in belief, and it will appear to your friend like it all came out of nowhere.
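As a toy model of that bit-by-bit adjustment (prior and outcomes both hypothetical): track the friend’s reliability as a Beta distribution over P(shows up), nudged by each observation regardless of what gets said out loud.

```python
# Toy sketch (hypothetical prior and outcomes): the unconscious estimate
# drifts with each data point while the spoken track stays "totally chill".
alpha, beta = 8.0, 2.0   # Beta prior: you start out estimating ~80% reliable

for shows_up in (False, False, True, False, False):   # a bad stretch
    if shows_up:
        alpha += 1.0
    else:
        beta += 1.0
    print(f"shows_up={shows_up!s:5}  estimated reliability = {alpha / (alpha + beta):.2f}")
# The estimate slides from 0.80 to 0.60 one small step at a time; the gap
# between it and the professed "it's fine" is what eventually snaps.
```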
It seems the best way to avoid these sorts of problems would be to create common knowledge of how we actually update our opinions of each other. I’m not sure what would be a smooth way to add that into conversation.
This is great.