I think this is basically right. The feedback Max gives in the moment is a combination of several messages: “I do not especially hold such things against people,” “This particular instance has low costs, so given that you are considering doing the other thing, you should do the other thing,” “This particular instance has low costs, so you don’t have anything you need to fix or make up for,” “You have enough reserve goodwill that doing this doesn’t push things over the edge,” and “I have not yet observed so much flaking that I need to strike back at that pattern.”
That does not mean that the data point that Balthazar flaked isn’t being tracked. Those points have been spent; the models have been updated. I consider flaking on someone to always carry a cost, and it is always on me to track when I’m doing it too often to the same person (or in general) and to make sure that doesn’t happen. It’s not on them, it’s on me. It’s great if I get a warning shot of “you’ve flaked a lot recently” in some form before the real consequences or blowups happen, but that’s supererogatory.
It also might be because Max wants to gather data. This is totally a thing. After the 4th time (or whatnot), Max suspects that Balthazar is not prioritizing the group. Finding out whether this is true matters more to Max than getting Balthazar to show up on time on Monday, perhaps a lot more. So he says nothing to Balthazar, in favor of gathering more evidence. If he said something, Balthazar would (at least for a bit) make more of an effort, but that would make it unclear what Balthazar actually cares about. This is the silent test model.
There’s also the basic model that confrontation has high costs. If someone flakes on me once, I might or might not be upset (it depends on a combination of things, including costs, my expectations for the person and the context, their history of reliability in other ways, whether they apologize/explain/warn, and so forth). If it happens several times, I might be upset, but not upset enough to spend the points required to confront them. If I’m not going to confront them, perhaps it makes sense to (for now) forgive and reassure them, so everything keeps going smoothly, even if I know that about two more of these and I’ll have to Have The Talk with them about it. It isn’t always easy or cheap to convey that you’re annoyed, especially while also saying that things are still fine, and people are often quite dense about it when you try.
Thus, you can have a situation in which someone suddenly runs out of Slack on the issue without ever realizing they were close to the edge.
This comes back to the question of whether the justification for the first six instances matters for possible future requests. It doesn’t matter in some important sense, but in another important sense it matters a lot. If previous actions were justified, you’ve been docking them too many points and you’ve been updating your model of them too much, and they should be cut a lot more Slack than you’re giving them. That matters. Also, finding out what logic they use to decide what is and isn’t morally justified matters for future requests. If they use the excuse “I had the option to do something really cool” as if it’s a good justification, they’re saying they think that’s a good justification, and they’ll use it again. Even if they agree not to, they’ll still think it’s a good justification. You should update accordingly. Similarly, if their response is to accept that they were not morally justified, that also updates you.