There are two kinds of apologies: those where you admit you intentionally did something wrong and sincerely regret it, e.g. calling someone an idiot because they support the Republicans/Democrats/Official Monster Raving Loony Party, and those where you did something by accident (usually something minor) and apologize to signal that you had no bad intent, e.g. bumping into someone. You’re invoking the first sort, which should be pretty clear, but if someone thinks of examples which don’t match, this may be why.
Apologizing and changing your behaviour is a core rationalist skill. Merely changing your behaviour/model so as not to make that specific mistake again isn’t enough; the generator has to change too. For instance: if you threatened a Loony supporter, apologized, and stopped belittling that particular person, the generator of the behaviour is still there. This is the instrumental/value counterpart of rationalizing, where you contort your model to locally account for contradictory evidence. You have to change the generator to something that wouldn’t have made that type of error in the first place. Not threatening Loony supporters is better, not threatening Flying Spaghetti Monster Cultists either is better still, and not threatening people you disagree with is, perhaps, best.
If the analogy between updating and apologizing is a good one, then practices/conditions for updating well should transfer to apologizing properly. It could be interesting to test this out. For instance, this recent post suggests some requirements for Bayesian updating and what it looks like in practice. Are there analogues for the three requirements? I.e.
Ability to inhabit hypothetical worlds, i.e. you can simulate how reality would seem in a world where that hypothesis is actually true. Within that hypothetical, updates propagate through all the variables in the causal graph in all directions.
Ability to form any plausible hypothesis at all, i.e. a hypothesis that wouldn’t register lots of surprise at the evidence at hand.
Avoid double-counting dependent observations. You’ve got to be able to actually condition on all of the pieces of evidence, which is hard if the observations aren’t independent in a given world, or if you struggle to inhabit that world deeply. (A toy numerical sketch of this failure follows the list.)
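To make the double-counting point concrete, here’s a minimal toy sketch (my own illustration, with made-up numbers, not from the linked post): if two reports are really one observation in disguise, conditioning on them as if they were independent multiplies the same likelihood ratio in twice and leaves you overconfident.

```python
# Toy illustration (hypothetical numbers): double-counting dependent evidence.
# Two "independent-looking" reports are actually copies of one underlying
# observation; treating them as independent applies the same likelihood
# ratio twice and inflates the posterior.

prior_odds = 1.0        # 1:1 odds on hypothesis H before any evidence
likelihood_ratio = 4.0  # a single genuine observation favours H 4:1

# Correct: the two reports share one source, so they carry one unit of evidence.
correct_odds = prior_odds * likelihood_ratio
correct_prob = correct_odds / (1 + correct_odds)

# Double-counted: naively conditioning on "both" reports as if independent.
double_odds = prior_odds * likelihood_ratio ** 2
double_prob = double_odds / (1 + double_odds)

print(f"correct posterior:        {correct_prob:.2f}")  # 0.80
print(f"double-counted posterior: {double_prob:.2f}")   # 0.94
```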
If I had to suggest some equivalents to the three requirements above, they might be:
Empathizing with another person’s perspective to the extent that you can see all the implications of your actions being wrong, and can see the flaws in character that implies.
Seeing some perspective at all in which what you did is blatantly wrong. Though this one feels weird, because of the values/free-energy analogy where high-value worlds have low surprise.
??? Suggestions welcome.