The principle I would draw is not, “you should separate the meta- and object-level discussions”. Rather, I think the important thing is that meta-level discussions of how social sanctions work shouldn’t be generated by backward-chaining from an ambiguous case. If people think that your meta-level argument about the circumstances in which it’s okay to punch people is actually about whether to punch Bob, then Bob’s allies and enemies will engage with that conversation in a biased way. But separately, if people think that you had some Vaguebob in mind that they didn’t know about, and that you might be Vaguebob’s friend or enemy, then they’ll rightly suspect you of being biased in the same way.
> Rather, I think the important thing is that meta-level discussions of how social sanctions work shouldn’t be generated by backward-chaining from an ambiguous case.
I think I disagree with this. When a social sanction is born of a particular case, I think it is quite important to actually have that case as a part of the discussion. First, this means the social alliances are in the open instead of hidden; second, it means that discussions of which principles actually bear on the situation become on-topic as well.
I think also it’s quite difficult for people to think about tradeoffs in the abstract; “should annoying people be allowed at meetups?” is different from “should we let Bob keep coming to meetups?”, and generally the latter is a more productive question.
The other option is making social sanctions preemptively, but there it’s not clear what violations might be possible or probable, and so not making rules until they’ve been violated seems sensible. (Of course, many rules have been violated before in human experience, such that in forming a new group you might import existing rules.)
> I think I disagree with this. When a social sanction is born of a particular case, I think it is quite important to actually have that case as a part of the discussion.
Clarification: what I meant is that it’s better if the rules are created in a context where there are no cases pending for the rules to bear on; i.e., I’m not objecting to admitting that a rules-discussion is about a specific case that it would bear on, but to it actually being about a specific pending case.
> I think also it’s quite difficult for people to think about tradeoffs in the abstract; “should annoying people be allowed at meetups?” is different from “should we let Bob keep coming to meetups?”, and generally the latter is a more productive question.
I think discussing the latter question is less likely to produce the right result, though.
> Clarification: what I meant is that it’s better if the rules are created in a context where there are no cases pending for the rules to bear on; i.e., I’m not objecting to admitting that a rules-discussion is about a specific case that it would bear on, but to it actually being about a specific pending case.
This is how things are often done in law, though; I think “common law” is a pretty good way to grow a body of rules. You can then abstract out principles and refactor. It’s not clear yet how much of this is about cost minimization; one of the benefits of deciding cases as they come up is that you only ever need to decide as many cases as actually happened, which is not true if you try to decide cases before they become pending.

On the other hand, clear systematic codes may reduce the number of cases that come up (or the evaluation time per case) by reducing ambiguity.
I agree with Raemon here. It would be good to think about ambiguous cases in advance, and I like the idea that fiction is one way of doing so.
But ambiguous cases are still going to come up, and you need to have some way of dealing with them. (And if you deal with them by never punching anyone, then you’re encouraging bad actors to seek them out.)
I agree with this, but with the unfortunate caveat that I think people are most likely to think about when it’s appropriate to harm people when they have some motivation to either harm someone or to prevent someone from coming to harm.
And I’m not 100% sure the takeaway of “sometimes, at random, think about which circumstances make it okay to harm people” is actually better (although I lean towards it).
Actually, it occurs to me that I’ve sort of been doing this via fiction.
My group house is currently watching “The Walking Dead”, which has a large number of instances of people having to negotiate with each other in high-stakes situations where they disagree a lot at both the object and meta level. This has led to my house having a bunch of discussions about how well the characters’ group rationality holds up, discussions that are (mostly) divorced from considerations of actual real people.
This includes things like “it’s necessary to punish Bob in this situation, even though Bob was object-level-right, because allowing people to act like Bob did willy-nilly would destabilize their fragile society”. And this sort of thing happens at various scales, ranging from “civilizations” of just two people up to a small town.
(If you want to consider cases where civilization is millions of people, you’ll need to watch Battlestar Galactica instead.)
Sure, it’s fun to discuss what’s right in bizarre situations, but that’s very different from the decisions philh is talking about. I strongly doubt that your group house has decided “We like you, and that act was right for that situation, but we’re going to punish you so others won’t try it”.
I totally buy the argument _IN GROUPS LARGE ENOUGH TO BE IMPERSONAL_ that you punish deviance from the norm, even when that deviance is correct and necessary. More hero they, who suffer for their necessary actions. Stanislav Petrov was a hero for disobeying orders, and the Soviet government was correct to reprimand him.
I do not think this is true in groups smaller than some multiple of Dunbar’s number. If you can discuss the specifics with a significant percentage of members, then you can do the right thing contextually, rather than blindly enforcing the rules (which, even for complex unwritten norms, are too simple for reality).
> I strongly doubt that your group house has decided “We like you, and that act was right for that situation, but we’re going to punish you so others won’t try it”.
We’ve definitely done things of the form “okay, in this case it seems like the house is okay with this action, but we can tell that if people started doing it all the time it’d start to cause resentment, so let’s basically install a Pigouvian tax on this action so that it only ends up happening when it’s important enough.”
In a TV show where the stakes are life-and-death, the consequences might look like “banishment”, and in a group house the consequences are more like “pay $5 to the house”, but it feels like fairly similar principles are at play.
You definitely do need different tools and principles as things grow larger and more impersonal, for sure. And I’d definitely like to see a show where the situations getting hashed out are more applicable to life than “zombie apocalypse”. But I do think The Walking Dead is a fairly uniquely good show at depicting group rationality.
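To make the Pigouvian-tax framing above concrete, here’s a toy sketch (my own illustration, not from the thread; the $5 figure and the names are made up): price the action at its external cost, and it only happens when it’s worth more to the actor than it costs everyone else.

```python
# Toy model: a Pigouvian tax set equal to an action's external cost.
# All numbers are made up for illustration.

EXTERNAL_COST = 5.0   # assumed: the resentment the action imposes on the house
TAX = EXTERNAL_COST   # Pigouvian rule: price the action at its externality

def takes_action(private_value: float) -> bool:
    """A self-interested actor acts iff the action is worth more than the tax."""
    return private_value > TAX

def welfare_change(private_value: float) -> float:
    """Change in total house welfare when the actor follows the rule above.

    The tax is a transfer (the actor pays, the house receives), so it cancels;
    what remains is the actor's value minus the real external cost.
    """
    return private_value - EXTERNAL_COST if takes_action(private_value) else 0.0

for value in (2.0, 5.0, 8.0):
    print(f"value={value}: acts={takes_action(value)}, welfare change={welfare_change(value)}")
```

Under this rule the action only ever happens when total welfare goes up, which is the sense in which the tax makes the action happen “only when it’s important enough”.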
> that’s very different from the decisions philh is talking about.
So, I’ve had the feeling from all of your comments on this thread that you think I’m talking about something different from what I think I’m talking about. I’ve not felt like going to the effort of teasing out the confusion, and I still don’t. But I would like to make it clear that I do not endorse this statement.
Ok, then I’m very confused. “punching” is intentional harm or intimidation, typically to establish hierarchy or enforce compliance. If you meant something else, you should use different words.
Specifically, if you meant Pigouvian taxes or Coasean redress (both of which are not punitive, but rather fee-for-costs-imposed), rather than censure and retribution, then most of my disagreement evaporates.
I was thinking of actions, not motivations. If Alice wants to convince people to punch Bob, then her motivations (punishment, protection, deterrence, restoration) will be relevant to what sort of arguments she makes and whether other people are likely to agree. But I don’t think they’re particularly relevant to the contents of this post.
Not 100% sure I grok what philh meant in the first place, but I also want to note that I didn’t mean for my example-from-fiction to precisely match what I interpreted philh to mean. It was just an easily-accessible example from thinking about the show and game theory.
I do happen to also think there are generalizable lessons from that, which apply to both punishment and Pigouvian taxes. But that was sort of accidental. (I.e., I quickly searched my brain for the most relevant-seeming fictional example, and the one I found happened to be reasonably relevant.)
One could implement a monetary tax that involves shame and social stigma, which’d feel more like being punched. One could also have a culture where being punched comes with less stigma, and is a quick “take your lumps” sort of thing. There are benefits and tradeoffs to wielding shame/stigma/dominance as part of a punishment strategy. In all cases though, you’re trying to impose a cost on an action that you want to see less of.