I enjoyed your post. Specifically, using programs as an analogy for society seems like something that could generate a few interesting ideas. I have actually done the same and will share one of my own thoughts in this space at the end.
To summarize some of your key points:
A person can define their trust of a group by how well they can mentally predict the behavior of that group.
We don’t seem to have good social interfaces for large groups, perhaps because we cannot simulate large groups.
There is a continuum of formality for social rules with very formal written laws on one end and culture on the other. Understanding and enforcement are high in formal rule sets and low in informal ones.
Different social interfaces are more stable at different group sizes.
Distance between social groups increases friction and reduces trust, accountability, predictability, and consistency.
Enforcement of rules implies monopoly of force implies hierarchy implies inequality.
Regarding mental prediction of group behavior as the definition of trust: I am not sure about this one. What about when you can reliably predict someone will lie?
Regarding the continuum of formality for social rules, I agree that formality is an important dimension, although I would suggest decoupling enforcement from understanding. Consider people who work at corporations or live under tyrannies: these environments have high enforcement and concentrations of power, but often an opaque ruleset. Karl Popper, in The Open Society and Its Enemies, spends a good amount of time discussing the institutionalization of norms into policies, laws, etc., versus rules that simply give people in a hierarchy discretionary power. You may enjoy it (Chapter 17, Section VII). The overall point, though, is that for rules to be understandable in a meaningful sense (beyond “don’t piss off the monarch”), they can’t delegate discretion to other people.
Creating interfaces that are consistent means the circumstances of individuals have to be abstracted away.
Is the idea behind this maybe something like everybody in a democracy implements get_vote(issue) → true|false?
Is this a problem?
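If it helps make the analogy concrete, here is roughly what I have in mind in Python — a minimal sketch only, with all names invented for illustration:

```python
from typing import Protocol


class Citizen(Protocol):
    """The 'interface' every member of the democracy implements."""
    def get_vote(self, issue: str) -> bool: ...


class Supporter:
    def get_vote(self, issue: str) -> bool:
        return True


class Opponent:
    def get_vote(self, issue: str) -> bool:
        return False


def referendum(citizens, issue: str) -> bool:
    """A direct vote is just a fold over everyone's get_vote()."""
    votes = [c.get_vote(issue) for c in citizens]
    return sum(votes) > len(votes) / 2


# Three citizens, two in favor: the issue passes.
print(referendum([Supporter(), Supporter(), Opponent()], "issue-42"))  # True
```

The individuals stay interchangeable from the system’s point of view — their circumstances are abstracted away behind the one method, which is exactly the consistency/abstraction trade-off you describe.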
Lastly, to share an idea I am currently trying to research more extensively, one that uses the software analogy:
What if someone founded a new political party whose candidates run on the platform that, if elected, they will send every bill voted on to their constituents via an app of sorts and will always vote the way the constituency says? They would essentially have no opinions of their own. I think of this political party as an adapter that turns a representative democracy into a direct (or liquid, or whatever you implement in the app) democracy.
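A rough sketch of what that adapter might look like, with a fake stand-in for the polling app (all names here are hypothetical):

```python
class ProxyRepresentative:
    """A legislator with no opinions of their own: forwards each bill
    to the constituents' app and casts whatever vote the majority returns."""

    def __init__(self, app):
        self.app = app  # the hypothetical constituent-polling app

    def vote_on(self, bill: str) -> bool:
        ballots = self.app.poll(bill)        # ask every constituent
        yes = sum(1 for b in ballots if b)
        return yes > len(ballots) - yes      # vote with the majority


class FakeApp:
    """Stand-in for the real app: three constituents, two in favor."""

    def poll(self, bill: str):
        return [True, True, False]


rep = ProxyRepresentative(FakeApp())
print(rep.vote_on("bill-7"))  # True
```

The point of calling it an adapter: from the legislature’s side, the proxy still looks like an ordinary representative; only the internals changed.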
I think I am troubled by the same situation as you: how to organize a society that relies less on hierarchy but still has law, order, and good coordination between people. To me, more direct forms of democracy are the next logical step. Doing the above would erode lobbying power and corruption. I am researching similar concepts for companies as well.
Thanks for the thoughtful response. Great summary. I think this is missing something:
We don’t seem to have good social interfaces for large groups, perhaps because we cannot simulate large groups.
Not exactly what I was going for. Many actors + game-theoretic concerns → complex simulation. Eventually, good simulation becomes intractable. However, when a common set of rules is enforced strongly enough, each individual’s utility function aligns with that set of rules. This simplifies the situation and creates a higher-level interface. This is why I thought to include enforcement as an important dimension.
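A toy way to put that in code: without shared rules you have to model each actor’s private decision procedure individually, but with an enforced rule set a single shared function predicts everyone. The names, payoffs, and rules below are invented purely for illustration:

```python
# Without shared rules: each actor has a private, opaque decision
# procedure, so predicting the group means simulating every actor
# (and their beliefs about each other) individually.
actors = {
    "alice": lambda situation: situation["payoff_to_alice"] > 0,
    "bob":   lambda situation: situation["payoff_to_bob"] > 5,
}


def predict_without_rules(situation):
    return {name: decide(situation) for name, decide in actors.items()}


# With strongly enforced rules: one shared function predicts everyone,
# whatever their private utilities -- a higher-level interface.
def shared_rule(situation):
    return situation["is_legal"]


def predict_with_rules(names, situation):
    return {name: shared_rule(situation) for name in names}


print(predict_with_rules(["alice", "bob"], {"is_legal": True}))
# {'alice': True, 'bob': True}
```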
In response to this:
Regarding mental prediction of group behavior as the definition of trust: I am not sure about this one. What about when you can reliably predict someone will lie?
If you can reliably predict that someone’s statements are untruths, then you can trust them to do the opposite of what they said. Sarcasm is trustworthy untruth. I think that the lack of trust arises only when I’m highly uncertain about which statements are truths vs. lies.
That said, I do think that this definition of trust is imperfect. You might “trust” your doctor to prescribe the right medicine, even if you don’t know what decision they will make. I guess I could argue that my prediction is about the doctor acting in my best interest, rather than the particular action… I think the definition is imprecise, but still useful.
I appreciate the book recommendation and the intro to your thinking on this topic. I’ll have to update when I have a chance to do the suggested reading :)
Thank you for the additional detail. I understand your point about conformity to rules, the way that increases predictability, and how that allows larger groups to coordinate effectively. I think I am getting hung up on the word trust, as I tend to think of it as when I take for granted that someone has good intentions towards me and basic shared values (e.g., they can’t think what’s best for me is to kill me). I think I am pretty much on board with everything else in the article.
I wonder if another productive way to think about all this would be (continuing to riff on interfaces, and largely restating what you have already said) something like: when people form relationships, they understand how each other will behave, and relationships enable coordination. Humans can handle understanding and coordinating up to Dunbar’s number. To work around this limit above 150, we begin grouping people, essentially abstracting them back down to a single person (named, for example, ‘Sales’ or ‘The IT Department’). If that group of people follows rules and process, the group becomes understandable, and we can have a relationship and coordinate with that group; if we all follow shared rules, everyone can understand and coordinate with everyone else without having to know them. I think I am pretty much agreeing with your point about small groups being able to predict each other’s behavior, and that being key. Instead of saying one person trusts another person, I’d favor one person understands another person. I think this language is compatible with your examples of sarcasm, lies, and the prisoner’s dilemma.
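To push the riff one step further: the grouping move looks a lot like wrapping many objects behind a single object’s interface. A rough sketch, with hypothetical names and rules:

```python
class Person:
    """Below Dunbar's number we can model individuals directly."""

    def __init__(self, name, process):
        self.name = name
        self.process = process  # the rules this person follows

    def respond(self, request):
        return self.process(request)


class Group:
    """Many people abstracted back down to 'one person' (e.g. 'Sales').
    If every member follows the shared process, outsiders can coordinate
    with the group without knowing anyone inside it."""

    def __init__(self, name, members, process):
        self.name = name
        self.members = members
        self.process = process  # the shared rules/process

    def respond(self, request):
        # The shared process answers for the whole group.
        return self.process(request)


def approve_small_orders(request):
    return request["amount"] < 1000


sales = Group("Sales", [Person("ann", approve_small_orders)],
              approve_small_orders)
print(sales.respond({"amount": 250}))  # True
```

Both `Person` and `Group` expose the same `respond` method, so a caller coordinating with ‘Sales’ never needs to know whether it is one person or a thousand.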
Anyway, I’ll leave it at that. Thank you for the discussion.
The idea that you erode lobbying power by direct democracy misunderstands political power. In a direct democracy, when there’s a bill you don’t like, you don’t need to convince anyone who actually read the bill that it’s bad. You can just run a lot of ads that say “bill X is bad because of Y”.
To get good governance you need a system that allows votes for laws to be made based on a good analysis of the merits of the law.
I think it’s fair to say direct democracy would not eliminate lobbying power. And to your final point, I agree that reliable educational resources, or perhaps some other solution, would be needed to make sure whoever is doing the voting is as rational as they can be. It’s not sufficient to only give everyone a vote.
Regarding your point about running ads, to make sure I am understanding: do you mean that the number of people who actually read the bill will be so low that a viable strategy for getting something passed would be to appeal to the non-reading voters and misinform them?
And to your final point, I agree that reliable educational resources, or perhaps some other solution, would be needed to make sure whoever is doing the voting is as rational as they can be.
Understanding what a law does takes effort and time, even if you are generally educated. Even if there are educational resources available, plenty of people don’t have the time to inform themselves about every law.
Representative democracy is about giving that job of understanding laws to democratically elected officials and their staff.
In the absence of that, the people who spend full time engaging with laws are people who need to get a paycheck from somewhere else. Those can be lobbyists. They can also be journalists. Most journalists also get paid by corporate masters.