Yes, less privacy leads to more conformity. But I don’t think that will disproportionately help small projects that you like. Mostly it will help big projects that feed on conformity—ideologies and religions.
Only ones that don’t structurally depend on huge levels of hypocrisy. People can lie. Lying is currently cheap and effective in a wide variety of circumstances. This does not make the lies true.
Conformity-based strategies only benefit from reductions in privacy when they’re based on actual conformity. If they’re based on pretend/outer conformity, then they get exposed with less privacy.
Ah, gotcha. Yeah that makes sense, although it in turn depends a lot on what you think happens when lack-of-privacy forces the strategy to adapt.
(Note: the following comment didn’t end up engaging with a strong version of the claim, and I ran out of time to think through other scenarios.)
If you have a workplace (with a low-generativity strategy) in which people are supposed to work 8 hours but actually only work 2 (and goof off the rest of the time), and then suddenly everyone can see exactly how much each person works, I’d expect one of a few things to happen:
1. People actually start working harder
2. People actually end up getting 2-hour workdays (and then go home)
3. People continue working for 2 hours and then goofing off (with or without maintaining some kind of plausible fiction – e.g. I could easily imagine that even with full information, people still maintain the polite fiction that everyone works 8 hours a day, and only go to the effort of directing attention to those who goof off when they are a political enemy. “Polite” society often seems to be not just about concealing information but about actively choosing to look away)
4. People start finding things to do with their extra 6 hours that look enough like work (but are low-effort / fun) that even though people could theoretically check on them, there’d still be enough plausible deniability that exposing and punishing them would take real effort.
These options range in how good they are – hopefully you get 1 or 2, depending on how valuable the extra 6 hours of work actually are.
But none of them actually change the underlying fact that this business is pursuing a simple, collective strategy.
(This line of options doesn’t really interface with the original claim that simple collective strategies are easier under a privacy-less regime; I think I’d have to look at several plausible examples to build up a better model, and I ran out of time to write this comment before, um, returning to work. [hi habryka])
I think the main thing is that I can’t think of many examples where the active ingredient in the strategy is the conformity-that-would-be-ruined-by-information.
The most common sort of strategy I’m imagining is “we are a community that requires costly signals for group membership” (e.g. strict sexual norms, subscribing to and professing the latest dogma, giving to the poor), but costly signals are, well, costly, so there’s an incentive for people to pretend to meet them without actually doing so.
If it became common knowledge that nobody or very few people were “really” doing the work, one thing that might happen is that the community’s bonds would weaken or disintegrate. But I think these sorts of social norms would mostly just adapt to the new environment, in one of a few ways:
1. Come up with new norms that are more complicated, such that it’s harder to check (even given perfect information) whether someone is meeting them. I think this is what often happened in academia. (See jokes about postmodernism, where people can review each other’s work, but the work is deliberately inscrutable, so it’s hard to tell whether it says anything meaningful.)
2. People just develop a norm of not checking in on each other (cooperating to preserve the fiction), and scrutiny is only actually deployed against political opponents.
(The latter at least creates an interesting mutually-assured-destruction dynamic that probably makes people less willing to attack each other openly, but humans also just seem pretty good at taking social games into whatever domain seems most plausibly deniable.)
OK, you’re right that less privacy gives a significant advantage to non-generative conformity-based strategies, which seems like a problem. Hmm.
[edit: actually, I’m just generally confused about what the parent comment is claiming]
Only if you assume everyone loses an equal amount of privacy.