Implication: more generally available information about what strategies people are using helps “our” enemies more than it helps “us”. (This seems false to me, for the notions of “us” that I usually use when thinking about strategy.)
Maybe I’m misreading and you’re arguing that it will help us and enemies equally? But even that seems impossible. If Big Bad Wolf can run faster than Little Red Hood, mutual visibility ensures that Little Red Hood gets eaten.
OK, I can defend this claim, which seems different from the “less privacy means we get closer to a world of angels” claim; it’s about asymmetric advantages in conflict situations.
In the example you gave, more generally available information about people’s locations helps Big Bad Wolf more than Little Red Hood. If I’m strategically identifying with Big Bad Wolf then I want more information available, and if I’m strategically identifying with Little Red Hood then I want less information available. I haven’t seen a good argument that my strategic position is more like Little Red Hood’s than Big Bad Wolf’s (yes, the names here are producing moral connotations that I think are off).
So, why would info help us more than our enemies? I think efforts to do big, important things (e.g. solve AI safety or aging) really often get derailed by predatory patterns (see Geeks, Mops, Sociopaths), which usually aren’t obvious for a while to the people cooperating toward the original goal. These patterns derail the group and cause it to stop actually targeting its original mission. It seems like having more information about strategies would help solve this problem.
Of course, it also gives the predators more information. But I think it helps defense more than offense, since there are more non-predators to start with than predators, and non-predators are (presently) at a more severe information disadvantage than the predators are, with respect to this conflict.
Anyway, I’m not that confident in the overall judgment, but I currently think more available info about strategies is good in expectation with respect to conflict situations.
Yes, less privacy leads to more conformity. But I don’t think that will disproportionately help small projects that you like. Mostly it will help big projects that feed on conformity—ideologies and religions.
OK, you’re right that less privacy gives significant advantage to non-generative conformity-based strategies, which seems like a problem. Hmm.
Only ones that don’t structurally depend on huge levels of hypocrisy. People can lie. It’s currently cheap and effective in a wide variety of circumstances. This does not make the lies true.
[edit: actually, I’m just generally confused about what the parent comment is claiming]
Conformity-based strategies only benefit from reductions in privacy when they’re based on actual conformity. If they’re based on pretend/outer conformity, then they get exposed with less privacy.
Ah, gotcha. Yeah that makes sense, although it in turn depends a lot on what you think happens when lack-of-privacy forces the strategy to adapt.
(note: the following comment didn’t end up engaging with a strong version of the claim, and I ran out of time to think through other scenarios.)
If you have a workplace (with a low generativity strategy) in which people are supposed to work 8 hours, but they actually only work 2 (and goof off the rest of the time), and then suddenly everyone has access to exactly how much people work, I’d expect one of a few things to happen:
1. People actually start working harder
2. People actually end up getting 2-hour work days (and then go home)
3. People continue working for 2 hours and then goofing off (with or without maintaining some kind of plausible fiction – e.g. I could easily imagine that even with full information, people still maintain the polite fiction that people work 8 hours a day, and only go to the effort of directing attention to those who goof off when they are a political enemy. “Polite” society often seems to be not just about concealing information but about actively choosing to look away)
4. People start finding things to do with their extra 6 hours that look enough like work (but are low effort / fun) that even though people could theoretically check on them and expose them, there’d still be enough plausible deniability that it’d require effort to expose them and punish them.
These options range in how good they are – hopefully you get 1 or 2, depending on how much more value those extra 6 hours could actually produce.
But none of them actually change the underlying fact that this business is pursuing a simple, collectivist strategy.
(this line of options doesn’t really interface with the original claim that simple collective strategies are easier under a privacy-less regime; I think I’d have to look at several plausible examples to build up a better model, and I ran out of time to write this comment before, um, returning to work. [hi habryka])
I think the main thing is that I can’t think of many examples where it seems like the active ingredient in the strategy is the conformity-that-would-be-ruined-by-information.
The most common sort of strategy I’m imagining is “we are a community that requires costly signals for group membership” (e.g. strict sexual norms, subscribing to and professing the latest dogma, giving to the poor), but costly signals are, well, costly, so there’s an incentive for people to pretend to meet them without actually doing so.
If it became common knowledge that nobody or very few people were “really” doing the work, one thing that might happen is that the community’s bonds would weaken or disintegrate. But I think these sorts of social norms would mostly just adapt to the new environment, in one of a few ways:
1. Come up with new norms that are more complicated, such that it’s harder to check (even given perfect information) whether someone is meeting them. I think this is what often happened in academia. (See jokes about postmodernism, where people can review each other’s work, but the work is sort of deliberately inscrutable, so it’s hard to see whether it says anything meaningful.)
2. People just develop a norm of not checking in on each other (cooperating for the sake of preserving the fiction), and scrutiny is only actually deployed against political opponents.
(The latter one at least creates an interesting mutually assured destruction thing that probably makes people less willing to attack each other openly, but humans also just seem pretty good at taking social games into whatever domain seems most plausibly deniable)
Only if you assume everyone loses an equal amount of privacy.