This assumes that implementing law/bureaucracy internally at lower levels than the inner council is insufficient for detecting effective werewolf behavior. Certainly, it’s harder, but it doesn’t follow that it isn’t possible.
My core thesis here is that if you have lower-level managers who are competent at detecting werewolves, you will be more powerful if you instead promote those people to a higher level, so that you can expand and gain more territory. You can choose not to do that, but then another kingdom which does promote its competent mid-level agents to expansionary goals will overtake you and probably swallow you.
I do think that the eventual long-term victory for the good guys looks something like “an inner council with enough global power, who won’t fall to whatever forces seem to eventually kill empires by default, and who then have so much free energy that they can slowly improve the quality of their mid-level systems.”
But you don’t get to succeed fully at that until you are actually in a position to win. (And winning is [at the very least] on the universal scale, if not the multiverse scale, so you probably don’t get to win in a domain that humans feel comfortable in.)
[Acknowledging that that paragraph has loads of complex unstated assumptions, which I think are beyond the scope of this thread for me to argue for. I have some plans to write up a post about my current understanding and confusions about the Long Game.]
That said, because the world is a mixed game, and we currently live in the Dream Time, and some people just actually want to be mid-level managers despite it being strategically better for them to help conquer new territory, and some people have the skills to detect werewolves but lack other skills, which makes them inappropriate to promote, we do have a situation where “utopia has started to arrive early, but is unevenly distributed.”
This sort of maps to “you can do it, but it’s hard.” But actually, it’s only possible-and-strategically-advisable if you have managers who are limited in their growth potential (i.e. people who for some reason you can’t use for expansionary purposes). The good guys would win more quickly if their mid-level-people-who-are-perfectly-good-at-werewolf-spotting also gained other skills that allowed them to expand.
(You obviously need at least some werewolf-spotting skills at the mid levels; it’s just not strategically advisable to put your best people there.)
I should note that I’m not very confident about the strategy here. (I’m speaking in the stereotypical more-confident-than-I-have-a-right-to-be rationalist voice that gets criticized sometimes. But I’m, like, less than 50% confident of any of my claims here. This whole model has the plurality, but not the majority, of my confidence.)
I think there are probably ways to build a pocket of power and trustworthiness, which does get absorbed by the powerful empires that rise and fall around it, yet doesn’t lose its soul in the absorption process. Rather than trying to compete with empires on their own terms, make sure you either look uninteresting or illegible to empires, or build good relationships with them so you get to keep surviving. (Hmm. The illegible strategy maps to the Roma, the good-relationship strategy maps to the Amish?)
Then, slowly expand. Optimize for lasting longer than empires at the expense of power. Maybe you incrementally gain illegible power and eventually get to win on the global scale. I think this would work fine if you don’t have important time-sensitive goals on the global scale.
I’ll add Israelites / Jews to the list here. Not the same kinds of good relations as the Amish—we’re more of a target—but we seem to be able to survive in a much wider variety of political environments, and with a different kind of long-run ambition—and we’ve been around for longer.
Getting conquered and exiled by the Babylonians precipitated a rapid change in strategy, from territorial integrity around a central physical cultic site to memetic integrity oriented around a set of texts. This hasty transition worked well enough that when the Persians conquered the Babylonian empire, the Jews were able to play court politics skillfully enough to (a) avoid getting murdered for retaining a group identity, (b) return to their original territory, and (c) get permission to rebuild their physical cultic site, all without having to fight at a scale that could take on the empire.
By the time of the Roman exile, the portable cultural tech had been improved enough to sustain something recognizable for multiple millennia, though the pace of progress also slowed by quite a lot.
There was also a prior transition, from almost exclusively nonhierarchical distributed governance to having a king, again partially in response to external pressure.
I agree with the strategy in this comment, for some notions of “absorbed”; being absorbed territorially or economically might be fine, but being absorbed culturally/intellectually probably isn’t. Illegibility and good relationships seem like the most useful approaches.
Nod. In modern times, I’d consider a relevant strategy to be “how to get your company purchased by a larger company in a way that lets your company mostly keep doing what it’s doing.”
Example: DeepMind?
I have a stub post about this in my drafts, but the sources are directly relevant to this section and talk about underlying mechanisms, so I’ll reproduce it here:
~~~
The blog post is: Francisco Franco, Robust Action, and the Power of Non-Commitment
The paper is: Robust Action and the Rise of the Medici
Accumulation of power, and longevity in power, are largely a matter of keeping options open
In order to keep options as open as possible, commit to as few explicit goals as possible
This conflicts with our goal-orientation
Sacrifice longevity in exchange for explicit goal achievement: be expendable
Longevity is therefore only a condition of accumulation—survive long enough to be able to strike, and then strike
Explicit goal achievement does not inherently conflict with robust action or multivocality, but probably does put even more onus on calculating the goal well beforehand
~~~
Robust action and multivocality are sociological terms. In a nutshell, the former means ‘actions which are very difficult to interfere with’ and the latter means ‘communication which can be interpreted different ways by different audiences’. Also, it’s a pretty good paper in its own right.
Either it’s possible to produce people/systems that detect werewolves at scale, or it isn’t. If it isn’t, we have problems. If it is, you have a choice of how many of these people to use as lower-level managers versus how many to use for expansion. It definitely isn’t the case that you should use all of them for expansion; otherwise, your existing territories become less useful/productive and you lose control of them. The most competitive empire will create werewolf detectors at scale and use them for lower management in addition to expansion.
Part of my thesis is that, if you live in a civilization dominated by werewolves and you’re the first to implement anti-werewolf systems, you get a big head start, and you don’t have to worry about direct competitors (who also have anti-werewolf systems but who want to expand indefinitely/unsustainably) for a while; by the time they show up, you have a large lead.
My current guess is that many organizations already have anti-werewolf systems, but it’s a continuous anti-inductive process, and everyone is deploying their most powerful anti-werewolf social tech at the highest levels (which consist of a small number of people). So the typical observer notices “jeez, there are lots of werewolves around, why isn’t anyone doing anything about this?” when actually, yes, people are doing something about it; they’re just keeping their anti-werewolf tech illegible so that it’s hard for werewolves to adapt to it. This just isn’t much benefit to the average person.
I also assume anti-werewolf tech is only one of many things you need to succeed. If you were to develop a dramatic advance in anti-werewolf tech, it’d give you an edge if you are also competent at other things. And the problem is that most of the things you need are anti-inductive – at least the anti-werewolf tech, and having a core product that is better than the competition. And many other good business practices are, if not anti-inductive, at least quite hard.
If it were possible to develop anti-werewolf tech that is robust to being fully public about how it works, I agree that’d be a huge advance. I’m personally skeptical that this will ever work as a silver bullet* though.
*(lol at accidental werewolf metaphor)
To be clear, I’m very glad you’re working on anti-werewolf tech; I think it’s one of the necessary things to have good guys working on. I just don’t expect it to translate into a decisive strategic advantage.
Or to put it another way:
My prediction is that “we have problems”, and that the solutions will necessarily involve dealing with those problems for a very long time, the hard way.
(I’d also reword it to “it’s either possible to produce werewolf detection at a scale and reliability that outpaces werewolf evolution, or it isn’t.” Which I think maps pretty cleanly to medicine – we discovered antibiotics, which were really powerful for a while, but which eventually run the risk of stopping working.)
I agree, it’s necessary to reach at least the standard of mediocrity on other aspects of e.g. running a business, and often higher standards than that. My belief isn’t that anti-werewolf tech immediately causes you to win, so much as that it expands your computational ability to the point where you are in a much better position to compute and implement the path to victory, which itself has many object-level parts to it, and requires adjustment over time.
Nod. I think we may disagree on some of the nuances of the ways reality is likely to turn out, but are in rough agreement on the broad strokes (and in any case I think I’ve run out of cached beliefs that are relevant to the conversation, and don’t expect to make progress in the immediate future on figuring out the details of those nuances).