The Human Case:
A lot of coordination work. I have a theory that humans prefer mutual information (radical, I know), so a surprising-to-other-people amount of work goes into things like implementing global holidays, a standard global educational curriculum, ensuring people see direct representatives of the World-Emperor at least a few times during their lives, etc. This is because shared experiences generate the most mutual information.
I feel like for this to succeed it has to be happening during the takeover already. I cannot give any credence to the Cobra Commander method of global takeover by threatening to use a doomsday machine, so I expect simultaneous campaigns of cultural, military, and commercial conquest to be the conditions under which these things get developed.
A corollary of promoting internal coordination is disrupting external coordination. I imagine this as pretty normal intelligence and diplomatic activity for the most part, because that is already what those organizations do. The big difference is that the true goal is taking over the world, which means the proximate goal is getting everyone to switch from coordinating externally to coordinating with the takeover. This universality implies different priorities and a very different timescale than most conflicts: namely, it allows an unusually deep base of shared assumptions and objectives among the people doing the intelligence/diplomatic work. The basic strategy is to produce the biggest coordination differential possible.
By contrast, I feel like an AI can achieve world takeover with no one (or few people) the wiser. We don’t have to acknowledge our new robot overlords if they are in control of the information we consume, advise all the decisions we make, and suggest all the plans that we choose from. This is still practical control over almost all outcomes. Which of course means I will be intensely suspicious if the world is suddenly, continuously getting better across all dimensions at once, and most suspicious if high ranking people stop making terrible decisions within the span of a few years.
Follow-up question: do you know where your models/intuitions on this came from? If so, where?
(I ask because this answer comes closest so far to what I picture, and I’m curious whether you trace the source to the same place I do.)
Yes. The dominant ones are:
Military experience: I was in the infantry for 5 years and deployed twice. I gained a deep appreciation for several important things: the amount of human effort that goes into moving huge quantities of people and stuff from A to B; the growth and decay of a team's coordination and commitment; the mind-blowingly enormous gap between how a strategy looks from on high and how it looks on the ground (which mostly means looking at failure a lot).
Jaynes's macroscopic prediction paper: I am extremely liberal in how I apply its insights, but the relevant intuition here is "remember the phase volume in the future," which winds up being the key for me to think about the long term in a way that can be operationalized. As an aside, this tends to break down into two heuristics: one is to sometimes do stuff that generates options, and the other is to weigh closing options negatively when choosing what to do (see the sketch after this list).
Broad history reading: most germane are those times when some conqueror seized territory and then had to hustle back two years later when it rebelled. Or the times when one of the conquering armies switched sides or struck out on its own. Or the Charge of the Light Brigade. History offers a huge number of high-level coordination and alignment failures.
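To make that second heuristic concrete, here is a minimal sketch of what "weigh closing options negatively" could look like as a scoring rule. This is my own toy formalization, not anything from Jaynes or the discussion above; the Action fields, the example actions, and the weights are all made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    immediate_value: float   # payoff if taken right now
    options_opened: int      # future choices this action makes available
    options_closed: int      # future choices this action forecloses

def score(action: Action, option_weight: float = 1.0) -> float:
    # Heuristic: reward generating options, penalize closing them.
    return (action.immediate_value
            + option_weight * action.options_opened
            - option_weight * action.options_closed)

candidates = [
    Action("commit everything to the assault now", 5.0, 0, 4),
    Action("secure logistics and keep forces flexible", 1.0, 3, 0),
]

best = max(candidates, key=score)
print(best.name)  # -> "secure logistics and keep forces flexible"
```

The point of the toy rule is only the shape of the comparison: when two plans look similar on immediate payoff, the tie-breaker goes to whichever one leaves more of the future phase volume reachable.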
The most similar established line of thinking to the way I approach this stuff is Boyd's OODA loop, which is also my guess for where you trace the source (did I guess right?). I confess I never actually think in terms of OODA loops. Mostly I think at a lower level, with rules like "be sure you can act fast" and "be sure you can achieve every important type of objective."
Yup, exactly.