As well as being fast/violent/disruptive, these changes tend to not be good for incumbents.
This also probably happened in the US after the emergence of nuclear weapons, with the Revolt of the Admirals. I’m pretty sure that ~three top US generals died under mysterious circumstances in the period after WW2, before US-Soviet conflict heated up (my research skills weren’t as good when I looked this up 3 years ago).
I wouldn’t be surprised if a large portion of the NSA isn’t currently aware of the new psychology paradigm facilitated by the combination of 2020s AI and the NSA’s intercepted data (e.g. enough video calls to do human lie-detection research based on facial expressions), because they don’t have the quant skills to comprehend it, even though it seems like this tech is already affecting the US-China conflict in Asia.
Can you name the generals you mean so that it’s easier to follow your claim?
Yes, although I did this research ~3 years ago as an undergrad, filed it away, and I’m not sure if it meets my current epistemic/Bayesian standards; I didn’t read any books or documentaries, I just read tons of Wikipedia articles and noted that the circumstances of these deaths stood out. The three figures who died under suspicious circumstances are:
James Forrestal, the first Secretary of Defense (a civilian, but with strong ties to the Navy during its decline), was ousted and died by suicide at a military psychiatric hospital during the Revolt of the Admirals, at age 57 in May 1949. This is what drew my attention to military figures from the period.
George Patton, the top general of the US occupation forces in Germany, opposed Eisenhower’s policy of denazifying senior government officials, out of concern that it would destabilize the German administration (which he correctly predicted would become West Germany and oppose the nearby Soviet forces). He was ousted, and died in a car accident less than 6 months later, in December 1945.
Hoyt Vandenberg, Chief of Staff of the Air Force, retired immediately after opposing Air Force budget cuts in 1953, when the Air Force had recently become an independent branch (split off from the Army) and a key part of the nuclear arsenal. He died of cancer 9 months later, at age 55, in April 1954.
At the time (2020), it didn’t occur to me to look into base rates (though the Revolt of the Admirals and interservice conflict over nukes were clearly NOT base-rate conditions). I also thought at the time that this was helpful for understanding the modern US military, but I was wrong: WW2 was fought with intent to kill; its leaders assumed diplomacy would inevitably fail for WW3, just as it had for the last two world wars, and planned to carpet-bomb Russia with nukes the way Berlin and Tokyo were carpet-bombed with conventional bombs.
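The base-rate check mentioned above can be sketched in a few lines. All numbers here are illustrative assumptions of mine, not historical data; the point is only to show what the check would look like:

```python
# Hypothetical base-rate sanity check: how many deaths would we *expect*
# among senior officers over a post-WW2 window, absent any foul play?
# Assumes deaths are independent and occur at a constant annual rate.

def expected_deaths(n_officers, annual_mortality, years):
    """Return (expected deaths, per-person death probability) over the window."""
    p_window = 1 - (1 - annual_mortality) ** years
    return n_officers * p_window, p_window

# Assumed (illustrative): ~100 flag/general officers, ~2%/yr mortality
# for men in their late 50s in the 1940s, an 8-year window (1945-1953).
expected, p_person = expected_deaths(100, 0.02, 8)
print(f"per-person: {p_person:.3f}, expected deaths: {expected:.1f}")
# → per-person: 0.149, expected deaths: 14.9
```

Under these made-up but plausible parameters, three deaths in the window would be unremarkable on its own, which is why the suspicion has to rest on the circumstances of the deaths rather than the count.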
Modern military norms very clearly aim to maneuver within deterrence rather than to conquer, in large part because game theory, which formalized deterrence, wasn’t developed until the 1950s. Governments and militaries are now out-of-distribution (OOD) relative to that era in plenty of other major ways as well.
This doesn’t seem like it actually has anything to do with the topic of the OP, unless you’re proposing that the US military used nuclear weapons on each other. Your second paragraph is even less relevant: it’s about something you claim to be a transformative technological advance but that is not generally considered one by other people, and it doesn’t describe any actual transformation this advance has brought about; it merely conjectures that some people in a government agency might not have figured it out yet.
I do not think that trying to wedge your theories into places where they don’t really fit will make them more likely to find agreement.
During the start of the Cold War, the incumbents in the US Navy ended up marginalized and losing their status within the military, resulting in the Revolt of the Admirals. The Navy was pursuing naval-based nuclear weapons delivery in order to remain relevant in the new nuclear paradigm, driven by its long-standing interservice rivalry (by then with the newly independent Air Force), but the plan was rejected by central leadership as lacking sufficient strategic value relative to its cost. As a result, the incumbents lost out; Roko explicitly referred to incumbents facing internal conflict: “European aristocracy would rather firearms had never been invented.”
Regarding the prediction and steering of human thought and behavior, my thinking is that LessWrong was ahead of the curve on crypto, AI, and COVID by seriously evaluating change based on math rather than deferring to prevailing wisdom. In the case of 2020s AI influence technologies, intense impression-hacking capabilities are probably easy for large companies and agencies to notice and acquire, so long as they have large secure datasets; the math behind this is solid (e.g. tracking large networks of belief causality), and modern governments and militaries are highly predisposed to investing in influence systems.
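To make “tracking large networks of belief causality” concrete, here is a toy sketch of the kind of structure I mean. The node names, edge weights, and linear-propagation rule are all invented for illustration; real systems would be far more sophisticated:

```python
# Toy belief-causality network: beliefs are nodes, and a directed edge
# (src, dst) with weight w means a shift in belief src nudges belief dst
# by w times that shift. Entirely hypothetical structure and numbers.

influences = {
    ("distrust_media", "distrust_experts"): 0.6,
    ("distrust_experts", "vaccine_hesitancy"): 0.5,
}

def downstream_effect(shift, influences, rounds=2):
    """Propagate an exogenous belief shift forward through the edges,
    one hop per round, accumulating the induced shifts."""
    state = dict(shift)     # total accumulated shift per belief
    frontier = dict(shift)  # the newest contributions to push forward
    for _ in range(rounds):
        nxt = {}
        for (src, dst), w in influences.items():
            if src in frontier:
                nxt[dst] = nxt.get(dst, 0.0) + w * frontier[src]
        for belief, delta in nxt.items():
            state[belief] = state.get(belief, 0.0) + delta
        frontier = nxt
    return state

result = downstream_effect({"distrust_media": 1.0}, influences)
# → a unit shift in "distrust_media" induces 0.6 in "distrust_experts"
#   and 0.3 in "vaccine_hesitancy" after two hops
```

The point is that even this crude linear model is computationally trivial; the hard part is estimating the edges, which is exactly what large behavioral datasets would enable.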
If I’ve committed a faux pas or left a bad impression, then of course that’s a separate matter, and I’m open to constructive criticism on that. If I’m unknowingly hemorrhaging reputation points left and right, then that will naturally cause serious problems.
Roko’s post is not about the general problem of incumbents facing internal conflict (which happens all the time for many reasons) but about a specific class of situation where something is ripe to be taken over by a person or group with new capabilities, and those capabilities come along and it happens.
While nuclear weapons represented a dramatic gain in capabilities for the United States, what you’re talking about isn’t the US using its new nuclear capabilities to overturn the world order, but internal politicking within the US military. The arrival of nuclear weapons didn’t represent an unprecedented gain in politicking capabilities for one set of US military officials over another. It is not helpful to think about the “revolt of the admirals” in terms of a “(US military officials who could lose influence if a new kind of weapon comes along) overhang”, so far as I can tell. There’s no analogy here, just two situations that happen to be describable using the words “incumbents” and “new capabilities”.
My thinking on your theories about psychological manipulation is that they don’t belong in this thread, and I will not be drawn into an attempt to make this thread be about them. You’ve already made four posts about them in the last ~2 weeks and those are perfectly good places for that discussion.
In this 11-paragraph post, the last two paragraphs turn the focus to incumbents, so I thought that expanding on the topic of incumbents was reasonable, especially because nuclear weapons history is highly relevant to the overall theme, and the Revolt of the Admirals was both 1) incumbent-focused, and 2) an interesting fact about the emergence of the nuclear weapons paradigm that few people are aware of.
My thinking on the current situation with AI-powered supermanipulation is that it’s highly relevant to AI safety, because it’s reasonably likely to produce a world that’s hostile to the AI safety community, and it’s also an excellent and relevant example of technological overhang. Anyone with a SWE background can, if they look, immediately verify that SOTA systems are orders of magnitude more powerful than needed to automatically and continuously track and research deep structures like belief causality. I’m arguing that things like social-media scrolling data are the main bottleneck for intense manipulation capabilities; since that data is already being collected at scale, such capabilities are likely already prevalent, and therefore relevant.
I haven’t been using LessWrong for a seventh of the amount of time that you have, so I don’t know what kinds of bizarre pretenders and galaxy-brained social status grabs have happened over the years. But I still think that talking about technological change, and sorting bad technological forecasts from good ones, is a high-value activity that LW has prioritized and gotten good at, but can also do better.