This doesn’t seem like it actually has anything to do with the topic of the OP, unless you’re proposing that the US military used nuclear weapons on each other. Your second paragraph is even less relevant: it’s about something you claim to be a transformative technological advance but that is not generally considered one by other people, and it doesn’t describe any actual transformation the advance has brought about; it merely conjectures that some people in a government agency might not have figured it out yet.
I do not think that trying to wedge your theories into places where they don’t really fit will make them more likely to find agreement.
At the start of the Cold War, the incumbents in the US Navy were marginalized and lost status within the military, culminating in the Revolt of the Admirals. The Navy had been pursuing carrier-based nuclear weapons delivery in order to stay relevant in the new nuclear weapon paradigm amid its rivalry with the newly independent Air Force, but central leadership rejected the plan as lacking sufficient strategic value relative to its cost. As a result, the incumbents lost out; Roko explicitly referred to incumbents facing internal conflict: “European aristocracy would rather firearms had never been invented.”
Regarding human thought/behavior prediction/steering, my thinking is that LessWrong was ahead of the curve on Crypto, AI, and Covid by seriously evaluating change based on the math rather than deferring to prevailing wisdom. In the case of 2020s AI influence technologies, intense impression-hacking capabilities are probably easy for large companies and agencies to notice and acquire so long as they have large, secure datasets; the math behind this (e.g., tracking large networks of belief causality) is solid; and modern governments and militaries are strongly predisposed to invest in influence systems.
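To make concrete what I mean by the math being solid (this is a minimal toy sketch with synthetic data, not a description of any actual deployed system): even a simple Granger-style lagged regression over per-user belief time series recovers a planted influence graph. Everything below — the users, the data, the threshold — is hypothetical; real systems would work from far richer signals.

```python
# Toy sketch: inferring a belief-influence graph from per-user belief
# time series via lagged regression. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_steps = 5, 400

# Synthetic ground truth: user 0 influences users 1 and 2 with a one-step lag.
beliefs = np.zeros((n_users, n_steps))
beliefs[:, 0] = rng.normal(size=n_users)
for t in range(1, n_steps):
    noise = rng.normal(scale=0.5, size=n_users)
    beliefs[:, t] = 0.5 * beliefs[:, t - 1] + noise
    beliefs[1, t] += 0.8 * beliefs[0, t - 1]  # planted edge 0 -> 1
    beliefs[2, t] += 0.8 * beliefs[0, t - 1]  # planted edge 0 -> 2

# Estimate pairwise lagged influence: regress user j's belief at time t
# on user i's belief at t-1, controlling for j's own past belief.
influence = np.zeros((n_users, n_users))
for i in range(n_users):
    for j in range(n_users):
        if i == j:
            continue
        X = np.column_stack([beliefs[i, :-1], beliefs[j, :-1],
                             np.ones(n_steps - 1)])
        y = beliefs[j, 1:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        influence[i, j] = coef[0]  # weight on i's lagged belief

# Large weights recover the planted influence structure.
for i in range(n_users):
    for j in range(n_users):
        if abs(influence[i, j]) > 0.3:
            print(f"user {i} -> user {j}: weight {influence[i, j]:.2f}")
```

The point is just that the core inference step is decades-old statistics; the hard part is the data, not the math.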
If I’ve committed a faux pas or left a bad impression, then of course that’s a separate matter, and I’m open to constructive criticism on that. If I’m unknowingly hemorrhaging reputation points left and right, that will naturally cause serious problems.
Roko’s post is not about the general problem of incumbents facing internal conflict (which happens all the time for many reasons) but about a specific class of situation where something is ripe to be taken over by a person or group with new capabilities, and those capabilities come along and it happens.
While nuclear weapons represented a dramatic gain in capabilities for the United States, what you’re talking about isn’t the US using its new nuclear capabilities to overturn the world order, but internal politicking within the US military. The arrival of nuclear weapons didn’t represent an unprecedented gain in politicking capabilities for one set of US military officials over another. It is not helpful to think about the Revolt of the Admirals in terms of a “(US military officials who could lose influence if a new kind of weapon comes along) overhang”, so far as I can tell. There’s no analogy here, just two situations that happen to be describable using the words “incumbents” and “new capabilities”.
My thinking on your theories about psychological manipulation is that they don’t belong in this thread, and I will not be drawn into an attempt to make this thread be about them. You’ve already made four posts about them in the last ~2 weeks and those are perfectly good places for that discussion.
In this 11-paragraph post, the last two paragraphs turn the focus to incumbents, so I thought that expanding on the topic of incumbents was reasonable, especially because nuclear weapons history is highly relevant to the overall theme, and the Revolt of the Admirals was both 1) incumbent-focused, and 2) an interesting fact about the emergence of the nuclear weapons paradigm that few people are aware of.
My thinking on the current situation with AI-powered supermanipulation is that it’s highly relevant to AI safety, because it’s reasonably likely to produce a world that’s hostile to the AI safety community, and it’s also an excellent and relevant example of technological overhang. Anyone with a SWE background can, if they look, immediately verify that SOTA systems are orders of magnitude more powerful than needed to automatically and continuously track and research deep structures like belief causality. I’m arguing that things like social media scrolling data are the main bottleneck for intense manipulation capabilities; since that data is already collected at scale, this strongly indicates that such capabilities are already prevalent and therefore relevant.
I’ve been using LessWrong for less than a seventh of the time that you have, so I don’t know what kinds of bizarre pretenders and galaxy-brained social-status grabs have happened over the years. But I still think that talking about technological change, and sorting good technological forecasts from bad ones, is a high-value activity that LW has prioritized and gotten good at, but can also do better at.