In this 11-paragraph post, the last two paragraphs turn the focus to incumbents, so I thought that expanding on the topic of incumbents was reasonable, especially because nuclear weapons history is highly relevant to the overall theme, and the Revolt of the Admirals was both 1) incumbent-focused, and 2) an interesting and little-known episode from the emergence of the nuclear weapons paradigm.
My thinking on the current situation with AI-powered supermanipulation is that it’s highly relevant to AI safety, both because it’s reasonably likely to produce a world that’s hostile to the AI safety community, and because it’s an excellent and relevant example of technological overhang. Anyone with a SWE background can, if they look, quickly verify that SOTA systems are orders of magnitude more powerful than needed to automatically and continuously track and research deep structures like belief causality. What I’m arguing is that access to things like social media scrolling data, not model capability, is the main bottleneck for intense manipulation capabilities; and since that data is already collected at scale, such capabilities are plausibly already prevalent, and therefore relevant.
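To make the “anyone with a SWE background can verify” claim concrete, here is a minimal Python sketch of the kind of pipeline I have in mind: it takes per-user engagement events (the scrolling-data bottleneck) and asks an off-the-shelf model to infer which beliefs the pattern reveals and which appear to causally support which others. The `call_llm` function, the `EngagementEvent` schema, and the prompt are all hypothetical placeholders of my own, not any real platform’s or vendor’s API; the point is only that the orchestration layer is trivial once the data exists.

```python
import json
from dataclasses import dataclass


@dataclass
class EngagementEvent:
    """One observed interaction: what the user saw and how they responded.
    Hypothetical schema, standing in for real scrolling/engagement logs."""
    post_text: str
    dwell_seconds: float
    action: str  # e.g. "scrolled_past", "liked", "shared"


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to any SOTA hosted chat model.
    Stubbed with a canned reply so the sketch runs without credentials."""
    return json.dumps({
        "beliefs": ["example belief"],
        "causal_edges": [["example belief", "downstream belief"]],
    })


def infer_belief_graph(events: list[EngagementEvent]) -> dict:
    """Ask the model which beliefs the engagement pattern most likely
    reveals, and which beliefs appear to causally drive which others."""
    transcript = "\n".join(
        f"- {e.action} ({e.dwell_seconds:.0f}s dwell): {e.post_text}"
        for e in events
    )
    prompt = (
        "Given this user's engagement log, list the beliefs it most likely "
        "reveals, and the causal edges between them (which belief, if "
        "changed, would shift which others). Reply as JSON with keys "
        "'beliefs' and 'causal_edges'.\n" + transcript
    )
    return json.loads(call_llm(prompt))


if __name__ == "__main__":
    events = [
        EngagementEvent("Op-ed arguing X causes Y", 45.0, "shared"),
        EngagementEvent("Debunking of the X-causes-Y claim", 2.0, "scrolled_past"),
    ]
    print(infer_belief_graph(events))
```

Nothing in this loop requires new research or unusual engineering; the only input it can’t trivially obtain is the engagement log itself, which is exactly why I take the data, rather than the model, to be the bottleneck.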
I haven’t been using LessWrong for a seventh of the time that you have, so I don’t know what kinds of bizarre pretenders and galaxy-brained social-status grabs have played out over the years. But I still think that talking about technological change, and sorting bad technological forecasts from good ones, is a high-value activity, one that LW has prioritized and gotten good at, but could get better at still.