A big issue here is that both AI risk and great-power diplomacy are fairly technical subjects, and missing a single gear in your mental model can produce wildly different implications. A miscalculation gets amplified as you layer more calculations on top of it.
AI safety probably requires around 20 hours of study to know whether you could become an alignment researcher, and to hold a professional stance. The debate itself seems unlikely to be resolved soon, but it is coherent, publicly available, and thoroughly discussed. Understanding nuclear war dynamics (e.g. treaties), by contrast, is not so open: it requires reading the right books, recommended by an expert you trust (not a pundit), rather than books picked at random from a pool full of wrong or bad ones. I recommend the first two chapters of Thomas Schelling's 1966 Arms and Influence, but only because the dynamic it describes is fundamental and almost guaranteed to be true, most generals have probably had it in mind for ~60 years, it is merely a single layer of the overall system (e.g. it says nothing about spies), and it's only two chapters. Likewise, Raemon has for several years considered Scott Alexander's Superintelligence FAQ the best layman's introduction to AI safety that anyone can send to new people.
I'm optimistic that lots of people can get a handle on this and come to agreement here, because soon basically anyone will be able to reach a professional stance on either of these issues after only ~20 hours of work. Anyone can go to a handful of EA events, collect a notecard full of names of nuclear war experts, cold-email them, get the right books as a result, and spend 10-20 hours reading them (they're interesting). When the LessWrong team finishes putting together the ~20 hours of AI alignment studying, anyone will be able to take a crack at that too. We're on the brink of finally having unambiguous, clear, and fun/satisfying/interesting paths that let anyone get where they need to be.
This will be extremely helpful to me, and to less-focused but highly-eager-to-help people like me, with respect to both the technical and the governance sides.