Any ideas to make FAI parallelize better? Or to make there be fewer UFAI resources without reducing economic growth?
If there were a sufficiently smart government with a sufficiently demonstrated track record of cluefulness, whose relevant officials seemed to genuinely grasp the idea of pro-humanity/pro-sentience/galactic-optimizing AI, the social ideals and technical impulse behind indirect normativity, and the fact that AI is incredibly dangerous, I would consider trusting them to be in charge of a Manhattan Project with thousands of researchers and enforced norms against information leakage, like those of government cryptography projects. This might not cure the required serial depth, but it would let FAI work parallelize more without leaking info that could be used to rapidly construct UFAI. I usually regard this scenario as a political impossibility.
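As a loose analogy for why adding researchers helps the parallelizable work but not the serial depth, here is an Amdahl's-law-style sketch; the serial fraction and headcounts are purely illustrative assumptions, not claims from the discussion above.

```python
def speedup(n_researchers: int, serial_fraction: float) -> float:
    """Amdahl's-law-style speedup: the serial part of a research program
    cannot be shortened by adding people working in parallel."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_researchers)

# Illustrative only: if half of the required insights must be found in
# sequence, even a thousand-person project tops out near 2x overall.
for n in (10, 100, 1000):
    print(n, round(speedup(n, serial_fraction=0.5), 2))
# -> 10 1.82, 100 1.98, 1000 2.0
```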
Anything that results in fewer resources going into AI specifically would mean fewer UFAI resources without reducing overall economic growth, but keep in mind that some of this research happens in financial firms pushing trading algorithms, and a lot more at Google, not just at places like universities.
To the extent that industry researchers publish less than academia (this seems particularly likely for financial firms, and to a lesser degree for Google), a hypothetical complete shutdown of academic AI research should reduce UFAI's parallelization advantage by 2+ orders of magnitude, since presumably the largest industrial UFAI teams are much smaller than the entire academic AI research community. Even reducing academic funding for AI only somewhat should translate fairly well into less parallel UFAI development.
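For concreteness, here is a minimal back-of-envelope sketch of where a "2+ orders of magnitude" figure could come from. Both headcounts are illustrative assumptions, not sourced estimates, and the model crudely assumes the parallelization advantage scales with the number of researchers whose results feed into the same development effort.

```python
import math

# Back-of-envelope check on the "2+ orders of magnitude" figure.
# Both headcounts below are illustrative assumptions, not sourced numbers.
academic_ai_researchers = 30_000   # assumed size of the worldwide academic AI community
largest_industrial_team = 200      # assumed size of the largest closed industrial team

ratio = academic_ai_researchers / largest_industrial_team
print(f"~{ratio:.0f}x, i.e. about {math.log10(ratio):.1f} orders of magnitude")
# -> ~150x, i.e. about 2.2 orders of magnitude
```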