Your first three bullet points seem to imply that entities like the NSA should be expected to have research programmes dedicated to things like pandemics and asteroid strikes. That seems unlikely to me; why would the NSA or CIA or whatever be the right venue for such research? The only advantage of doing it in-house rather than letting organizations dedicated to health and space handle it would be if somehow there were some nation-specific interests optimized by keeping their research secret. Which seems unlikely, because if human life is wiped out by an asteroid strike or something then the distinction between US interests and PRC interests will be of … limited importance.
Now, would we expect unfriendly AI research to be any different? I can think of three ways it might be. (1) Maybe an organization like the NSA has more in-house expertise related to AI than related to asteroid strikes. (2) There aren’t large-scale (un)friendly AI research efforts out there to delegate to, whereas agencies like NASA and CDC exist. (3) If sufficiently-friendly AI can be made, it could be harnessed by a particular nation, so progress towards that goal might be kept secret. Of these, #1 might be right but I still think it unlikely that intelligence agencies have enough concentration of relevant experts to be good places for (U)FAI research; #2 is probably true but it seems like the way to fix it would be for the nation(s) in question to fund (U)FAI research if their experts say it’s worth doing; #3 might be correct, but hold onto that thought for a moment.
> LW activists may have an interest in ‘penetrating’ intelligence agencies
Jiminy. Are you seriously suggesting that an effective way to enhance AI friendliness research would be an attempt to compromise the security of national intelligence agencies? That seems more likely to be an effective way to get killed, exiled, thrown into jail for a long time, etc.
Let me at this point remind you of the conclusion a couple of paragraphs ago: if in fact there is (U)FAI research going on in intelligence agencies, it’s probably because AI is seen as a possible advantage one nation can have over another. So your mental picture at this point should not be of someone like Edward Snowden extracting information from the NSA, it should be of someone trying to smuggle secret information out of the Manhattan Project. (Which did in fact happen, so I’m not claiming it’s impossible, but it sounds like a really unappealing job even aside from petty concerns about treason etc.)
I notice that your conclusion is that for some people, attempting to breach intelligence agencies’ security in order to extract information about (U)FAI research “may be a poor use of one’s time”. I can’t disagree with this, but it seems to me that something much stronger is true: for anyone, attempting to do that is almost certainly a really bad use of one’s time.