I don’t think the argument is very strong: that people smart enough to create an AGI capable of taking over the universe in a matter of hours could be dumb enough not to recognize the dangers posed by such an AGI. To fortify that argument, you would have to show either that the people working for SIAI are vastly more intelligent than most AGI researchers, in which case they would be more likely to build the first AGI, or that creating an AGI capable of explosive recursive self-improvement demands much less intelligence and insight than is necessary to recognize risks from AI.
One of the major messages which I think you should be picking up from the sequences is that it takes more than just intelligence to consistently separate good ideas from bad ones.