Phil: It seems to me that the above qualitative analysis is sufficient to strongly suggest that six months is an unlikely high-end estimate for the time required for take-off, but even if take-off took six months, I still wouldn't expect humans to be able to react. The AGI would probably be able to remain hidden until it was in a position to create a singleton extremely suddenly.
Aron: It's rational to plan for the most dangerous survivable situations. However, it doesn't really make sense to claim that we can build computers that are superior to ourselves but that they can't improve themselves, since making them superior to us blatantly involves improving them. That said, yes, it is possible that some other path to the singularity could produce transhuman minds that can't quickly self-improve and which we can't quickly improve, for instance drug-enhanced humans, in which case hopefully those transhumans would share our values well enough that they could solve Friendliness for us.