For inventing nanotechnology, the example usually given is AlphaFold 2.
For killing everyone in the same instant with nanotechnology, Eliezer often references Nanosystems by Eric Drexler. I haven’t read it, but I expect the insight is something like “Engineered nanomachines could do a lot more than biological ones, which are limited to designs with a clear evolutionary path from chemicals that can form randomly in the primordial ooze of Earth.”
For how a system could get that smart, the canonical idea is recursive self-improvement (i.e. an AGI capable of learning AGI engineering could design a better version of itself, which could in turn design a still better version, and so on, up to whatever limit applies). But more recent history in machine learning suggests you might be able to go from sub-human to overwhelmingly super-human just by giving a system a few orders of magnitude more compute, without any design changes.
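To make the compounding dynamic concrete, here is a toy sketch (my own illustration, not from Eliezer or any source; every number in it is made up) of capability growing multiplicatively each “generation” until it hits an assumed ceiling:

```python
# Toy model of recursive self-improvement: each generation designs a
# successor `gain_per_step` times as capable, until an assumed limit.
# All numbers are invented for illustration only.
def recursive_self_improvement(capability: float,
                               gain_per_step: float = 1.5,
                               limit: float = 1e6) -> float:
    generation = 0
    while capability * gain_per_step <= limit:
        capability *= gain_per_step
        generation += 1
        print(f"generation {generation}: capability {capability:,.1f}")
    return capability

recursive_self_improvement(1.0)
```

The point is just the shape of the curve: steady multiplicative gains look slow early and explosive late, which is why debates about takeoff speed come down to the gain per step and where the limit actually sits.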
This is the sequence post on it: https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message. It’s quite a fun read (to me), and it should explain why something smart that thinks at transistor speeds would be able to figure things out.