It’s not an arbitrary reference point. For a singularity/AI-goes-FOOM event to occur, the agent needs sufficient intelligence and capability to modify itself in a recursive self-improvement process. A chimpanzee is not smart enough to do this. We’ve posited that at least some human beings are capable of creating a more powerful intelligence, either through AGI or IA. Therefore the important cutoff, where a FOOM event becomes possible, is somewhere in between those two reference levels (the chimpanzee and the circa-2013 rationalist AGI/IA researcher).
Despite my careless phrasing, this isn’t some floating standard that depends on circumstances (having to be smarter than your creators). An AGI or IA simply has to meet some objective minimum level of rationalist and technological capability to start the recursive self-improvement process. The problem is that our understanding of the nature of intelligence is not developed enough to predict where that hard cutoff is, so we’re resorting to making qualitative judgements. We think we are capable of starting a singularity event through either AGI or IA means; therefore anything smarter than we are (“superhuman”) would be equally capable. This is a sufficient, but not necessary, condition—making humans smarter through IA doesn’t mean that an AGI suddenly has to be that much smarter to start its own recursive self-improvement cycle.
My point about software was that an AGI FOOM could happen today. There are datacenters at Google and research supercomputers powerful enough to run a recursively self-improving “artificial scientist” AGI. But IA technology capable of going super-critical essentially requires molecular nanotechnology or equivalently powerful technology (to replace neurons), and/or mind uploading. You won’t get an IA FOOM until you can remove the limitations of biological wetware, and those technologies are at best multiple decades away.