Furthermore, even if you suppose that Foom is likely, it’s not clear where the threshold for Foom lies. Could a sub-human-level AI Foom? What about human-level intelligence? Or do we need super-human intelligence? Do we have good evidence for where the Foom threshold would be?
A “threshold” implies a linear scale for intelligence, which is far from given, especially for non-human minds. For example, say you reverse engineer a mouse’s brain, but then speed it up and give it much more memory, short-term and long-term (if those are just RAM and/or disk space on a computer, expanding them is easy). How intelligent is the result? It thinks way faster than a human, remembers more, can make complex plans … but is it smarter than a human?
Probably not, but it may still be dangerous. Same for a “toddler AI” with those modifications.
Human-level intelligence is fairly clearly just above the critical point (just look at what is happening now). However, machine brains have different strengths and weaknesses than ours. Sub-human machines could accelerate the ongoing explosion a lot if they are better than humans at just one thing, and such machines seem common.
Replace “threshold” with “critical point.” I’m using this terminology because EY himself uses it to frame his arguments. See Cascades, Cycles, Insight, where Eliezer draws an analogy between a fission reaction going critical and an AI FOOMing.
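To make the criticality analogy concrete, here is a toy sketch (my own illustration, not Eliezer’s actual model): treat each round of self-improvement as multiplying capability by a factor k, the analogue of the neutron multiplication factor in a fission reactor. The assumption of a constant k is purely for demonstration.

```python
# Toy "criticality" model of recursive self-improvement, echoing the
# fission analogy in Cascades, Cycles, Insight. Illustrative sketch only:
# a constant multiplication factor k per round is an assumption.

def self_improvement(k, capability=1.0, rounds=12):
    """Each round, capability is reinvested and multiplied by k."""
    trajectory = [capability]
    for _ in range(rounds):
        capability *= k
        trajectory.append(capability)
    return trajectory

print(self_improvement(0.9)[-1])  # k < 1: subcritical, the cascade fizzles out
print(self_improvement(1.1)[-1])  # k > 1: supercritical, gains compound (Foom)
```

The point of the analogy is that “critical” is a property of the feedback loop (whether k is above or below 1), not a rung on a single intelligence ladder, which is why asking where the threshold sits on a linear scale may be the wrong question.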
It thinks way faster than a human, remembers more, can make complex plans … but is it smarter than a human?
This seems to be tangential, but I’m gonna say no, as long as we assume that the mouse brain doesn’t spontaneously acquire language or human-level abstract reasoning skills.
Even the Einstein of monkeys is still just a monkey.