Goertzel is generalizing from the human example of intelligence, which is probably the most pernicious and widespread failure mode in thinking about AI.
Or he may be completely disconnected from anything even resembling the real world. I literally have trouble believing that a professional AI researcher could describe a primitive, dumber-than-human AGI as “toddler-level” in the same sentence in which he dismisses it as a self-modification threat.
Toddlers self-modify into people using brains made out of meat!
No, they don’t. In the context of AGI, self-modification doesn’t mean learning or growing; it means understanding the most fundamental architecture of your own mind and purposefully improving it.
That said, I think your first sentence is probably right. It looks like Ben can’t imagine a toddler-level AGI self-modifying because human toddlers can’t (or human adults, for that matter). But of course AGIs will be very different from human minds. For one thing, their source code will be a lot easier to understand than ours. For another, their minds will probably be much better at redesigning and improving code than ours are. Look at what computer programs can already do with code: some of them exceed human capabilities in certain respects.
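To make that concrete, here is a minimal sketch of a program mechanically rewriting code: it uses Python’s ast module to constant-fold a subexpression in a function’s source. The function name `slow` and the `ConstantFolder` class are made up for illustration; this is a toy stand-in for “redesigning and improving code,” nowhere near recursive self-improvement.

```python
# Toy demo: a program reads another function's source and rewrites it.
# Requires Python 3.9+ for ast.unparse.
import ast
import inspect
import textwrap

def slow(x):
    return x * (60 * 60 * 24)  # seconds per day, written out longhand

class ConstantFolder(ast.NodeTransformer):
    """Replace a multiplication of two literal constants with its result."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold nested subexpressions first
        if (isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)
                and isinstance(node.op, ast.Mult)):
            return ast.copy_location(
                ast.Constant(node.left.value * node.right.value), node)
        return node

source = textwrap.dedent(inspect.getsource(slow))
tree = ast.fix_missing_locations(ConstantFolder().visit(ast.parse(source)))
print(ast.unparse(tree))  # the inner 60 * 60 * 24 becomes 86400
```

Optimizing compilers do this kind of rewriting, and far more aggressive transformations, as a matter of routine; the point is only that machine manipulation of code is already mundane.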
“Toddler-level AGI” is actually a very misleading term. Even if an AGI is approximately equal to a human toddler by some metrics, it will certainly not be equal by many other metrics. What does “toddler-level” mean when the AGI is vastly superior to even adult human minds in some respects?
“Understanding” and “purpose” are helpful abstractions for discussing human-like computational agents, but in more general cases I don’t think your definition of self-modification is carving reality at its joints.
ETA: I strongly agree with everything else in your comment.
Well, bad analogy. They don’t self-modify by understanding their source code and improving it. They gradually grow larger brains in a pre-set fashion while learning specific tasks. Humans have very little ability to self-modify.
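To sharpen the distinction being argued over, here is a toy sketch under the crude assumption that a mind is “code plus parameters.” Everything in it (`params`, `learn`, `self_modify`) is invented for illustration: learning adjusts parameters inside fixed code, while self-modification in the sense above rewrites the code that does the learning.

```python
# Toy contrast: "learning" vs. "self-modification", assuming mind = code + parameters.

params = {"rate": 0.1, "estimate": 0.0}

def learn(observation):
    """Learning/growing: adjust parameters; the update rule itself is fixed."""
    params["estimate"] += params["rate"] * (observation - params["estimate"])

def self_modify():
    """Self-modification (AGI sense): replace the learning rule with a new one."""
    new_rule = (
        "def learn(observation):\n"
        "    params['estimate'] = observation  # a different rule entirely\n"
    )
    exec(new_rule, globals())  # overwrite the old definition of learn

learn(10.0)    # estimate nudged toward 10.0 (0.0 -> 1.0)
self_modify()  # the agent rewrites how it learns
learn(10.0)    # the new rule jumps straight to 10.0
```

The human analogue of the second function is exactly what doesn’t exist: we can run `learn` all day, but we have no handle on the code implementing it.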
Exactly! Humans can go from toddler to AGI start-up founder, and that’s trivial.
Whatever the hell the AGI equivalent of a toddler is, it’s all but guaranteed to be better at self-modification than the human model.