Toddlers self-modify into people using brains made out of meat!
No, they don’t. Self-modification in the context of AGI doesn’t mean learning or growing; it means understanding the most fundamental architecture of your own mind and purposefully improving it.
That said, I think your first sentence is probably right. It looks like Ben can’t imagine a toddler-level AGI self-modifying because human toddlers can’t (or human adults, for that matter). But of course AGIs will be very different from human minds. For one thing, their source code will be a lot easier to understand than ours. For another, their minds will probably be much better at redesigning and improving code than ours are. Look at what computer programs can already do with code: some of them exceed human capabilities in certain respects.
“Toddler-level AGI” is actually a very misleading term. Even if an AGI is approximately equal to a human toddler by some metrics, it will certainly not be equal by many other metrics. What does “toddler-level” mean when the AGI is vastly superior to even adult human minds in some respects?
“Understanding” and “purpose” are helpful abstractions for discussing human-like computational agents, but in more general cases I don’t think your definition of self-modification is carving reality at its joints.
ETA: I strongly agree with everything else in your comment.