Yep, you see the problem! It’s tempting to think of an AI as “just the model” and study that in isolation, but that won’t be good enough in the long term.
I see, so you are implying that an AI model will leverage external parts of the system to augment itself. For example, a neural network might use an external scratch-pad as a different form of memory. Or instantiate a clone of itself to handle a particular task. Or perhaps use some sort of scaffolding.
I think these concerns probably don’t matter for an AGI, because I expect data-transfer latency to be a non-trivial blocker for storing data outside the model itself, and it is more efficient to self-modify and improve one’s own intelligence than to use some form of ‘factored cognition’. Perhaps these things are issues for an ostensibly boxed AGI, and if that is the case, then this makes a lot of sense.
I strongly disagree and do not think that is how AGI will look; AGI isn’t magic. But this is a crux, and of course I might be wrong.
Yep, latency and performance are real killers for that kind of embodied cognition. I remember a tweet suggesting that the entire Internet was not enough to train the model.
It would be nice if the AGI saw the humans running its compute resources as part of its body, something it wants to protect. The problem is that we humans also tamper with our bodies… Humans are like hair on the body of the AGI, and maybe it wants to shave and wear a wig.