For 1., we could totally find out that our AGI just plain cannot pick up on what a car or a dog is, and can only classify/recognize their parts (or halves, or just always misclassifies them), and then we'd have no sense of what's causing the failure or how to fix it.
For 2. … I have no idea? I feel like that might be out of scope for what I want to think about. I don’t even know how I’d start attacking that problem in full generality or even in part.