I don’t think the conclusion follows from the premises. People often learn new concepts after studying a subject, and it seems likely (to me) that when studying human cognition, we’d at first be confused because our existing concepts weren’t sufficient to understand it, and then slowly stop being confused as we built and understood concepts suited to the subject. If an AI’s thoughts are like human thoughts, then given enough time to study them, what you describe doesn’t rule out that the AI’s thoughts would become comprehensible.
The mere existence of concepts we don’t yet know in a subject doesn’t mean we can’t learn those concepts. Most subjects involve new concepts.
I agree that with time, we might be able to understand. (I meant to communicate that via “might still be incomprehensible”)