For if the ‘capacity’ in question is mere computational speed and amount of memory, then we can understand the aspects in question with the help of computers – just as we have understood the world for centuries with the help of pencil and paper. As Einstein remarked, ‘My pencil and I are more clever than I.’
Sure, but then the understanding must lie in the combined human-pencil system, not in the human brain alone, just as a human slowly following instructions in Searle’s Chinese room (forgive my use of this thought experiment, but it is an extension of the same idea) doesn’t understand Mandarin, even if the instructions they’re executing do. An AI’s CPU is not itself conscious, even if the AI is. The key in Einstein’s case is that after writing everything down as a memory aid and an error-correcting mechanism, the important points the pencil made are stored and processed in his brain, and he can reason further with them. You could show me a Matrioshka brain simulating a human with Planck-scale precision, and prove to me that it did so, but even if I had built the thing myself I still wouldn’t understand it in the way I usually use the word “understand.” As in thermodynamics, at some point more is qualitatively different.
Now, suppose you very slowly augmented my brain with better hardware (and/or wetware), such that my thoughts really did interface seamlessly across my evolved biological brain and any added components, and I came to consider those components part of my mind rather than external tools. In that sense, yes, future-me could come to understand anything.
That just doesn’t mean future-me could come back in time and explain it in a way current-me could grasp, any more than I could meaningfully explain the implications of group theory for semiconductor physics to kindergarten-me (early-high-school-me could probably have followed it with some extra effort, though). Kindergarten-me knew enough basic arithmetic and could have learned the symbol manipulations needed for Boolean logic (I think Scratch and Scratch Jr are proof enough that young kids are capable of this when it is presented correctly), so there is no computational operation he couldn’t perform. He’d just have had no idea why he was doing any of it, or how it related to anything else, and if he forgot it he couldn’t re-derive it and might not even notice the loss. It would not be truly part of him.
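To make the symbol-manipulation point concrete, here is a minimal Python sketch (the names and structure are my own, purely illustrative): Boolean logic reduced to lookup tables that a patient child could follow by hand, with no grasp of what the symbols mean.

```python
# Boolean logic as pure symbol manipulation. The rule tables carry
# the whole "skill"; the executor needs no idea what AND, OR, or NOT
# mean, only how to match symbols and look up the answer.
# (Illustrative sketch; names are hypothetical, not from the source.)

# Each connective is just a lookup table over the symbols '0' and '1'.
RULES = {
    "AND": {("0", "0"): "0", ("0", "1"): "0", ("1", "0"): "0", ("1", "1"): "1"},
    "OR":  {("0", "0"): "0", ("0", "1"): "1", ("1", "0"): "1", ("1", "1"): "1"},
    "NOT": {("0",): "1", ("1",): "0"},
}

def apply_rule(op: str, *args: str) -> str:
    """Rewrite symbols by table lookup; no semantics required."""
    return RULES[op][args]

# Anyone who can match symbols against a table can compute this:
print(apply_rule("AND", "1", apply_rule("NOT", "0")))  # prints "1"
```

The point of the sketch is that executing these lookups correctly is a purely mechanical competence; nothing in the procedure tells the executor why the tables are shaped that way, or what else they connect to.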