Turing completeness misses some important qualitative properties of what it means for people to understand something. When I understand something I don’t merely compute it: I form opinions about it, I fit it into a schema for thinking about the world, I have a representation of it in some latent space that allows it to be transformed in appropriate ways, etc.
I could, given a notebook of infinite size, infinite time, and lots of drugs, probably compute the Ackermann function A(5,5). But this has little to do with my ability to understand the result in the sense of being able to tell a story about the result to myself. In fact, there are things I can understand without actually computing them, so long as I can form opinions about them, fit them into a picture of the world, represent them in a way that allows for transformations, etc.
‘…it could be there are aspects of reality that are beyond the capacity of our brains.’ But that cannot be so. For if the ‘capacity’ in question is mere computational speed and amount of memory, then we can understand the aspects in question with the help of computers.
I’m disagreeing with the notion that the human capacity for understanding is the capacity for universal computation, i.e. with taking Turing completeness as understanding-universality.
That’s a good point. It’s still not clear to me that he’s talking about precisely the same thing in both quotes. The point also remains that if you’re not associating “understanding” with a class as broad as Turing-completeness, then you can construct things that humans can’t understand, e.g. by hiding them in complex patterns, or by exploiting human blind spots.
But that creates its own problem: there’s no longer a strong reason to believe in Universal Explanation. We don’t know that humans are universal explainers, because if there is something a human can’t think of … well, a human can’t think of it! All we can do is notice confusion.
The quotes aren’t about Turing completeness. What you wrote is irrelevant to the quoted material.
If you read the quote carefully you will find that it is incompatible with the position you are attributing to Deutsch. For example, he writes about
which would hardly be necessary if computational universality were equivalent to being a universal explainer.