If the human brain were so simple that we could understand it, we would be so simple that we couldn’t.
And while it’s a well-constructed, pithy quote, I don’t think it’s true. Can a system understand itself? Can a quining computer program exist? Where is the line between being able to recite itself and being able to understand itself?
You need an interpreting model above some threshold of capability before it can provide useful interpretations, yes, but I don’t see any obvious reason why that threshold should rise with the size of the model being interpreted.
Agreed. A quine needs some minimum complexity and/or language/environment support, but once you have one it’s usually easy to extend it. Things could go either way, and the question is an interesting one that needs investigation, not bare assertion.
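For concreteness, here is a minimal sketch of that point (one standard Python construction, not anything from the original discussion): the program keeps a format-string template of itself and prints the template filled in with its own repr.

    # A minimal Python quine: the two statements below reproduce
    # themselves exactly when run (ignoring these comment lines).
    # %r is a hole that gets filled with the template's own repr.
    s = 's = %r\nprint(s %% s)'
    print(s % s)

Once you have this skeleton, it’s easy to bolt extra behavior onto it while keeping the self-reproduction intact, which is the sense in which quines are cheap to extend after the initial hurdle.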
And the answer might depend fairly strongly on whether you take steps to make the model interpretable or leave it a spaghetti-code, Turing-tar-pit mess.