I don’t think you’ve understood the article. The idea of the article is that if you’re able to derive it, then yes, you can regenerate it. That’s what ‘regenerate’ means.
I think nominull does understand it, and at one level higher than you do. He understands the principle so well that he goes and makes a tradeoff between memory used and execution time.
Take a symmetric matrix with a conveniently zeroed-out diagonal…
You could memorize every element of the matrix (no understanding, pure rote memorization)…
You could memorize every element AND notice it happens to be symmetric (understanding, which is what you seem to be thinking of)…
Or you could notice it happens to be symmetric and then only memorize half the entries in the first place (nominull's approach).
I go with nominull's approach myself… I'm just a lot sloppier about selecting what info to rote-memorize.
My interpretation: if your brain can regenerate lost information from its neighbors, but you don’t actually need that, then you have an inefficient information packing system. You can improve the situation by compressing more until you can’t regenerate lost information.
However, I have some doubts about this. Deep knowledge seems to be about the connections between ideas, and I don’t think you can significantly decrease information regeneration without removing the interconnections.