I appreciate your genuine attempt to understand this post. Python is indeed a stand-in for any programming language of comparable complexity. Pointers and recursion could indeed be flipped around. I think we are on the same page concerning these details.
You seem to be making a few claims: (1) that these skills require an increasing amount of 1-dimensional intelligence, (2) that one cannot do lower-indexed things without doing higher-indexed ones, and (3) that there is something fundamental about this.
Yes.
Not quite. There can be a little slippage between close-together indices, especially #6 & #7. More importantly, this hierarchy is about talent and potential, not ability. If someone could do #5 with a little bit of study then that counts as doing #5. It is easy to imagine a mathematician who understands recursion but has never written a line of computer code. But I would be surprised if such a person could not quickly learn to write simple Python scripts.
This is technically an implication, not a claim. But…yeah.
First, what is an abstraction?
By “abstraction”, I mean a simplification of the universe—a model. By “higher-level abstraction”, I mean a model built on top of another model.[1]
I am not referring to how these models are built upon one another within mathematics. I am trying to organize them based on how they are constructed inside a typical human brain. For example, a human being learns to write before learning arithmetic and learns arithmetic before learning calculus. It is possible to construct calculus from the Peano axioms while skipping over all of arithmetic. But that is not how a human child learns calculus.
To put it another way, a human being learning a complex skill begins by learning to perform the skill consciously. With practice, the skill becomes unconscious. This frees up the conscious mind to think on a higher (what I call more “abstract”) level. Rinse and repeat. To use an example from Brandon Sanderson’s creative writing lectures, a novice author is worried about how to put words down on a page while a skilled writer is thinking with plot structure. An author cannot afford to think about higher level activities until the lower level activities are automatic.
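The "model built on top of another model" idea can be sketched in code. Here is a toy Python tower (entirely my own illustration, not from the post) in which each level is written purely in terms of the level beneath it, so a user of the top level never has to think about the bottom:

```python
def add(a, b):
    # Level 0: taken as primitive here.
    return a + b

def multiply(a, b):
    # Level 1: defined only as repeated addition.
    total = 0
    for _ in range(b):
        total = add(total, a)
    return total

def power(a, b):
    # Level 2: defined only as repeated multiplication.
    total = 1
    for _ in range(b):
        total = multiply(total, a)
    return total
```

Once `multiply` is automatic, whoever writes `power` only has to hold "repeated multiplication" in mind, not the loop of additions underneath it.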
I hope this clarifies exactly what I mean by “abstraction”.
The best I can make of this post is that these tasks have something akin to a hard-floor in g-factor required…
Yes. That is the point of this post.
…which is an extraordinary claim in need of extraordinary evidence.
Really? What makes you think this is an extraordinary claim? My prior is 50%/50% credence without evidence. Here are some observations I have since used to update that prior.
I am not the first person to point out that there is a hard floor on the intelligence required to understand pointers. Joel Spolsky wrote about the basic idea concerning pointers back in 2005. My college computer science teacher pointed out something similar in 2011.
On the low end of the spectrum, I have cared for severely intellectually disabled adults who appeared to have g-related ceilings. This is a sensitive example and I do not want to get too deep into it. But it should be clear that someone without enough g to hold a coherent conversation is unlikely ever to perform arithmetic at the level specified in this post.
Similarly, an adult who has trouble remembering how to press 3 buttons on a smartphone is unlikely ever to write computer code.
On the high end of the spectrum, quants are regularly hired from physics departments. These people can apparently pick up finance faster than students trained in economics. The difference seems to be mathematical aptitude. When I talk to quants I can get away with explaining much less than when I talk to, say, a Silicon Valley CTO. At least one friend of mine has confirmed this evaluation.
I have spent countless hours tutoring physics, computer science and other subjects. The levels in my post come from my personal experience. When it comes to specific subjects, it often feels like there is a block: my interlocutor cannot fit all of the relevant information in zeir head at once.
In the scientific literature it is established that traits like fluid intelligence and working memory are basically capped for a given individual. If the ability to reason at certain levels of abstraction is capped by fluid intelligence or working memory, then that would explain the phenomenon I have observed.
This might seem to contradict my claim about talent and potential. You can get around this contradiction by supposing the existence of horizontal alternatives for specific benchmarks. If these alternatives require similar potential to pass then they do not invalidate the ordering.
Thanks for the explanation. I accept your usage of “abstraction” as congruent with the common use among software engineers (although I have other issues with that usage). Confusingly, your hierarchy is a hierarchy in the g required, not a hierarchy in the abstractions themselves.
I am well-read in Joel Spolsky, and my personal experience matches the anecdotes you share. On the other hand, I have also tutored some struggling programmers to a high level. I still find the claim of a g-floor incredible. This kind of inference feels like claiming the insolubility of the quintic because I solved a couple quintics numerically and the numbers look very weird.
Sidenote: I find your example discussion of human learning funny because I learned arithmetic before writing.
It’s the general form that is unsolvable, not specific examples, at least without better tools than the usual ones: +, -, *, /, sqrt, ^, etc. I’ve heard that with hypergeometric functions it’s doable, but the same issue reappears for polynomials of higher degree there as well.
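For reference, the theorem being invoked here is Abel–Ruffini, which can be stated precisely:

```latex
% Abel--Ruffini: no formula in radicals exists for the roots of the
% general quintic
\[
  x^5 + a_4 x^4 + a_3 x^3 + a_2 x^2 + a_1 x + a_0 = 0,
\]
% although particular instances are solvable, e.g. $x^5 - 2 = 0$
% has the radical root $x = \sqrt[5]{2}$.
```

So the statement is about the non-existence of a general formula, not about any individual equation being intractable.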
This kind of inference feels like claiming the insolubility of the quintic because I solved a couple quintics numerically and the numbers look very weird.
I think it is more like the irreversibility of the trapdoor functions we use in cryptography. We are unable to prove mathematically they are secure. But an army of experts failing to break them is Bayesian evidence.
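A toy sketch of the asymmetry in question, with deliberately tiny parameters of my own choosing (real systems use primes thousands of bits long): modular exponentiation is cheap to compute forward, while the best known general attack on the reverse direction, the discrete logarithm, is little better than search.

```python
p, g = 101, 2            # toy prime and generator; illustration only

def forward(x):
    # Easy direction: modular exponentiation, fast even for huge inputs.
    return pow(g, x, p)

def invert(y):
    # Hard direction (discrete logarithm): in general nothing much
    # better than exhaustive search over the exponent is known.
    return next(x for x in range(p - 1) if forward(x) == y)
```

At realistic parameter sizes the forward call is still instant while the search becomes astronomically infeasible; that unproven but battle-tested asymmetry is the Bayesian evidence described above.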
Was just reading through my journal, and found that I had copied this quote. I think you’ll find it to be of interest re: teaching recursion.
-------------------------------------
From Edsger W. Dijkstra’s “Computing Science: Achievements and Challenges” (1999):
“I learned a second lesson in the 60s, when I taught a course on programming to sophomores, and discovered to my surprise that 10% of my audience had the greatest difficulty in coping with the concept of recursive procedures. I was surprised because I knew that the concept of recursion was not difficult. Walking with my five-year-old son through Eindhoven, he suddenly said “Dad, not every boat has a life-boat, has it?” “How come?” I said. “Well, the life-boat could have a smaller life-boat, but then that would be without one.” It turned out that the students with problems were those who had had prior exposure to FORTRAN, and the source of their difficulties was not that FORTRAN did not permit recursion, but that they had not been taught to distinguish between the definition of a programming language and its implementation and that their only handle on the semantics was trying to visualize what happened during program execution. Their only way of “understanding” recursion was to implement it, something of course they could not do. Their way of thinking was so thoroughly operational that, because they did not see how to implement recursion, they could not understand it. The inability to think about programs in an implementation-independent way still afflicts large sections of the computing community, and FORTRAN played a major role in establishing that regrettable tradition”
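The boy’s observation is itself a recursive definition, and it can be transcribed into Python directly from the definition, without ever visualizing the execution. A minimal sketch (the function name and the size parameter are my own illustration):

```python
def nested_lifeboats(size):
    """Total number of boats: a boat of the given size plus the chain
    of ever-smaller life-boats it carries."""
    if size == 1:
        return 1  # base case: the smallest boat carries no life-boat
    # A boat counts itself, plus everything aboard its one smaller life-boat.
    return 1 + nested_lifeboats(size - 1)
```

The base case is exactly the boy’s point: somewhere down the chain a boat must go without a life-boat, or the definition never bottoms out.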