I think you overestimate human curiosity, for one. Not everyone implements prime searching or Conway's Game of Life for fun.

For two, even those who implement their own fun projects are not necessarily great programmers. It seems there are those who get pointers, and the others.

For three, where does a company advertise? There is a lot of mass mailing going on by folks who are not competent.

I recently read Joel Spolsky's book on how to hire great talent, and he makes the point that the really great programmers just never appear on the market anyway.
http://abstrusegoose.com/strips/ars_longa_vita_brevis.PNG
Are there really people who don’t get pointers? I’m having a hard time even imagining this. Pointers really aren’t that hard, if you take a few hours to learn what they do and how they’re used.
Alternately, is my reaction a sign that there really is a profoundly bimodal distribution of programming aptitudes?
There really are people who would not take those few hours.
I don’t know if this counts, but when I was about 9 or 10 and learning C (my first exposure to programming), I understood input/output, loops, functions, and variables, but I really didn’t get pointers. I distinctly remember my dad trying to explain the relationship between the * and & operators with box-and-pointer diagrams, and I just absolutely could not figure out what was going on. I don’t know whether it was the notation or the concept that eluded me. I sort of gave up on it and stopped programming C for a while, but a few years later (after some Common Lisp in between), when I revisited C and C++ in high school programming classes, it seemed completely trivial.
So there might be some kind of level of abstract-thinking ability which is a prerequisite to understanding such things. No comment on whether everyone can develop it eventually or not.
There really are people who don’t get pointers.
One of the epiphanies of my programming career was when I grokked function pointers. For a while prior to that I really struggled to even make sense of that idea, but when it clicked it was beautiful. (By analogy I can sort of understand what it might be like not to understand pointers themselves.)
Then I hit on the idea of embedding a function pointer in a data structure, so that I could change the function pointed to depending on some environmental parameters. Usually, of course, the first parameter of that function was the data structure itself...
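In C, that pattern looks something like the sketch below (a minimal illustration; the names are mine, not from any particular codebase):

    #include <stdio.h>

    struct counter;  /* forward declaration, so the typedef can mention it */

    /* The function pointer's first parameter is the struct itself,
       much like an implicit `this`. */
    typedef void (*step_fn)(struct counter *self);

    struct counter {
        int     value;
        step_fn step;   /* swap this at runtime to change behavior */
    };

    static void step_up(struct counter *self)   { self->value += 1; }
    static void step_down(struct counter *self) { self->value -= 1; }

    int main(void) {
        struct counter c = { 0, step_up };
        c.step(&c);            /* value is now 1 */
        c.step = step_down;    /* rebind the "method"... */
        c.step(&c);            /* ...and value is back to 0 */
        printf("%d\n", c.value);
        return 0;
    }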
Cute. Sad, but that’s already more powerful than straight OO. Python and Ruby support adding/rebinding methods at runtime (one reason duck typing is more popular these days). You might want to look at functional programming if you haven’t yet, since you’ve no doubt progressed since your epiphany. I’ve heard nice things about statically typed languages such as Haskell and OCaml, and my personal favorite is Scheme.
Oddly enough, I think Morendil would get a real kick out of JavaScript. So much in JS involves passing functions around, usually carrying around some variables from their enclosing scope. That’s how the OO works; it’s how you make callbacks seem natural; it even lets you define new control-flow structures like jQuery’s each() function, which lets you pass in a function which iterates over every element in a collection.
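A rough sketch of that idea in plain JavaScript (a homemade each(), just the shape of jQuery’s, not its actual implementation):

    // each(collection, fn) runs fn once per element: a control-flow
    // structure you defined yourself, roughly like jQuery's each().
    function each(collection, fn) {
      for (var i = 0; i < collection.length; i++) {
        fn(i, collection[i]);
      }
    }

    var total = 0;                      // captured by the callback below
    each([1, 2, 3], function (i, x) {   // this function closes over `total`
      total += x;
    });
    console.log(total);                 // 6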
The clearest, most concise book on this is Doug Crockford’s JavaScript: The Good Parts. Highly recommended.
The technical term for this is a closure. A closure is a first-class* function with some associated state. For example, in Scheme, here is a function which returns counters, each with its own internal ticker:
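    (define (make-counter)
      (let ((count 0))            ; each call to make-counter gets a fresh count
        (lambda ()                ; the returned procedure closes over count
          (set! count (+ count 1))
          count)))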
To create a counter, you’d do something like
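    (define my-counter (make-counter))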
Then, to get values from the counter, you could call something like
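    (my-counter)   ; => 1
    (my-counter)   ; => 2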
Here is the same example in Python, since that’s what most people seem to be posting in:
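    def make_counter():
        count = 0
        def counter():
            nonlocal count   # Python 3: rebind the enclosing count
            count += 1
            return count
        return counter

    my_counter = make_counter()
    my_counter()   # => 1
    my_counter()   # => 2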
*That is, a function which you can pass around like a value.
While we’re sharing fun information, I’d like to point out a little-used feature of Markdown syntax: if you put four spaces before a line, it’s treated as code. Behold:
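    print("this line was indented four spaces, so it renders as code")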
Also, the Emacs rectangle editing functions are good for this. C-x r t is a godsend.
I suspect it’s like how my brain reacts to negative numbers, or decimals; I have no idea how anyone could fail to understand them. But some people do.
And, due to my tendency to analyse mistakes I make (especially factual errors) I remember the times when I got each one of those wrong. I even remember the logic I used.
But they’ve become so ingrained in my brain now that failure to understand them is nigh inconceivable.
There is a difference in aptitude, but part of the problem is that pointers are almost never explained correctly. Many texts try to explain in abstract terms, which doesn’t work; a few try to explain graphically, which doesn’t work terribly well. I’ve met professional C programmers who therefore never understood pointers, but who did understand them after I gave them the right explanation.
The right explanation is in terms of numbers: the key is that char *x actually means the same thing as int x (on a 32-bit machine, and modulo some superficial convenience). A pointer is just an integer that gets used to store a memory address. Then you write out a series of numbered boxes starting at e.g. 1000, to represent memory locations. People get pointers when you put it like that.
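For instance, a quick C sketch of the boxes (the actual addresses are whatever the machine picks on a given run; the numbered boxes are the mental model):

    #include <stdio.h>

    int main(void) {
        int  y = 42;    /* lives in some numbered box, say box 1000 */
        int *x = &y;    /* x is another box, holding the number of y's box */

        /* Dereferencing just means: read the number stored in x,
           then go look in the box with that number. */
        printf("x holds the address %p\n", (void *)x);
        printf("the box at that address holds %d\n", *x);
        return 0;
    }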