My assumption was that people who can’t seem to learn to program can’t get to the gut-level belief that computers don’t use natural language—computers require types of precision that people don’t need.
However, this is only a guess. Would anyone with teaching experience care to post about where the roadblocks seem to be?
Also, does the proportion of people who can’t learn to program seem to be dropping?
On the other hand, I did the JavaScript tutorial at Codecademy, and it was fun of a very annoying sort—enough fun that I was disappointed that there only seemed to be a small amount of it.
However, I didn’t seem to be able to focus on the examples until I took out the extra lines and curly braces—I was literally losing track of what I was doing as I went from one distant line to another. If I pursue this, I might need to get used to the white space; I’m sure it’s valuable for keeping track of the sections of a program.
My working memory isn’t horrendously bad—I can reliably play dual 3-back, and am occasionally getting to 4-back.
If there are sensory issues making programming difficult for a particular person, this might be hard to distinguish from a general inability.
I’ve taught courses at various levels, and in introductory courses (where there’s no guarantee anyone has seen source code of any form before), I’ve been again and again horrified by students months into the course who “tell” the computer to do something. For instance, in a C program, they might write a comment to the computer instructing it to remember the value of a variable and print it if it changed. “Wishful” programming, as it were.
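To make the anecdote concrete, here is a minimal Python analogue of that kind of “wishful” comment (a reconstruction for illustration, not the student’s actual C code):

```python
total = 0
# Computer: please remember the value of total, and print it whenever it changes.
total = total + 5   # nothing prints; the comment above is simply ignored
total = total + 10  # comments are for human readers, not instructions
print(total)        # only an explicit print statement produces output
```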
In fact, I might describe that as the key difference between the people who clearly would never take another programming course and those who might—wishful thinking. Some never understood their own code and seemed to write it like monkeys armed with a binary classifier (the compiler and runtime, which either ran their program or crashed) banging out Shakespeare. These students typically never had a clear idea of what “program state” was; instead of seeing their program as data evolving over time, they saw it as a bunch of characters on the screen, and maybe if the right incantations were put on the screen, the right things would happen when they said Go.
Common errors in this category include:
Infinite loops, because “the loop will obviously be done when it has the value I want”.
Uninitialized variables, because “it’s obvious what I’m computing, and that you start at X”.
Calling functions that don’t exist, because, “well, it ought to”.
NOT calling functions, because “the function is named PrintWhenDone, it should automatically print when the program is done”.
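For concreteness, here is what two of these failure modes look like in Python (invented minimal examples; the loop is given the missing update so it actually terminates here):

```python
# "Uninitialized variables": the student assumes the computer knows the
# intended starting value. Python raises NameError rather than guessing.
try:
    total = total + 1  # total was never assigned a value
except NameError as e:
    print("NameError:", e)

# "Infinite loops": the student believes the loop will stop once it has
# the value they want, without writing code that changes the condition.
i = 0
while i < 3:
    print(i)
    i += 1  # without this line, i stays 0 and the loop never ends
```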
These errors would crop up among a minority of students right up until the class was over. Such students could well be described as having a gut-level belief that computers use natural language; but this covers only 2-6% of students in these courses*, whereas in my experience fewer than 50% of students who go into a Computer Science major actually graduate with a Computer Science degree. So I think this is only a small part of what keeps people from programming.
*In three courses of roughly 50 students each, there were always 1-3 of these students; I suspect the median is therefore somewhere between 2 and 6%, but it could be wildly different at another institution and far higher in the general population.
I think I’m over it now, but back in college (the ’70s), I understood most of the linguistic limitations of computers, but I resented having to accommodate the hardware, and I really hated having to declare variables in advance.
To some extent, I was anticipating the future. There’s a huge amount of programming these days where you don’t have to think about the hardware (I wish I could remember the specific thing that got on my nerves), and many modern languages don’t require you to declare that something is a variable before you use it.
Of course, hating something isn’t the same thing as not being able to understand that you need to do it.
Not graduating with a Computer Science degree isn’t the same thing as not having a programming gear. What fraction of that 50% get degrees in other fields that require programming? What proportion drop out of college, probably for other reasons? What proportion can program, but hate doing it?
In my opinion, almost all of that 50% (that drop out) could program, to some extent, if sufficiently motivated.
A great many Computer Science students (half? more than half?) love programming and hit a wall when they reach the theoretical side of computer science. Many force themselves through it, graduate, and become successful programmers. Many switch majors to Information Technology and, for better or for worse, end up doing mostly system administration work for their careers. Some switch majors entirely and become engineers. I actually think we do ourselves a disservice by failing to separate Computer Science from Software Engineering, a distinction made at very few institutions and, regrettably, often to the detriment of the Software Engineers when it is made.
So to answer your question: of the 50% who drop out, I think most would end up as sub-par programmers, but 80% of them “have programming gear”, to the extent that such a thing exists.
I did teach Python at a computer science school (the students already had two years of scientific studies after the “bac”, the French baccalauréat), and I was amazed to see how hard it was for some of them to understand that in Python:
>>> 4+2
6
>>> "4"+"2"
'42'
So yes, I guess the key is understanding what types are. The same kind of issue arises with the difference between using a variable and using its name.
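The variable-versus-name confusion looks like this in a session (a minimal made-up example; the names are arbitrary):

```python
x = 42
print("x")  # prints the one-character string: x
print(x)    # prints the value bound to the name: 42
```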
Now, I’m not sure how much this is teachable, or when (i.e., maybe it’s a kind of skill you have to learn when you’re young to really grasp). I started programming when I was 11, so there may be something to that, but I don’t have much data.
To be fair, it’s not really enough to know what types are to get this one right. You also have to understand that the + operator is overloaded based on the types of its operands; that is, + actually means several different things, depending on the operand types. People’s experience of + meaning numerical addition might be interfering with their learning. If someone else’s students had problems with it, they could try defining a function putTogether(a, b) and telling the students that it’s a mysterious black box that does one arbitrary thing for numbers and a completely different thing for strings, then reveal later that it’s actually the language’s + operator that has this strange behavior.
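A sketch of that exercise (putTogether is of course a made-up name for this teaching device; under the hood it is nothing but the overloaded + operator):

```python
def putTogether(a, b):
    # Presented to students as a mystery box: it does one thing for
    # numbers and a completely different thing for strings.
    return a + b  # secretly just the language's + operator

print(putTogether(4, 2))      # prints 6  (numeric addition)
print(putTogether("4", "2"))  # prints 42 (string concatenation)
```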
You might like trying Python (there are some more tutorials listed here; specifically, Learn Python the Hard Way, #2 in the Python section, is a nice next step after Codecademy). It has a “cleaner” syntax, in that it doesn’t require braces or so many brackets; this could help you practice without so many distractions.
(And yes, once you’ve practiced more, you’ll be able to keep track of more of the program in your head, and the white space will become a navigational aid rather than a hindrance.)
Couldn’t you lead them to discover it by themselves, by asking them to guess the result of a series of expressions like:
4 + 2
"Hel" + "lo"
"Di" + "Caprio"
"Jack" + "Black"
"Jack" + " Black"
"ABCD" + "EFGH"
"1234" + "5678"
Maybe insert an "ABCD" + "1234" in between your last two expressions.
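For reference, this is what Python actually answers for that series (easy to check at the interpreter prompt):

```python
print(4 + 2)              # 6
print("Hel" + "lo")       # Hello
print("Di" + "Caprio")    # DiCaprio
print("Jack" + "Black")   # JackBlack
print("Jack" + " Black")  # Jack Black
print("ABCD" + "EFGH")    # ABCDEFGH
print("1234" + "5678")    # 12345678
print("ABCD" + "1234")    # ABCD1234
```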