I expect it to become even more generally useful as computers get smarter and more pervasive.
Historically that hasn’t been the case.
When personal computers became popular (say, in the 1980s), the prevalent thought was that everyone would need to know programming to make use of them, so there was a wave of putting BASIC courses into schools, etc. This turned out to be quite wrong. As time went on, you needed less and less specialized knowledge (of any kind) to interact with computers.
I don’t see why this trend would suddenly break and reverse.
I’d distinguish between useful and necessary here. A user with no programming knowledge can clearly do a lot more now than they’d have been able to in 1993, let alone the 1980s, enabled largely by UI improvements: first the GUI revolution of the Eighties and early Nineties, then more incremental improvements as GUI idioms were refined over time, then innovations building on these trends. I expect this to continue.
If we stop there, however, we ignore the other side of the equation. A user with basic programming knowledge and the right mindset can now do everything a naive user can and much more, thanks, among other things, to an explosion of easily available libraries and the increasing popularity of capable high-level languages with intuitive semantics. Moreover, there are wide domains that UI changes haven’t touched and basically can’t, such as all but the simplest forms of automation: it’s a rare UI that exposes so much as a conditional outside of one-time user interaction, and the exceptions (like the Word and Excel features Gwern mentioned in a sibling post) often implement scripting languages in all but name. I expect these trends to continue, too.
Taken together, that gives us a gap in capability that’s likely to increase in absolute terms, even if the proportions narrow or remain stable. I’m betting on “stable”, myself.
Certainly a more capable user can do more than a less capable user, but that’s just restating the obvious.
I would argue that there is a more important trend here: the growth and accumulation of software, an accumulation which continues to reduce the need to program anything from scratch. Thirty years ago, if you wanted, say, a tool to convert metric amounts to imperial and back, you had to write it yourself. Nowadays your task is to select one of a few dozen apps that can do it. Most needs of an average user (the ones he himself recognizes as needs) have been met, software-wise; there’s no need to code anything yourself.
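Such a tool is trivial by today’s standards, which is exactly why ready-made versions abound. A minimal sketch in Excel VBA (the language that comes up later in this thread); the function name is my own invention, and the conversion factor is the standard one:

```vba
' A minimal sketch of the kind of one-off tool described above: a
' metric-to-imperial weight converter, exposed as a worksheet function.
' The name KgToLb is illustrative.
Public Function KgToLb(ByVal kg As Double) As Double
    KgToLb = kg * 2.20462    ' standard kilograms-to-pounds factor
End Function
```

Dropped into a module, it can be called from any cell as =KgToLb(A1). That it takes three lines is precisely why nobody needs to write it anymore.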
I believe you also overestimate the willingness of the general population to get involved with programming of any kind. I happen to know a group of fairly high-level accountants. They are all smart people, computer-literate, work all day in Excel with rather complex structures, etc. A significant part of their work is automatable and would be made noticeably easier with a collection of scripts and snippets in Excel’s VBA (Visual Basic for Applications). However, they refuse, explicitly and forcefully, to delve into VBA, offering a variety of not-too-rational arguments which boil down to “we’re accountants, not programmers”. I don’t believe these people are exceptions; I think they are the rule.
Oh, and MS Office certainly implements a full-blown scripting language—the above-mentioned VBA.
I believe you also overestimate the willingness of the general population to get involved with programming of any kind.
I don’t believe I made a statement about that. I’m not trying to predict whether general computer science skills will become more common outside of formal software engineering in the future—that’s not something I’m equipped to answer. I’m saying that the potential value added by being an accountant who can code or an HR specialist who can code has increased over the last decade or so and will probably continue to do so.
I don’t know if I’m willing to agree with that. The main reason is growing complexity. We’re basically talking about amateur coders: programming isn’t their main skill, but they can do it. As the complexity of the environment increases, I’m not convinced their skills, limited by definition, can keep up.
There are at least two directions to this argument. One is that not many non-professional programmers are good programmers. The typical way this plays out is as follows: some guy learns a bit of Excel VBA and starts by writing a few simple macros. Soon he progresses to full-blown functions and pages of code. In a few months he has automated a large chunk of his work and is happy. Until, that is, it turns out that there are errors in his output. He tries to fix them and can’t: two new errors pop up every time he claims to have killed one. Professionals are called in, and they blanch in horror at the single 30-page function which works by arcane manipulations of a large number of global variables, all with incomprehensible three- or four-letter names, not to mention hardcoded cell references and magic numbers. The code is unsalvageable; it has to be trashed completely and all output produced by it re-done.
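To make the horror concrete, here is an invented miniature of that style: cryptic names, hardcoded cell references, and unexplained constants. None of this comes from any real codebase:

```vba
' Invented illustration of the anti-pattern described above.
Sub Upd()
    Dim tmp1 As Double, tmp2 As Double
    tmp1 = Worksheets("Sheet1").Range("D17").Value   ' why D17? nobody remembers
    tmp2 = tmp1 * 1.18 + 342.5                       ' unexplained magic numbers
    Worksheets("Sheet1").Range("F23").Value = tmp2   ' why F23? ditto
End Sub
```

Now imagine thirty pages of this, wired together through global variables.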
The second direction is concerned not with the complexity of the task, but with the complexity of the environment. Consider the basic task of opening a file, copying a chunk of text from it, and pasting it into another file. That used to be easy (and is still easy if the files are local ASCII text files and you’re in Unix :-D). But now imagine that to open a file you need to interface with the company’s document storage system. You have to deal with security, privileges, and permissions. You have to deal with the versioning system. Maybe the file itself is not really a file in the filesystem but an entry in a document database. The chunk of text you’re copying might turn out to be in Unicode and contain embedded objects. Etc., etc. And the APIs of all the layers you’re dealing with are, of course, written for professional programmers who are supposed to know this stuff well...
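For contrast, the easy case really is easy. A sketch in VBA, to stay consistent with the rest of the thread (the Unix version would be a one-liner); the paths and the ten-line chunk are made up for illustration:

```vba
' The easy case: both files are plain local text files.
Sub CopyFirstTenLines()
    Dim srcLine As String, i As Long
    Open "C:\data\source.txt" For Input As #1
    Open "C:\data\dest.txt" For Append As #2
    For i = 1 To 10                  ' copy the first ten lines across
        If EOF(1) Then Exit For
        Line Input #1, srcLine
        Print #2, srcLine
    Next i
    Close #2
    Close #1
End Sub
```

Every layer mentioned above (document store, permissions, versioning, embedded objects) stacks its own API on top of this, and those APIs assume a professional at the keyboard.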
I think you’re judging the hypothetical amateur programmer too harshly. So what if the code is ugly? Did the guy actually save time? Does his script make more errors than he would make if doing everything by hand? Is the 30-page function really necessary to achieve noticeable gains or could he still get a lot from sticking to short code snippets and thus avoiding the ugliness?
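For instance, a genuinely useful snippet can stay tiny. Here is a sketch of what I mean by a short snippet; the sheet name, the named range, and the VAT rate are all hypothetical:

```vba
' A sketch of the "short snippet" approach: one small, comprehensible step.
' The range name "NetAmounts" and the 18% VAT rate are invented.
Sub AddVatColumn()
    Const VAT_RATE As Double = 0.18
    Dim c As Range
    For Each c In Worksheets("Invoices").Range("NetAmounts")
        c.Offset(0, 1).Value = c.Value * (1 + VAT_RATE)   ' gross amount in the next column
    Next c
End Sub
```

A toolbox of such snippets, each doing one visible thing, captures much of the gain without ever approaching the 30-page function.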
Similarly with the second example. Maybe some steps of the workflow will still have to be done manually. That wouldn’t fly in a professionally programmed system, but if someone is already being paid to do everything manually, then as long as they can automate some steps, it’s still a win.