Your core claim is very nearly conventional wisdom in some quarters. You might want to articulate some remedies.
A few thoughts --
One metric for disease you didn’t mention is the gap between research and practice. My impression is that in graphics, systems, networking and some other healthy parts of the CS academy, there’s an intensive flow of ideas back and forth between researchers and practitioners. That’s much rarer in software engineering. There are fewer start-ups by SE researchers. There are few academic ideas and artifacts that have become widely adopted. (I don’t have numerical metrics for this claim, unfortunately.) And this is a sign that either the researchers are irrelevant, or the practitioners refuse to learn, or both.
I can vouch from personal observation that software engineering is often a marginal part of academic computer science; it’s not well respected as a sub-field. Software engineering research that studies developer behavior is especially marginal, and as a result the leading conferences tend to be dominated by applied program analysis papers. These are nice, but typically have very low impact.
Yes to all of that, especially the research-practice gap.
For instance, with practices like “test-driven development,” the flow seems to be going in the “wrong” direction, by which I mean not the expected one. Academics seem to be playing catch-up, trying to “prove” whether or not TDD works, which is largely irrelevant to most practitioners who have already chosen to use it or not.
One way to get both qualitative and quantitative evidence of the phenomenon is to look at the proceedings of the ICSE conference taken as a historical whole, rather than any one particular year, article, or topic. A keynote speech by Ghezzi examined some relevant numbers: for instance, ICSE is tending to become more and more an academic conference, having begun with a more even balance between industry and academia.
Interestingly, the “peaks” of ICSE’s interest in industry-relevant topics (design patterns and Agile) seem to correspond closely to peaks of industry involvement in the committee, which suggests that the academic-heavy trend is also a trend toward less relevance (at least on the part of this particular conference).
It’s also instructive to look at the “most influential” papers from ICSE, from the point of view of “how have these papers changed the way software engineers actually work on an everyday basis?” One paper, the one on Statemate, seems good from that point of view, but use of Statemate (a commercial tool) is AFAICT restricted to a subset of industries, such as automotive or aerospace. And of course there’s the Royce paper on waterfall, which isn’t really from 1987: it was originally published in 1970 and reprinted that year at Boehm’s urging.
On the other hand, consider refactoring. That’s an industry thing, largely ignored by ICSE until 1999, seven years after being first formally described in Bill Opdyke’s 1992 PhD thesis. Or design patterns: presented at OOPSLA in 1987, book published in 1994, discovered by ICSE in 1996.
Your core claim is very nearly conventional wisdom in some quarters.
Would like to hear more about that, in private if you are being intentionally vague. :)
My professional affiliation has the word “Agile” in it; that community is known both for its disregard for the historical prescriptions arising from software engineering as a formal discipline, and (more controversially) for its disdain of academic evidence.
After spending a fair bit of time on LW, though, I’ve become more sensitive to the ways that attachment to the labels “Agile” or “software engineering” can serve as excellent ways to immunize oneself against inconvenient bits of knowledge. That’s where I’m trying to make some headway; I provisionally think of software engineering as a diseased discipline and of Agile as something that may have the potential to grow into a healthy one, but which is still miles from being there.