So, I agree that you accomplished these desired things. However, before you accomplished them, how accurately did you know how much time they would take, or how useful they would be?
For that matter, if someone told you, “That wasn’t one desired thing I just implemented; it was three,” is it possible to disagree?
(My point is that “desired thing” is not well-defined, and so “desired things per unit time” cannot be a measurement.)
However, before you accomplished them, how accurately did you know how much time they would take
I didn’t. I never said I did.
or how useful they would be?
Uh, pretty accurately. Object selection is a critical feature; the entire functionality of the app depends on it. The usefulness of not having your data be corrupted is also obvious. I’m not really sure what you mean by asking whether I know in advance how useful a feature or bug fix will be. Of course I know. How could I not know? I always know.
For that matter, if someone told you, “That wasn’t one desired thing I just implemented; it was three,” is it possible to disagree?
Ah, now this is a different matter. Yes, “desired thing” is not a uniform unit of accomplishment. You have to compare to other desired things, and other people who implement them. You can also group desired things into classes (is this a bug fix or a feature addition? how big a feature? how much code must be written or modified to implement it? how many test cases must be run to isolate this bug?).
Yes, “desired thing” is not a uniform unit of accomplishment.
Right! So, “implementation of desired things per unit time” is not a measure of programmer productivity, since you can’t really use it to compare the work of one programmer and another.
There are obvious cases, of course, where you can — here’s someone who pounds out a reliable map-reduce framework in a weekend; there’s someone who can’t get a quicksort to compile. But if two moderately successful (and moderately reasonable) programmers disagree about their productivity, this candidate measurement doesn’t help us resolve that disagreement.
Well, if your goal is comparing two programmers, then the most obvious thing to do is to give them both the same set of diverse tasks, and see how long they take (on each task and on the whole set).
If your goal is gauging the effectiveness of this or that approach (agile vs. waterfall? mandated code formatting style or no? single or pair programming? what compensation structure? etc.), then it’s slightly less trivial, but you can use some “fuzzy” metrics: for instance, classify “desired things” into categories (feature, bug fix, compatibility fix, etc.), and measure those per unit time.
As for disagreeing whether something is one desired thing or three — well, like I said, you categorize. But also, it really won’t be the case that one programmer says “I just implemented a feature”, and another goes “A feature?! You just moved one parenthesis!”, and a third goes “A feature?! You just wrote the entire application suite!”.
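The “fuzzy metrics” idea can be sketched concretely. Here is a minimal illustration, with invented categories and hours, of classifying completed desired things and computing a per-category throughput:

```python
from collections import defaultdict

# Hypothetical completed work items: (category, hours spent).
# All categories and numbers here are invented for illustration.
completed = [
    ("feature", 12.0), ("bug_fix", 3.0), ("feature", 20.0),
    ("compat_fix", 5.0), ("bug_fix", 2.5), ("bug_fix", 4.0),
]

def throughput_by_category(items):
    """Items completed per hour, per category -- the 'fuzzy metric'."""
    counts = defaultdict(int)
    hours = defaultdict(float)
    for category, spent in items:
        counts[category] += 1
        hours[category] += spent
    return {c: counts[c] / hours[c] for c in counts}

print(throughput_by_category(completed))
```

The point of grouping first is that a bug fix and a feature are never compared against each other directly, only against other items of the same kind.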
Well, if your goal is comparing two programmers, then the most obvious thing to do is to give them both the same set of diverse tasks, and see how long they take (on each task and on the whole set).
That might work in an academic setting, but doesn’t work in a real-life business setting where you’re not going to tie up two programmers (or two teams, more likely) reimplementing the same stuff just to satisfy your curiosity.
And of course programming is diverse enough to encompass a wide variety of needs and skillsets. Say, programmer A is great at writing small self-contained useful libraries, programmer B has the ability to refactor a mess of spaghetti code into something that’s clear and coherent, programmer C writes weird chunks of code that look strange but consume noticeably less resources, programmer D is a wizard at databases, programmer E is clueless about databases but really groks Windows GUI APIs, etc. etc. How are you going to compare their productivity?
That might work in an academic setting, but doesn’t work in a real-life business setting where you’re not going to tie up two programmers (or two teams, more likely) reimplementing the same stuff just to satisfy your curiosity.
Maybe that’s one reason to have colleges that hand out computer science degrees? ;)
And of course programming is diverse enough to encompass a wide variety of needs and skillsets. Say, programmer A is great at writing small self-contained useful libraries, programmer B has the ability to refactor a mess of spaghetti code into something that’s clear and coherent, programmer C writes weird chunks of code that look strange but consume noticeably less resources, programmer D is a wizard at databases, programmer E is clueless about databases but really groks Windows GUI APIs, etc. etc. How are you going to compare their productivity?
Very carefully.
In seriousness, the answer is that you wouldn’t compare them. Comparing programmer productivity across problem domains as you describe is rarely all that useful.
You really only care about comparing programmer productivity within a domain, as well as comparing the same programmers’ productivity across time.
I’m going to look at the total desirability of what Adam does, at the total desirability of what Bob does...
And in the end I’m going to have to make difficult calls, like how desirable it is for us to have weird chunks of code that look strange but consume noticeably fewer resources.
Each of them is better at different things, so as a manager I need to take that into account; I wouldn’t use a carpenter to paint while the painter is doing framing, but I might set things up so that the painter helps with the framing and the carpenter assists with the painting. I certainly wouldn’t spend a lot of time optimizing to hire only carpenters and tell them to build the entire house.
It’s a two-step process, right? First, you measure how long a specific type of feature takes to implement, from a bunch of historic examples or something. Then, you measure how long a programmer (or all the programmers using a particular methodology or language, whatever you’re measuring) takes to implement a new feature of the same type.
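That two-step process can be sketched in a few lines. This is a toy illustration, with invented feature types and hours, not a real estimation methodology:

```python
from statistics import mean

# Step 1: baseline -- historical hours taken to implement each type of work.
# All numbers here are made up for illustration.
history = {
    "small_ui_feature": [8.0, 10.0, 12.0],
    "data_bug_fix": [3.0, 5.0, 4.0],
}

def baseline(work_type):
    """Average historical time for this type of work."""
    return mean(history[work_type])

def relative_speed(work_type, hours_taken):
    """Step 2: compare a new implementation against the baseline.
    A value above 1.0 means faster than the historical average."""
    return baseline(work_type) / hours_taken

# Someone finishes a small UI feature in 5 hours against a 10-hour baseline:
print(relative_speed("small_ui_feature", 5.0))
```

The obvious caveat, in line with the discussion above, is that everything rides on the classification in step 1 being stable enough that “a new feature of the same type” means something.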
It’s not very hard.
Just recently the task before me was to implement an object selection feature in an app I’m working on.
I implemented it. Now, the app lets the user select objects. Before, it didn’t.
Prior to that, the task before me was to fix a data corruption bug in a different app.
I fixed it. Now, the data does not get corrupted when the user takes certain actions. Before, it did.
You see? Easy.