I’d like to note the sheer volume of people in the wider startup ecosystem generating reasons why they are smarter than science when this is brought up.
Let’s investigate how little “evidence” they need before they completely ignore said research:
Many have the unmeasured, ridiculously unreliable anecdata of “I produce amortized peak output working a higher number of hours per week” (though it’s hard to tell that anyone has actually tried looking before claiming it: say, working six months at 40 hours/week and another six months at 70 hours/week). Why is this unreliable? Because working longer hours produces more artifacts of work, even when it produces less deliverable work: you have all those memories of being in the office, more emails, more comments in your bug tracker, etc. But how much work actually got done? Even if they did run the experiment, there isn’t a coherent way of measuring the productivity of creative workers at n=1; almost all of us have quite a lot of variation in the complexity and familiarity of our work.
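To make the n=1 problem concrete, here’s a toy simulation (purely illustrative; every number in it is an assumption, not data): even if true hourly output is identical at 40 and 70 hours/week, artifact counts still climb with hours, while week-to-week variation in task complexity means a single person’s deliverable-output comparison can come out either way.

    import random

    random.seed(0)

    def simulate_half_year(hours_per_week, weeks=26):
        """Toy model: assume output/hour flattens past ~50 h/week,
        task complexity adds large week-to-week noise, and 'artifacts'
        (emails, tracker comments) scale with hours worked."""
        deliverable = artifacts = 0.0
        for _ in range(weeks):
            # Assumed diminishing returns: hours past 50 are half wasted.
            effective = min(hours_per_week, 50) - 0.5 * max(hours_per_week - 50, 0)
            # The n=1 problem: complexity/familiarity varies a lot per week.
            deliverable += max(effective * random.gauss(1.0, 0.6), 0.0)
            artifacts += hours_per_week * random.gauss(1.0, 0.1)
        return deliverable, artifacts

    d40, a40 = simulate_half_year(40)
    d70, a70 = simulate_half_year(70)
    print(f"40 h/wk: deliverable ~{d40:.0f}, artifacts ~{a40:.0f}")
    print(f"70 h/wk: deliverable ~{d70:.0f}, artifacts ~{a70:.0f}")
    # Artifacts come out ~75% higher at 70 h/wk every time; which condition
    # "wins" on deliverables depends mostly on the noise draw.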
I know of only a couple of people who have dramatically dropped their hours, and 100% of them are more productive (both more efficient and more effective).
So these are people working from almost no data (assuming they actually did measure themselves), and they claim to know better.
This is all to say: the startup ecosystem isn’t thinking this through carefully. To the extent they end up being correct, it will be largely coincidence.
Software programmers also frequently argue that it’s impossible to measure software productivity.
As a software programmer myself, I can say that’s a pretty bizarre argument to make. Informally, almost all experts have a tool or language they feel gives them an advantage, and language holy wars are all about this topic. That doesn’t mean they aren’t just making it up, but it’s worth considering that people saying “Node is better than Rails!” or “TDD is better than not doing TDD!” can’t simultaneously claim “there is no way to order different approaches by productivity.”
But in fact, there is a way, and such measurement has been happening for long enough for us to develop reasonably accurate models of how productivity changes over time (e.g., as tools get better); see Yannis’ Law, which I confirmed for myself a couple of months ago (the example task took me about five minutes, not including reading the description, so I’m within a factor of 2 of the prediction; I think we may need a bigger task in a decade or so, since the current one is rapidly approaching weird task-size minimums).
http://cgi.di.uoa.gr/~smaragd/law.html
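For concreteness, here’s the arithmetic behind that factor-of-2 check. The six-year doubling period is the law’s actual statement; the baseline effort and elapsed years below are placeholder numbers I picked for illustration, not figures from the linked page.

    def yannis_speedup(years_elapsed, doubling_period=6):
        """Yannis' Law: programmer productivity doubles roughly every
        6 years, so expected speedup over a baseline is 2**(t/6)."""
        return 2 ** (years_elapsed / doubling_period)

    # Placeholder figures, for illustration only: suppose the reference
    # task took ~16 hours at baseline, measured again 48 years later.
    baseline_minutes = 16 * 60
    predicted = baseline_minutes / yannis_speedup(48)  # 960 / 2**8 = 3.75
    measured = 5.0                                     # the ~5 minutes above

    within_factor_2 = predicted / 2 <= measured <= predicted * 2
    print(f"predicted ~{predicted:.2f} min, measured {measured:.0f} min, "
          f"within a factor of 2: {within_factor_2}")  # -> True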