But that does not answer why companies do not create good environments on purpose. I mean, companies are already complaining about a lack of developers, they have HR departments, they run various team-building activities, they do Scrum and SAFe and whatever, so… why not this one thing that could improve productivity a lot?
You’re citing a bunch of things as being there primarily to “create good environments” which, in fact, have nothing to do with that. HR departments are there to prevent the company from getting sued; they’re not there to make the work environment better. Similarly, processes like Scrum and SAFe, in my experience, are more about management control than about productivity. In practice, what I’ve seen is that management is willing to leave significant amounts of productivity on the table if it means they have greater visibility and control over their employees.
If I had to guess at the reason, I’d pick that one primarily: a crappy on-the-job environment is the price one pays for having control and visibility over what one’s employees are doing, and thus avoiding principal-agent problems.
And where does the “predictability trumps productivity” attitude (which I agree is real) come from?
I would guess the managers are not aligned with the company, because they have asymmetric incentives: a failure is punished more than a success is rewarded. Thus a guaranteed small success is preferable to an uncertain greater success. (Even worse, a great success may raise expectations.)
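To make the asymmetry concrete, here is a minimal sketch in Python; all the payoff numbers are invented for illustration:

```python
# Hypothetical payoffs illustrating asymmetric manager incentives.
# Option A: a small project that is certain to succeed.
# Option B: a project worth twice as much to the company, with a 30% chance of failing.

P_SUCCESS_B = 0.7

# Value to the company (in arbitrary units).
company_ev_a = 1.0                # guaranteed
company_ev_b = P_SUCCESS_B * 2.0  # expected value = 1.4

# The manager's payoff is asymmetric: success earns +1, failure costs -3
# (blame, a lost promotion, and so on).
manager_ev_a = 1.0
manager_ev_b = P_SUCCESS_B * 1.0 + (1 - P_SUCCESS_B) * (-3.0)  # = -0.2

print(f"Company EV: A = {company_ev_a:.1f}, B = {company_ev_b:.1f}")
print(f"Manager EV: A = {manager_ev_a:.1f}, B = {manager_ev_b:.1f}")
# The company prefers B (1.4 > 1.0), but the manager prefers A (1.0 > -0.2).
```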
Or maybe it is how companies are usually organized. As an example, imagine that you make a product that has three parts, A, B, and C, each created by different people with different skills. If the people working on part A get it ready earlier than expected, it does not change anything; they still need to wait for the others. But if the people working on part B get it ready later than expected, the entire product must wait for them. Therefore “not being late” is valuable, but “being early” is not.
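A toy model makes the asymmetry obvious: the product ships when the slowest part is done, so the ship date is the maximum of the parts’ completion times. The week numbers below are made up:

```python
# Toy model: ship date = max(A, B, C). All numbers invented for illustration.

def ship_date(a, b, c):
    """The product ships only when the slowest part is finished."""
    return max(a, b, c)

on_plan = ship_date(10, 10, 10)  # everyone on time         -> week 10
a_early = ship_date(7, 10, 10)   # A finishes 3 weeks early -> still week 10
b_late  = ship_date(10, 13, 10)  # B finishes 3 weeks late  -> week 13

print(on_plan, a_early, b_late)  # 10 10 13
# Being early on one part buys nothing; being late on any part delays
# everything. Hence "not late" gets rewarded while "early" does not.
```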
> Therefore “not being late” is valuable, but “being early” is not.
That’s a big part of it. A company is a soft real-time system. As much as developers like to complain about the seemingly nonsensical deadlines, those deadlines are there for a reason. There are other business processes that need to be coordinated, and there is pressure on developer managers, from elsewhere in the company, to provide a date for when the software will be ready.
Like any real-time system, therefore, it’s important that things get done in a consistent amount of time. Just as, in real-time software, I would rather have something take 200 clock cycles consistently than 20 clock cycles most of the time and 2,000 when there’s an exception, managers will happily enforce processes that waste time but give them the visibility to provide anticipated completion dates and status updates to the rest of the organization.
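A rough simulation of the trade-off, with made-up latency numbers matching the ones above:

```python
import random

# Made-up distributions: "spiky" is fast on average but occasionally slow;
# "consistent" is slower on average but entirely predictable.

random.seed(42)
DEADLINE = 250   # clock cycles; arbitrary deadline for illustration
TRIALS = 100_000

def spiky():
    # 20 cycles 99% of the time, 2000 cycles on the rare exception.
    return 2000 if random.random() < 0.01 else 20

spiky_runs = [spiky() for _ in range(TRIALS)]
spiky_mean = sum(spiky_runs) / TRIALS                 # about 40 cycles
spiky_misses = sum(t > DEADLINE for t in spiky_runs)  # about 1% of runs

print(f"spiky:      mean = {spiky_mean:.1f} cycles, "
      f"misses deadline {100 * spiky_misses / TRIALS:.1f}% of the time")
print("consistent: mean = 200.0 cycles, misses deadline 0.0% of the time")
# The spiky version is about five times faster on average, yet it is the
# one that blows the deadline; a real-time system prefers the consistent one.
```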
I agree with the idea in general, but not with the implementations of it that I see.
If making things reliably on time is so important, you could simply hire more people. (Not at the last minute, when you already see that you will miss the deadline; by then it’s usually too late.)
In my experience, many software projects are late because the teams are chronically understaffed. If you are reliably meeting your deadlines, the managers feel that you have too many people on your team, so they remove one or two. (Maybe in other countries this works differently, I don’t know.) Then there is no slack, which means that things like refactoring almost never happen, and when something unexpected happens, either the deadline is missed or everyone is put under a lot of stress.
The usual response to this is that hiring more people costs more money. Yes, obviously. But the alternative, sacrificing lots of productivity to achieve greater reliability, also costs money.
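A back-of-the-envelope comparison, with the salary and overhead figures invented purely for illustration:

```python
# Invented numbers: compare the cost of one extra hire with the cost of
# productivity lost to predictability-oriented process overhead.

TEAM_SIZE = 5
SALARY = 150_000         # per developer per year (hypothetical)
PROCESS_OVERHEAD = 0.20  # fraction of capacity spent on process (hypothetical)

cost_of_overhead = TEAM_SIZE * SALARY * PROCESS_OVERHEAD  # $150,000/year
cost_of_extra_dev = SALARY                                # $150,000/year

print(f"productivity lost to process overhead: ${cost_of_overhead:,.0f}/year")
print(f"one additional developer:              ${cost_of_extra_dev:,.0f}/year")
# Under these made-up numbers the two options cost the same, so "hiring
# costs money" is only an argument if the overhead is assumed to be cheap.
```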
Now that I think about it, maybe this is about different levels of management having different incentives. Maybe upper management makes the strategic decision to sacrifice productivity for predictability, but then lower management undermines predictability by keeping the teams too small and barely meeting the deadlines, because that is where their bonuses come from? I am just guessing here.
> In my experience, many software projects are late because the teams are chronically understaffed. If you are reliably meeting your deadlines, the managers feel that you have too many people on your team, so they remove one or two.
It’s interesting that you say that, because my experience (US, large-corporation IT; think large banks, large retail, 100,000+ total employees) has been the exact opposite. The projects I’ve worked on have all been quite overstaffed, resulting in poor software architecture, thanks to Conway’s Law. When I worked at the major retailer, for example, I genuinely felt that their IT systems would be healthier and projects would be delivered more quickly if they simply fired half the programmers and let the other half get on with writing code rather than Slack messages.