The claim was that the decision to go to cloud computing and microservice architectures wasn’t based on whether they were a good idea.
But also, yes, I think they’re used in many cases where they’re less efficient and a mistake. The main argument for cloud stuff is that it saves dev time, but that’s a poor argument for moving from a working system to AWS. And microservices are mostly a solution to institutional/management problems, not technical ones.
And microservices are mostly a solution to institutional/management problems, not technical ones.
So this is interesting in context, because management and coordination problems are problems! But they’re problems where the distinction between “people think this is a good idea” and “this is actually a good idea” is more bidirectionally porous than the kinds of problems that have more clearly objective solutions. In fact the whole deal with “Worse is Better” is substantially based on observing that if people gravitate toward something, that tends to change the landscape to make it a better idea, even if it didn’t look like that to start with, because there’ll be a broader selection of support artifacts and it’ll be easier to work with other people.
One might expect an engineering discipline to be more susceptible to this dynamic when social factors are more constraining than impersonal physical/computational ones. In software engineering, I think this is true across large swaths of business software, but not necessarily in specialized areas. In mechanical engineering or manufacturing, closer to the primary focus of the original post, I would expect impersonal physical reality to push back much harder.
A separate result of this model would be that things become more fashion-based on average as humanity’s aggregate power over impersonal constraints increases, much like positional goods becoming more relatively prominent as basic material needs become easier to meet.
Well, in the specific case of microservices, I think the main problem being solved is preventing people on other teams from modifying your part of the code.
In theory, people could just not do that. It’s kind of like how private variables in Java are considered important, even though sometimes there’s a good reason to change them and theoretically you could just use variable names / comments / documentation to indicate which variables are normally meant to be changed. There’s a tradeoff between people messing with stuff they shouldn’t and inability to do things because you rely on other groups. You could break a monolithic project into multiple git repos instead, but I guess that psychologically feels worse.
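To make that tradeoff concrete, here’s a minimal Java sketch (the class and field names are hypothetical, purely for illustration): the private field is enforced by the compiler, while the public one relies entirely on its name and comment to say “hands off.”

```java
// Hypothetical example of enforcement vs. convention; not from any real codebase.
public class ConnectionPool {
    // Enforced: the compiler stops code on other teams from touching this.
    private int maxConnections = 10;

    // Convention only: the name and comment say "internal", but nothing
    // actually stops another team from writing to it when they're in a hurry.
    public int internalRetryCount = 3; // INTERNAL -- not meant to be changed

    // With the private field, any change has to go through an entry point
    // that the owning team controls and can validate.
    public void setMaxConnections(int n) {
        if (n <= 0) throw new IllegalArgumentException("must be positive");
        this.maxConnections = n;
    }

    public int getMaxConnections() {
        return maxConnections;
    }
}
```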
I haven’t worked in an organization that uses microservices extensively, but what I hear from people who use them goes far beyond visibility constraints. As an example, allowing groups to run their deployment cycles without synchronizing seems to be a motivation that’s harder to satisfy with independently updated parts of a build-level monolith—not impossible, because you could set something up to propagate full rebuilds and so forth, but more awkward. Either way, as you probably know, “in theory, people could just … but” is a primary motivator behind all kinds of socially- or psychologically-centered design.
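To sketch the decoupling point (the service name, host, and endpoint here are all made up): in a microservice setup, the consuming team’s only dependency on the other team is a versioned network contract, so either side can redeploy on its own schedule without a coordinated rebuild.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical client for another team's service. The billing team can
// rebuild and redeploy whatever sits behind /v1 whenever they like; this
// code never needs to be recompiled in step with them. In a build-level
// monolith, this would be a direct method call into their code, which is
// exactly what ties the two teams' release cycles together.
public class BillingClient {
    private static final String BILLING_URL = "http://billing.internal/v1/invoices/";

    private final HttpClient http = HttpClient.newHttpClient();

    public String fetchInvoice(String invoiceId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(BILLING_URL + invoiceId))
                .GET()
                .build();
        HttpResponse<String> response =
                http.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}
```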
That said, getting into too much detail on microservices feels like it’d get off topic, because your central example of the Giga Press is in a domain where the object-level manufacturing issues of metal properties and such should have a lot more impact. But to circle around, now I’m wondering: does the ongoing “software eating the world” trend come with a side of “software business culture eating into other business cultures”? In the specific case of Tesla, there’s a more specific vector for this, because Elon Musk began his career during the original dot-com era and could have carried the associated memes to Tesla. Is the management and media culture around more physical industries being primed this way elsewhere? Or is this just, as they say, Elon Musk being Elon Musk, with (as I think you suggested in the original post) the media results caused more by the distortion of celebrity and PR than by subtler/deeper dysfunctions?
Hmm, I think there are a few reasons for software people getting into other industries rather than vice versa:
Software has been very profitable, largely because of how ad-based the US economy has become. So a lot of the available money is from the software side.
Because code scales and software doesn’t require as much capital investment as heavy industry, there are more wealthy founders who did some coding themselves than wealthy founders who did, say, chemical engineering themselves. That means you have wealthy people who a) like starting companies and b) are engineering-oriented.
American companies seem to have more of a competitive advantage vs Japan/China for code than for manufacturing. Note that I said companies; Japan actually produces lots of high-quality open-source software.
I don’t think that’s the sophisticated argument for switching your in-house app to the cloud. There’s a recognition that, because the cloud is more efficient for developers, more and more talent will learn to use cloud solutions and more and more infrastructure will be built on top of them.
Which means your organization risks being bottlenecked on talent and infrastructure if you fall too far behind the adoption curve.
“Everyone is going to switch to cloud stuff” means that, in the short term, there will be a shortage of cloud people and an excess of non-cloud people.
Your argument is about hiring in a long-term future where the non-cloud people have retired or forgotten how to do their thing, but we know that’s not what US executives were thinking because they don’t think that long-term due to the incentives they face.
And it certainly doesn’t explain some groups of companies switching to cloud stuff together and then switching back together later.
we know that’s not what US executives were thinking because they don’t think that long-term due to the incentives they face
The story of “they’re doing something that’s bad in the short term but good in the long term, yet only accidentally, because they’re actually trying to do something good in the short term and failing” seems suspicious.
I know that the CEOs I know do plan for the long term.
I also know that many of the world’s most famous consumer brands (Apple, Amazon, Tesla) have valuations that only make sense because people trust the CEOs to prioritize the long term, and those future earnings are priced in.
And I also know that if you look at the spending budgets of many of the top consumer tech companies, and the amounts spent on long-term R&D and moonshots, it sure looks like they are spending for the long term.
We’re talking about different timescales. Apple’s investments paid off within the tenure of top executives. Meanwhile, banks are still using COBOL.
I’m not talking about 10-year time horizons, no.