Keep your protos in one repo
Link post
This analysis is targeted at organizations that have not yet locked in their strategy for managing protobufs/gRPC, or have the capacity to pivot. I expect the principles involved to generalize to other organizational questions. Some of the tradeoffs are only relevant from a “protos as service interfaces” (gRPC/twirp/etc) orientation, but others are more general.
Meta-level conflict of interest notice: This is a linkpost to a blog for a (commercial) side-project of mine.
Object-level conflict of interest notice: Protocall currently only supports single-repo workspaces, not multi-repo workspaces. This analysis represents an accurate accounting of my view on the tradeoffs involved in this decision, and is not motivated by any desire to make my own life easier. Implementing basic support for multi-repo workspaces would not be an overwhelming technical challenge; the edge cases around e.g. namespace collisions would need solving but I would only need to solve them once.
Most engineering organizations should keep their proto files in one repo. If you’re already operating with a single monorepo for your entire codebase, nothing about protos changes the calculus. If you’re not, the benefits of keeping your proto files in one place may not be obvious.
Here are some (valid) reasons for not keeping your proto files in one repo.
Locality
If your proto files are primarily used to define service interfaces, keeping them in the same repo as the service code can reduce overhead for developers working on that service. They only need to issue one PR to update the service interface, rather than two. Working across multiple repos is challenging for most build systems, so there might also be less tooling work involved in maintaining a quick development loop.
Differential Processes, Tooling, and Norms
If your org has multiple repos, it’s likely that teams own their own repos. Those teams may have wildly different engineering processes, tooling, norms, etc. Depending on the details, those may be possible to maintain in harmony within a single repo, but it can be less overhead to have teams manage their own processes with the repo as the dividing line of responsibility.
Avoiding (Inappropriate) Shared Dependencies
Protos come with the ability to import other protos. This makes for a very tempting footgun. While there are some rare situations where it makes sense to import a proto from another service or business domain, the most common reason this happens is an invocation of DRY: “a message schema that fits my needs already exists somewhere else, so why would I write a new one identical to the old one?”
This is a mistake you should avoid at all costs. The domain you are attempting to model with your schema is not the domain being modeled by the existing proto. If that proto changes, it will be evolving in a different direction, leaving you with a dependency on a model that becomes an increasingly poor fit for your use of it over time.
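Concretely, the temptation tends to look something like this (a hypothetical sketch; the file paths, package, and message names are invented for illustration):

```proto
// orders/v1/order.proto (hypothetical)
syntax = "proto3";

package foobar.orders.v1;

// Tempting: the billing domain already defines a Customer message, so why
// write another one? Because billing's Customer will evolve to serve
// billing, and this import couples the orders schema to that evolution.
import "billing/v1/customer.proto";

message Order {
  string order_id = 1;
  // A dependency on another domain's model. An orders-specific customer
  // message is the better choice, even if it starts out identical.
  foobar.billing.v1.Customer customer = 2;
}
```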
Keeping protos in separate repos is a helpful (but not wholly sufficient) way to avoid falling into this trap, since importing protos from another repo requires writing fairly specialized build tooling.[1]
However, the upsides of keeping your proto files in one repo are much stronger.
Locality
Hold on, wasn’t this an upside for keeping protos with the service code? Yes, for the maintainers of that service. The consumers (clients) of that service benefit much more from having all the proto files in a single, predictable location. Instead of needing to figure out where the service code lives in order to examine its interface, every engineer in the organization knows there’s exactly one place they need to look. The mental overhead stays constant instead of scaling with organization size.
Consistent Processes, Tooling, and Norms
Protos are complicated. There’s both essential and accidental complexity.
The essential complexity mostly comes from ensuring your schema accurately models whatever it is you’re trying to model, and forward-looking considerations such as extensibility.[2]
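As one hypothetical illustration, those forward-looking considerations show up as small decisions in the schema itself, like reserving removed field numbers and keeping a zero-value enum entry for unknown cases:

```proto
// payments/v1/payment.proto (illustrative sketch, not from any real schema)
syntax = "proto3";

package foobar.payments.v1;

enum PaymentStatus {
  // Zero value kept as UNSPECIFIED so new statuses can be added later
  // without old clients silently misreading them as a real status.
  PAYMENT_STATUS_UNSPECIFIED = 0;
  PAYMENT_STATUS_PENDING = 1;
  PAYMENT_STATUS_SETTLED = 2;
}

message Payment {
  // Field numbers 2 and 3 were removed in an earlier revision; reserving
  // them prevents accidental, wire-incompatible reuse.
  reserved 2, 3;
  reserved "legacy_amount";

  string payment_id = 1;
  PaymentStatus status = 4;
}
```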
The accidental complexity is where protos can really bite you, and where a monorepo can make your life much easier. Among other things, you need to think about:
Linting
Protos are code! Naming conventions, structure, casing—nearly every single concern that makes a linter desirable for your codebase also applies to your proto files. Applying proto code standards in multiple repos either requires teams to reinvent the wheel each time, inevitably leading to drift, or for the broader organization to invest in tooling that will make standards seamless across repos. A monorepo neatly resolves this issue.
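To make the drift concrete, here is a hypothetical file from a repo with no shared lint rules, deviating from conventions (versioned package names, Request/Response message suffixes, snake_case fields) that the rest of the org might otherwise follow:

```proto
// inventory.proto, as it might look in a repo with no shared lint rules
// (hypothetical example).
syntax = "proto3";

// Unversioned, top-level package name, unlike a foobar.<domain>.v1
// convention used elsewhere in the org.
package inventory;

service Inventory {
  // RPC without the usual paired <Method>Request/<Method>Response types.
  rpc listItems(Query) returns (ItemList);
}

message Query {
  // Field names drift between snake_case and camelCase.
  string WarehouseID = 1;
  int32 pageSize = 2;
}

message ItemList {
  repeated string item_ids = 1;
}
```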
Namespace management
Imagine you work at the company Foobar. There are two teams at Foobar, the Blue team and the Green team.[3] These teams maintain their own repositories, and colocate their proto files with their service code. The Blue team maintains a service defined by this proto file:
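Something along these lines, where the specific fields are unimportant but the package declaration will matter shortly (the file is an illustrative sketch):

```proto
// blue/auth.proto, in the Blue team's repo (illustrative sketch).
syntax = "proto3";

// Note the short, unqualified package name.
package auth;

service AuthService {
  rpc Authenticate(AuthRequest) returns (AuthResponse);
}

message AuthRequest {
  string user_id = 1;
  string token = 2;
}

message AuthResponse {
  bool authorized = 1;
}
```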
The Green team also happens to maintain a service described by a very similar proto file! (The fields in AuthRequest and AuthResponse may differ, but those aren’t salient here.)
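Sketching the Green team’s file the same way, it might declare the same package and service name:

```proto
// green/auth.proto, in the Green team's repo (illustrative sketch).
syntax = "proto3";

// Same package and service name as the Blue team's file, so the
// generated code for both ends up in the same namespace.
package auth;

service AuthService {
  rpc Authenticate(AuthRequest) returns (AuthResponse);
}

message AuthRequest {
  string session_id = 1;
}

message AuthResponse {
  bool ok = 1;
}
```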
One day, an engineer on the Green team needs to implement a feature that involves one of their services talking to the Blue team’s AuthService. This engineer attempts to add the dependency with the generated AuthService client to the build of a service owned by the Green team.
Can you guess what happens next?
What actually happens next depends on many details unspecified in this blog post, including the language and build system used, but a very likely outcome is a namespace collision.
Truly, every developer dreams of spending hours, days, or maybe even weeks refactoring their services to use renamed protos, instead of doing feature development.
You, too, can ensure your developers experience this unique joy! (Or you could not.)
Builds (artifact generation, publishing, language support, etc)
Just as with linting, there are three options: you can require that teams roll their own build and release tooling for protos, dedicate organizational resources to standardizing it across repos, or dodge the issue entirely by keeping it in one place. Conveniently, keeping your protos in one place also makes it trivial to detect and prevent namespace collisions!
The bottom line is straightforward.
To summarize:
A monorepo imposes a small, fixed locality cost on each service owner, so the total cost scales only linearly with organizational size, while multiple repos impose costs that scale polynomially[4] with organizational size.
With a monorepo, the amount of tooling you need to build to support a seamless developer experience with protos scales only with the number of build systems you need to support, and even then you only need to build most of those integrations once. This may not even require a full-time engineer after the initial groundwork has been done.
With protos in multiple repos, you need to build and maintain tooling to ensure a consistent experience across repos. This is more work, with many more subtle challenges and failure modes, both technical and organizational.
Letting teams handle proto management themselves is taking on tech debt with an extremely high interest rate and no payback plan. At the level of organizational scale where it makes sense to adopt protos at all, this is probably a bad idea.
Thanks to Justis Mills for their valuable feedback on this post.
If you find yourself writing such tooling, stop and ask yourself: “Is there any other way I can achieve the same functionality?”
There are almost always functional alternatives. They are probably better than signing yourself up for managing cross-repo proto imports.
If you aren’t sure about your situation, drop me a line (robert@protocall.dev).
A concern occasionally raised about the adoption of protobufs/gRPC is the additional complexity that some features bring, like having a strict schema and requiring backwards-compatible changes, compared to alternatives like JSON. This concern mostly elides the fact that the essential complexity is effectively unchanged by the choice of IDL and serialization format; some choices (such as protobufs) simply make the complexity more legible and difficult to ignore. This is a topic that deserves its own post, so I might come back to it in the future.
Blue and green are inspired by this fable.
Each engineer would pay a cost that scaled linearly with respect to the number of other repos they might need to interact with. As an organization grows in size, this complexity could be modeled by (# of employees * # of repos). If the number of repos scales linearly with engineering headcount, which seems like a reasonable approximation, we get n²/r edges, for n employees and r employees per repo. This is similar to Graičiūnas’ model for how many relationships between subordinates a manager would need to oversee.