I think this sort of problem is far more common than most engineers intuitively expect.
Minor nit: Most software engineers I know who have worked on systems that interact with the real world, and especially with people (e.g. anything in the 'gig economy', real estate tech, etc.), are aware of these problems, even if they can't do nearly as good a job as you did of explaining them and describing when they're likely to occur. So I would say either "more common than engineers intuitively expect, until they work with a messy system like this" or "more common than more theory-oriented people intuitively expect."
What you describe is part of what I was trying to get at (though not doing as good a job as you) in my cruxes shortform post. Specifically, having a healthy respect for how difficult "messiness" is to deal with, and for its resistance to elegant solutions, is very related to the long tail.
Interestingly, a lot of that cruxes post resonates with me—I hate definition-theorem-proof textbooks, strongly prefer my math to be driven by applications and examples, have ample exposure to real-world messiness, and I've only very grudgingly come to accept that I'm not an engineer at heart.
But at the same time, I still think that, for instance, “AGI needs Judea Pearl more than John Carmack”.
I think that too many people on both sides of the theory/engineering aesthetic divide identify “theory” with what I’d call “bad theory”—think Bourbaki-esque opaque math textbooks, where 80% of the effort is just spent defining things and there don’t seem to be many (if any) practical applications. The definition-theorem-proof writing style is a central example here.
By contrast, I think “good theory” usually lets us take real-world phenomena which we already intuitively recognize, formalize them, then use that formalization to make predictions about them and/or design them. Great examples:
Early days of analytic geometry, when people first began describing geometric shapes via equations
Calculus and early kinematics, which let us describe motion via equations
Information theory
Game theory
Pearl’s work on causality
These all took things which we didn’t previously know how to handle mathematically, and gave us a way to handle them mathematically. The models don’t always match the messy world perfectly, but they tell us what questions to ask, what the key bottlenecks are, and what design approaches are possible. They give us frames through which the world makes more sense—they show ways in which the world is less messy than it first appears, and they help us identify which parts of the mess are actually relevant to particular phenomena.
An analogy: bad theory is drawing a map of an unfamiliar city by sitting in a room with the shades closed. It may produce some cool pictures, but it won't produce a map of the city. Good theory goes out and looks at the world, says "hey, I've noticed a pattern where <interesting thing>, and I don't have a map for that", and then makes the map by abstracting the relevant parts of the real world.
Maybe my post makes it seem otherwise (although I hope not), but I agree with everything you said.
A minor meta point: since writing my original comment I’ve also learned more about graphical models and causality, which has led me to realize I previously underestimated Pearl’s (and his students’ / collaborators’) achievements.