Here’s a programming example which I expect non-programmers will understand. Everyday programming involves a lot of taking data from one place in one format, and moving it to another place in another format. A company I worked for had to do even more of this than usual, and also wanted to track all those data flows and transformations. So I sat down and had a long think about how to make it easier to transform data from one format to another.
Turns out, this sort of problem can be expressed very neatly as high-school algebra over JSON-like data structures. For instance, you have some data like [{'name': 'john', ...}, {'name': 'joe', ...}, ...] and you want to extract a list of all the names. As an algebra problem, that means finding the list of solutions for X in [{'name': X}] = data. (Of course there are simpler ways of doing it for this simple example, but for more complicated examples with tens or even hundreds of variables, the algebra picture scales up much better.)
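To make that concrete, here's a minimal sketch of the idea (not the actual system I built; the "?X" variable convention and helper names are just assumptions for illustration): a tiny matcher that finds every variable binding which makes the pattern equal the data.

```python
# Minimal sketch of the "algebra over JSON-like data" idea. Not the actual
# system; strings starting with "?" are treated as variables, purely for
# illustration.

def match(pattern, data, bindings):
    """Yield every set of variable bindings under which pattern equals data."""
    if isinstance(pattern, str) and pattern.startswith("?"):
        # Pattern is a variable: bind it, or check consistency with an earlier binding.
        if pattern in bindings:
            if bindings[pattern] == data:
                yield bindings
        else:
            yield {**bindings, pattern: data}
    elif isinstance(pattern, dict):
        # Every key in the pattern must match the corresponding key in the data.
        if isinstance(data, dict) and all(k in data for k in pattern):
            yield from match_items(list(pattern.items()), data, bindings)
    elif isinstance(pattern, list) and len(pattern) == 1:
        # [p] means: match p against each element of the list, collecting all solutions.
        if isinstance(data, list):
            for element in data:
                yield from match(pattern[0], element, bindings)
    elif pattern == data:
        yield bindings

def match_items(items, data, bindings):
    """Match a sequence of (key, sub-pattern) pairs against a dict, threading bindings."""
    if not items:
        yield bindings
        return
    key, sub = items[0]
    for b in match(sub, data[key], bindings):
        yield from match_items(items[1:], data, b)

data = [{"name": "john", "age": 30}, {"name": "joe", "age": 25}]
# Solve [{"name": X}] = data, i.e. find all values of X:
print([b["?X"] for b in match([{"name": "?X"}], data, {})])   # ['john', 'joe']
```

The point is that once a query is phrased as an equation like [{'name': X}] = data, the same little solver handles much hairier transformations with many variables, rather than each one needing bespoke extraction code.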
Problem formulation is even more important in data analysis and machine learning. At one company I worked for, our product boiled down to recommendation. We had very fat tails of specific user tastes and corresponding items, so clustering-based approaches (i.e. find similar users, then recommend the things those users like) gave pretty mediocre recommendations: too many users and items just weren't that similar to any major cluster, and we didn't have enough data to map out the tiny clusters. Formulating the problem as pure content-based recommendation (i.e. recommending things for one user without using any information whatsoever about what "similar" users were interested in) turned out to work far better.
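For flavor, here's a hypothetical sketch of what "pure content-based" means here. This is not our actual product and the feature vectors are made up; it's just the simplest version of the idea: build a taste vector from this user's own liked items and rank everything else against it, ignoring other users entirely.

```python
# Hypothetical content-based recommender sketch (not the real system):
# score each candidate item against a profile built only from *this* user's
# liked items' features; no information about other users is used.
import numpy as np

def content_based_scores(item_features, liked_item_ids):
    """item_features: dict of item_id -> feature vector (e.g. tag counts or an embedding)."""
    liked = np.array([item_features[i] for i in liked_item_ids])
    profile = liked.mean(axis=0)                        # this user's taste vector
    profile = profile / (np.linalg.norm(profile) + 1e-9)
    scores = {}
    for item_id, feats in item_features.items():
        if item_id in liked_item_ids:
            continue                                    # don't re-recommend liked items
        v = feats / (np.linalg.norm(feats) + 1e-9)
        scores[item_id] = float(v @ profile)            # cosine similarity to the profile
    return sorted(scores, key=scores.get, reverse=True)

# Toy example with 3-dimensional "tag" features:
items = {"a": np.array([1.0, 0.0, 0.0]),
         "b": np.array([0.9, 0.1, 0.0]),
         "c": np.array([0.0, 0.0, 1.0])}
print(content_based_scores(items, liked_item_ids={"a"}))  # ['b', 'c']
```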
Anyway, that’s enough from my life. Some historical examples:
The Arrow-Debreu model of an economy. Arrow and Debreu created the first really general theory of market equilibrium (at least as far as I know) by taking convexity as their main assumption.
Shannon’s theory of information.
Jaynes’ work on statistical mechanics (and especially his interpretation of maximum entropy).
The Black-Scholes model for pricing options, and all its subsequent extensions.
Linear algebra. Turns out it's a relatively young discipline, only about a century old; in particular, that means people used to solve linear ODEs without the benefit of eigendecomposition or the matrix exponential (there's a quick illustration of what those buy you sketched after this list).
Game theory, basically invented by Von Neumann and Morgenstern.
Shortly after Cook gave us the first interesting NP-complete problem (itself a great example of clever formulation), Karp introduced the world to reduction proofs of NP-completeness (an even better example).
Going back a bit further: Descartes’ development and popularization of coordinate systems for geometry.
… etc. Practically any topic in applied math began with somebody finding a neat new formulation.
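To spell out the linear-algebra point above, here is the standard textbook illustration (nothing specific to the historical work, just the formulation it enabled): eigendecomposition turns a coupled system of linear ODEs into independent scalar equations, and the matrix exponential packages up the whole solution.

```latex
% Assuming A is diagonalizable, A = V \Lambda V^{-1} with \Lambda diagonal.
% Changing coordinates to y = V^{-1} x decouples the system:
\[
  \dot{x} = A x
  \quad\Longrightarrow\quad
  \dot{y} = \Lambda y,
  \qquad
  y_i(t) = e^{\lambda_i t}\, y_i(0),
\]
\[
  x(t) = V e^{\Lambda t} V^{-1} x(0) = e^{A t} x(0).
\]
```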
Information theory is a great example of this. You can find some nice articles around discussing its history. Basically, Shannon lived for years with the details that would eventually be abstracted into information theory, and he felt there was some general way to describe it all, but it took him considerable effort to figure it out; before the theory was complete, it was very much non-obvious that a good unifying abstraction existed, or what it would look like. Now the short paper that introduced information theory remains one of the most widely read academic papers of all time, and much of our world would be much harder to reason about without this abstraction (in fact, it's hard to even wrap your mind around what it must have been like not to have the theory, now that you've grown up in a world suffused with it).