Strictly speaking, asymptotic analysis is not very demanding: if you have a function f(N) that you can bound above in the limit as a function of N, you can do asymptotic analysis on it. In practice I mostly see asymptotic analysis used to evaluate counterfactuals: you have some function or process that’s well-behaved for N inputs, and you want to know whether it will still be well-enough behaved if you had 2N inputs instead, without actually doing the experiment. You’re rendering ten characters on the screen in your video game—could you get away with rendering 20, or would you run out of graphics card memory? Your web site is serving 100 requests per second with low latency—if you suddenly had 1000 requests per second instead, would latency still be low? Would the site still be available at all? Can we make a large language model with the exact same architecture as GPT-3, but a book-sized context window? Asymptotic analysis lets you answer questions like that without having to do experiments or think very hard—so long as you understand f correctly.
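To make the counterfactual concrete, here is a minimal sketch of mine (not from any real system), assuming we have already convinced ourselves that cost scales roughly as N log N; the model, the units, and the numbers are all illustrative assumptions.

```python
from math import log2

def cost(n: int) -> float:
    # Assumed cost model: we believe this process scales roughly as N log N.
    # Arbitrary units; the point is the shape of the curve, not the constant.
    return n * log2(n)

n = 100
# The counterfactual question: how much worse does 2N get, without running it?
print(cost(2 * n) / cost(n))  # about 2.3x, slightly worse than a plain doubling
```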
When I’m reviewing software designs, I do this kind of analysis a lot. There, it’s often useful to distinguish between average-case and worst-case analyses: when you’re processing 100 million records in a big data analysis job, you don’t care that much about the variance in processing any individual record, but when you’re rendering frames in a video game you work hard to make sure that every single frame gets done in less than 16 milliseconds, or whatever your budget is, even if that means your code is slower on average.
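As a toy illustration of that distinction (my own, purely hypothetical numbers): the batch job only cares about the mean, while the game has to respect the budget on every single frame.

```python
# Hypothetical per-frame costs in milliseconds; one frame hits a slow path.
frame_times_ms = [12.1, 13.0, 11.8, 30.5, 12.4]
budget_ms = 16.0

average = sum(frame_times_ms) / len(frame_times_ms)
worst = max(frame_times_ms)

print(f"average: {average:.1f} ms  (looks fine, as a batch job would see it)")
print(f"worst:   {worst:.1f} ms  (blows the {budget_ms:.0f} ms frame budget)")
```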
This makes it sound like a computer science thing, and for me it mostly is, but you can do the same thing to any scaling process. For example, in some cultures, when toasting before drinking, it’s considered polite for each person to toast each other person, making eye contact with them, before anyone drinks for real. If you’re in a drinking party of N people, how many toasts should there be, and how long should we expect them to take? Well, clearly there are about N²/2 pairs of people, but you can do N/2 toasts in parallel, so with good coordination you should expect the toasts to take time proportional to N…
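A quick back-of-the-envelope check of that arithmetic (my sketch, using a standard round-robin schedule as the assumed coordination scheme): total toasts grow like N², but the number of parallel rounds grows only like N.

```python
def toast_schedule(n: int) -> tuple[int, int]:
    pairs = n * (n - 1) // 2               # total toasts: one per pair, ~N^2/2
    rounds = n - 1 if n % 2 == 0 else n    # round-robin rounds of up to N/2 parallel toasts, ~N
    return pairs, rounds

for n in (4, 8, 16, 32):
    pairs, rounds = toast_schedule(n)
    print(f"{n:2d} people: {pairs:3d} toasts in {rounds:2d} rounds")
```

The linear-time conclusion holds so long as that parallelism is actually available, which is exactly the assumption the next paragraph pokes at.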
…except that usually these toasts are done in a circle, with everyone holding still, so there’s an additional hard-to-model constraint around the shared toasting space in the circle. That is, to me, a prototypical example of the way that asymptotic analysis can go wrong: our model of the toasting time as a function of the number of people was fine as far as it went, but it didn’t capture all the relevant parts of the environment, so we got different scaling behavior than we expected.
(The other famous use of asymptotic analysis is in hardness proofs and complexity theory, e.g. in cryptography, but those aren’t exactly “real-world processes” even though cryptography is very real.)