> Is this falsifiable?
Innovation research is notoriously hard to falsify and subject to just-so stories and post-hoc justifications.
One of the things I find compelling about S-curves is just how frequently they show up in innovation research coming from different angles and using different methodologies.
Some examples:
Everett Rogers is a communication professor trying to figure out how ideas spread. So he finds measurements of ownership of different technologies, like television and radio, throughout society. Finds S-curves.
Clayton Christensen is interested in how new firms overtake established firms in the market. Decides to study the transistor market because measurements are easy and it moves quickly. Finds S-curves.
Carlota Perez is interested in broad shifts in society and how new innovations affect the social context. She maps out these large shifts using historical records. Finds S-curves.
Genrich Altshuller is interested in how engineers create novel inventions. So he pores through thousands of patents, looks for the ones that show real inventiveness, and tries to find patterns. Finds S-curves.
Simon Wardley is interested in the stages that software goes through as it becomes commoditized. Takes recent tech innovations that were commoditized and categorizes the news stories about them, then plots their frequency. Finds S-curves.
> How do S-curves help me make predictions, or, alternately, tell me when I shouldn’t try predicting?
Understanding the separate patterns can give you an idea of the most likely future of different technologies. For instance, there's a question on LW that I was able to better understand and predict because of my understanding of S-curves and how innovations stack.
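As a toy illustration of the prediction angle (not from the original post — the function names and numbers here are made up for the example): if you believe a trend is one S-curve, you can fit a logistic to the early data and extrapolate the ceiling, assuming SciPy is available.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Three-parameter logistic: ceiling K, growth rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic "adoption" data from a known S-curve (ceiling K = 100),
# observed only through its early and middle phase.
t_obs = np.arange(0, 13)
y_obs = logistic(t_obs, K=100.0, r=0.6, t0=10.0)

# Fit the partial data, then read off the extrapolated saturation level.
(K_hat, r_hat, t0_hat), _ = curve_fit(
    logistic, t_obs, y_obs, p0=[max(y_obs) * 2, 0.5, 8.0]
)
print(round(K_hat))
```

Note the caveat this whole thread is about: the fit only extrapolates well if the trend really is a single S-curve, and real trends are often stacks of them.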
> How do I know when some trend isn’t made of S-curves?
I think understanding how to work with fake frameworks is a key skill here. Something like S-curves isn't used in a proof to get to the right answer. Rather, you can use it as evidence pointing you towards certain conclusions. You know that they tend to apply in environments with self-reinforcing positive feedback loops and constraints on those feedback loops. You know they tend to apply for diffusion and innovation. The more of these features a situation has, the more useful you can expect the framework to be; the fewer it has, the less useful. By holding up a situation to lots of your fake frameworks, and seeing how much each applies, you can "run the Bayesian Gauntlet" and decide how much probability mass to put on different predictions.
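The "self-reinforcing feedback loop plus constraint" mechanism can be sketched in a few lines of illustrative Python (the names and parameters are hypothetical, not from the post): growth proportional to current adopters is the positive feedback, and growth proportional to remaining headroom is the constraint. Together they produce the S shape.

```python
def simulate_diffusion(rate=0.5, capacity=1000.0, steps=30):
    """Discrete logistic growth: positive feedback capped by a constraint."""
    adopters = 1.0
    history = []
    for _ in range(steps):
        history.append(adopters)
        # Growth is proportional to adopters (self-reinforcing feedback)
        # and to remaining headroom (the constraint).
        adopters += rate * adopters * (1 - adopters / capacity)
    return history

curve = simulate_diffusion()
# Per-step gains are small early (few adopters), peak in the middle,
# and shrink late (little headroom left) -- the S shape.
gains = [b - a for a, b in zip(curve, curve[1:])]
print(gains.index(max(gains)))
```

Take away either ingredient and the shape disappears: feedback alone gives an exponential, a constraint alone gives saturation without takeoff.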