Yes, it does feel awesome. This discontinuity of the effort → outcome map ([almost] nothing… nothing… nothing… jump!) is, to me, an instance of the Hegelian/Marxian quantity → quality conversion, something that jumps out at me again and again in different contexts. I wonder if there is a way to formalize it.
I wish that I understood this post. I am upvoting you in the hopes that you feel obligated to explain further.
My understanding of the “quantity to quality conversion” phrase is that in many situations the relation between some inputs and outputs is not linear. More specifically, there are many situations where the relation seems linear at the beginning, but at some later point the increase in outputs becomes incredibly huge (“incredibly” = for people who based their models on extrapolating the linear relationship at the beginning). Even more specifically, you can have one input “A” that has an obvious effect on “X”, but almost zero effect on “Y” and “Z”. Then, at some moment, further increases of “A” make “Y” and “Z” start growing too (which was totally unexpected under the old model).
Specific example: You start playing piano. At the beginning, it feels like it has a simple linear impact on your life. You spend 1 hour playing piano, and you gain the ability to play a simple song quite well. You spend 2 hours, and you can play another simple song quite well. Extrapolate this, and you get a model. According to this model, after spending 80,000 hours playing piano, you would expect to be able to play 80,000 simple songs quite well. What happens in reality is that you gain the ability to play any simple song well just by looking at the sheet music, the ability to play very complex music, the ability to make money by playing music; you become famous, and you get a lot of social capital, friends, sex, drugs, etc. (Both nonlinear outputs, and outputs not predicted by the original model.)
A similar pattern appears in many different situations, so some people invented a mysterious-sounding phrase to describe it. Now it seems like some law of nature. But maybe it is just a selection effect (some situations develop like this, and we notice “oh, the law of quantity to quality conversion”; other situations don’t, and we ignore them).
In other words, “quantity” seems to mean “linear model”, “quality” means “model”, and the whole phrase decoded means “if you change variables enough, you may notice that the linear model does not reflect reality well (especially in situations where the curve starts growing slowly, and then it grows very fast)”.
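As a toy numerical sketch of the above (the logistic “true” curve and all numbers here are invented purely for illustration): fit a line to the early observations, then compare its long-range extrapolation with the true curve.

```python
from math import exp

def true_output(a):
    # Hypothetical "true" relationship: slow start, then rapid growth
    # around a = 50 (a logistic curve, chosen only for illustration).
    return 100 / (1 + exp(-0.2 * (a - 50)))

# "Early" observations: inputs 0..10, where the curve looks nearly flat/linear
early = [(a, true_output(a)) for a in range(11)]

# Ordinary least-squares line through the early points
n = len(early)
sx = sum(a for a, _ in early)
sy = sum(y for _, y in early)
sxx = sum(a * a for a, _ in early)
sxy = sum(a * y for a, y in early)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def linear_model(a):
    return intercept + slope * a

# At a = 80 the linear extrapolation is far below the true value
print(linear_model(80), true_output(80))
```

The linear model is an excellent fit on the early data, yet its prediction at a = 80 is off by roughly two orders of magnitude, which is the “incredibly huge” surprise described above.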
I was more after a genuine discontinuity than a simple nonlinearity like a quadratic or even an exponential dependence. And you are right, the selection effect is at work, but it’s not a negative in this case. We want to select similar phenomena and find a common model for them, in order to be able to classify new phenomena as potentially leading to the same effects.
For example, if you look at some hypothetical new government policy that indexes the minimum savings-account rate to, say, inflation, you should be able to tell whether, after a sizable chunk of people shift their savings to this guaranteed investment, the inflation rate will suddenly skyrocket (this has happened before in some countries).
Or, if you connect billions of computers together, whether it will give rise to a hive mind that takes over the world (it has not happened, despite some dire predictions, mostly in fictional scenarios).
Another example: if you are trying to “level up”, what factors would hasten the process, so that you don’t have to spend 10k hours mastering something, but only, say, 1000?
If you pay attention to this leveling effect happening in various disparate areas, you might get your clues from something like stellar formation, where increasing metallicity significantly decreases the mass required for a star to form (a dust cloud “leveling up”).
Classifying, modeling and constructing successful predictions for this “quantity to quality conversion” would be a great example of useful applied philosophy.
There are (at least) two different things going on here that I think it’s valuable to separate.
One is, as you say, the general category of systems whose growth rate expressed in delivered value “skyrockets” in some fashion (positive or negative) at an inflection point that is unexpected given our current model. I don’t know if that’s actually a useful reference class for analysis (that is, I don’t know if an analysis of the causes of, say, runaway inflation will increase our understanding of the causes of, say, a runaway greenhouse effect), any more than the class of systems with linear growth rates is, but I’ll certainly agree that our ability to not be surprised by such systems when we encounter them is improved by encountering other such systems (that is, studying runaway inflation may teach me not to simply assume that the greenhouse effect is linear).
The other has to do with perceptual thresholds and just-noticeable differences. I may experience a subjective “quantity to quality” transition just because a threshold is crossed that makes me pay attention, even if there’s no significant inflection point in the growth curve of delivered value.
I don’t know if that’s actually a useful reference class for analysis
I don’t know, either, but I feel that some research in this direction would be justified, given the potential payoff.
The other has to do with perceptual thresholds and just-noticeable differences.
This might, in fact, be one of the models: the metric being observed hides the “true growth curve”. So a useful analysis, assuming it generalizes, would point to a more sensitive metric.
Phase transitions?
Right, it works for a bunch of specific instances of this phenomenon, but how do you construct a model which describes both phase transitions and human learning (and a host of other similar effects in totally dissimilar substrates)?