The Boy Who Cried Wolf is a pretty good example of updating on new information, I guess.
But it seems sort of pointless to attempt to find old stories that show the superiority of a supposedly new way of thinking. If the way of thinking is so new, then why should we expect to find stories about it? And if we do, what does that say about the superiority of the method (that is, that it was known N years ago but didn’t take over the world)? Perhaps this is too cynical?
The Boy Who Cried Wolf is a pretty good example of updating on new information, I guess.
Agreed, but the primary lesson of that story is “guard your reputation if you want to be believed.” The reverse story—”don’t waste your time on liars”—probably shouldn’t end with there actually being a wolf, as one should not expect listeners to understand the sometimes subtle separation between good decision-making and good consequences.
But it seems sort of pointless to attempt to find old stories that show the superiority of a supposedly new way of thinking.
New stories are useful too.
I also wouldn’t call rationality a new way of thinking, any more than I would call science a new way of thinking. Both are active fields of research and development. Both have transformative milestones, such that you might want to call science before X ‘protoscience’ instead of ‘science’, but only in the same way that modern science is ‘protoscience’ because Y hasn’t happened yet.
It’s also worth noting that the research and development often makes old ideas more precise. People ran empirical tests before they knew what empiricism was. Similarly, we should expect to see people acting cleverly before a systematic way to act cleverly was developed.
And if we do, what does that say about the superiority of the method (that is, that it was known N years ago but didn’t take over the world)?
A meme’s reproductive success and its desirability for its host can differ significantly.
The reverse story—”don’t waste your time on liars”—probably shouldn’t end with there actually being a wolf, as one should not expect listeners to understand the sometimes subtle separation between good decision-making and good consequences.
The lesson of the story (for the townspeople) is that when your test (the boy) turns out to be unreliable, you should devise a new test (replace him with somebody who doesn’t lie).
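Put in explicitly Bayesian terms, the “unreliable test” point is that the boy’s cry stops carrying evidence once he cries about as often whether or not a wolf is present. Here is a minimal Python sketch of that, with made-up numbers chosen purely for illustration (the prior, the likelihoods, and the function name are all assumptions, not anything from the fable itself):

```python
# Illustrative only: how much a cry of "wolf!" should shift the townspeople's belief,
# comparing an assumed honest boy with an assumed habitual liar.

def posterior_wolf(prior_wolf, p_cry_given_wolf, p_cry_given_no_wolf):
    """Bayes' rule: P(wolf | cry) from the prior and the two likelihoods."""
    joint_wolf = prior_wolf * p_cry_given_wolf
    joint_no_wolf = (1 - prior_wolf) * p_cry_given_no_wolf
    return joint_wolf / (joint_wolf + joint_no_wolf)

prior = 0.05  # assumed base rate of a wolf actually showing up on a given day

# Honest boy: cries almost only when there is a wolf, so the cry is strong evidence.
print(posterior_wolf(prior, p_cry_given_wolf=0.95, p_cry_given_no_wolf=0.01))  # ~0.83

# Habitual liar: cries about as often either way, so the cry is barely evidence at all.
print(posterior_wolf(prior, p_cry_given_wolf=0.95, p_cry_given_no_wolf=0.90))  # ~0.05
```

Once the two likelihoods are close, the posterior barely moves from the prior, which is the precise sense in which the townspeople need a new test rather than merely a discounted version of the old one.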
If the way of thinking is so new, then why should we expect to find stories about it?
To quote the guy this story was about, “there is nothing new under the sun”. At least nothing directly related to our wetware. So we should expect that every now and then people stumbled upon a “good way of thinking”, and when they did, the results were good. They just might not have managed to identify what exactly made the method good, or to replicate it.
Also, as MaoShan said, this is a kind of Proto-Bayes 101 thinking. What we have now is the same thing, but systematically improved over many iterations.
(that is, that it was known N years ago but didn’t take over the world)?
“Taking over the world” is a complex mix of effectiveness, popularity, luck, and cultural factors. You can see this a lot in the domain of programming languages. With ways of thinking it is even more difficult, because, as opposed to programming languages, most people don’t learn them explicitly and don’t evaluate them based on results or “features”.
No, as you can see from the number of objections, you are not too cynical. It’s closer to a sort of Proto-Bayes: stories like this show that that kind of thinking can produce wise solutions; Bayesian thinking as it is understood now is more refined.