For progress to be by accumulation and not by random walk, read great books
This recent blog post strikes me as an interesting instance of a common phenomenon. The phenomenon looks like the following: an intellectual, working within the assumption that the world is not mad (an assumption not generally found outside the Anglo-American Enlightenment intellectual tradition), notices that some feature of the world would only make sense if the world were mad. This intellectual responds by denouncing as silly one of the few features of this vale of tears to be, while not intelligently designed, at least structured by generalized evolution rather than by entropy. The key line in the post is:
“Conversely in all those disciplines where we have reliable quantitative measurements of progress (with the obvious exception of history) returning to the original works of past great thinkers is decidedly unhelpful.”
I agree with the above statement, and find that the post makes a compelling argument for it. My only caveat is that we essentially never have quantitative measures of progress. Even in physics, when one regards not the theory but the technique of actually doing physics, tools and modes of thought rise and fall for reasons of fashion, and once widespread techniques that remain useful fall into disuse.
Other important techniques, like the ones used to invent calculus in the first place, are never adequately articulated by those who use them and thus never come into general use. One might argue that Newton didn’t use any technique to invent calculus, just a very high IQ or some other unusual set of biological traits. This, however, doesn’t explain why a couple of people invented calculus at about the same time and place, especially given the low population of that time and place compared to the population of China over the many centuries when China was much more civilized than Europe.
It seems likely to me that in cases like the invention of calculus, looking at the use of such techniques can contribute to their development in at least crude form. By analogy, even the best descriptions of how to do martial arts are inadequate to provide expertise without practice, but experience watching experts fight is a valuable complement to training by the relatively inept. If one wants to know the Standard Model, sure, study it directly, but if you want to actually understand how to do the sorts of things that Newton did, you would be advised to read him, Feynman and yes, Plato too, as Plato also did things which contributed greatly to the development of thought.
Anyone who has ever had a serious intellectual following is worth some attention. Repeating errors is the default, so its valuable to look at ideas that were once taken seriously but are now recognized as errors. This is basically the converse of studying past thinkers to understand their techniques.
Outside of physics, the evidence for progress is far weaker. Many current economists think that today we need to turn back to Keynes to find the tools that he developed but which were later abandoned or simply never caught on. A careful reading of Adam Smith and of Ben Franklin reveals them to use tools which did catch on centuries after they published, such as economic models of population growth which would have predicted the “demographic transition” that surprised almost all demographers just recently. Likewise, much in Darwin is part of contemporary evolutionary theory but was virtually unknown by evolutionary biologists half a century ago.
As a practical matter, a psychologist who knows the work of William James as well as that of B.F. Skinner, or an economist who knows Hayek and Smith as well as Samuelson or Keynes, is always more impressive than one who knows only the ‘modern’ field as ‘modern’ was understood by the previous generation. Naive induction strongly suggests that, like all previous generations of social scientists, today’s social scientists who specialize in contemporary theories will be judged by the next generation, who will have an even more modern theory, to be inferior to their more eclectic peers. Ultimately one has to look at the empirical question of the relative per-capita intellectual impressiveness of people who study only condensations and people who study original works. To me, the latter looks much, much greater in most fields; OK, in every field that I can quickly think of except for astronomy.
To the eclectic scholar of scholarly madness, progress is real. This decade’s sludge contains a few gems that weren’t present in the sludge of any previous decade. To the person who assumes that fields like economics or psychology effectively condense the findings of previous generations as background assumptions to today’s work, however, progress means replacing one pile of sludge with another fashionable sludge-pile of similar quality. And to those few whom the stars bless with the coworkers of those who study stars? Well I have only looked at astronomy as through a telescope. I haven’t seen the details on the ground. That said, for them maybe, just maybe, I can endorse the initial link. But then again, who reads old books of astronomy?
More relevant: many textbooks are straightforwardly badly written, to the point that the thirty-year-old conference papers in their citations are actually more accurate. Another factor the classics-are-screened-off-by-moderns argument may miss is the degree to which poor work reduces the value of a reference.
Is it that hard to pick the good ones?
I’ve never had to pick—I couldn’t tell you. My professors have done a mostly good job so far.
Would this be a fair summary?
Old books can be useful, but for the old books in a field to be essential reading today, something must have gone badly wrong with the field. Some fields have indeed gone badly wrong.
Yep, except that I’m saying that virtually all fields have gone badly enough wrong for old books to be useful.
Do you have any specific examples in mind, or is this an expression of the general idea that academia is mad?
I mentioned biology and economics, philosophy and psychology. I could go farther if desired.
However, really, since academia promotes reading old books, I’m happy to place the probabilistic burden of the claim that academia is mad on it.
That doesn’t seem so for mathematics, physics, chemistry… the hard sciences in general. It may be an ornament to one’s education to read Euclid, Newton, and Einstein, but it is not necessary. The books that endure in these fields are the exceptionally good textbooks rather than the original works.
Biology is the hot science right now. Knowledge about evolution was going to be very superficial until genetics came along. Now that tools are available, we are learning all sorts of things at an amazing clip.
In order to recognize systemic errors of your own era, it is useful to return to a time before the current dominant paradigm was in effect. Even better if you can find first-hand accounts of when the current paradigm started becoming fashionable and was regarded as strange and alien.
I disagree with a few points:
1) Most people do not have an enormous amount of time to read, so the question is always whether one should NOT read something actual and read a classic instead.
2) People who do have lots of time end up reading both actual and classic material, which is probably why you find those who read the classics superior; it’s just that they are more into it.
3) Academics advise reading the classics, among other reasons, because they have been advised the same way, and chosen the same way, so choice-supportive bias plays a role there.
4) In addition, they prefer that their students read something they are already familiar with rather than something they themselves would have to become acquainted with in order to judge. It’s easier to judge Hegel than Bostrom.
5) Very motivated people tend to lose motivation when not allowed to have their own ideas, and with time become meme-copies of classic people; in part this happens because they are obligated to read Plato, Aristotle, etc., and end up losing faith in the intellectual world. High young achievers such as Feynman, Eliezer, Russell, Kripke, Wittgenstein and others take deep pride in having been outsiders in their studying methods.
6) To dodge the nearest mistakes: we are all mistakers, trying to fit the map more and more to the territory. If I read Plato, I’ll be reading an old scrapped map made with coal, in a rush, by someone with Alzheimer’s. If I read Feynman, I’m using satellite technology to provide a three-dimensional visualization that scales down to centimeter range.
Your usage of “actual” appears to be based on a false cognate.
I agree with you so much. Since I have limited time (like everyone), I should maximize learning per unit of time when pursuing learning. Some old classics are still worth reading (e.g., Plato’s Republic). Most, however, are not.
Even though a lot of crap books exist today due to unedited self-publishing and whatnot, one can make the case that, in general, there are better books out there for nearly any learning purpose than the original.
I’d argue that an original work has historical significance and that someone can learn something by analyzing it. On the other hand, one is advised to learn the initial concept from a modern textbook (e.g., modern evolutionary theory is much more advanced than what Darwin thought of).
Typo—you want “vale of tears”, not “veil”.
(I’m now on record with several comments like this one. Please let me know if they annoy. It’s a quirk of mine that egregious misspellings bias me toward thinking less of the writer and the writing, but it seems to be a widely shared one.)
Tip: you could PM the people about the error. No need for a permanent public record of trivial mistakes.
Yeah, when something is in the permanent public record, everybody notices...
Well at least this was to a different person. Changing default behaviors is incredibly difficult. Nicely done though :)
Or you could delete it after it’s been fixed.
Strictly speaking, “veil of tears” is not egregious, but I do generally like to be corrected when I make errors of that kind.
I think this post overstates the case a bit. My general impression is that the scientific method “wins” even in economics and that later works are better than earlier works.
Now it might be true that the average macro-economist of today understands less than Keynes did, but I’d be hard pressed to say that the best don’t understand more. Moreover, there are really great distillers. In macro, for example, Hicks distilled Keynes into something that I would consider more useful than the original.
Nonetheless, I think it is correct that someone should be reading the originals. If not, there is the propensity for a particular distiller to miss an important insight and then for everyone else to go on missing it.
What this says to me is that there should be rewards for re-discovery. Suppose that I read Adam Smith and rediscover something great. I should be rewarded for that just as much as if I had come up with the idea myself. After all, it has the same effect on the current state of knowledge. However, that will not happen.
Rediscovering is not as prestigious as discovering, because it is not as difficult and does not signal intellectual greatness.
I’m sure some people understand more than Keynes, both today and in his time, but can you name them? The understanding of the best unrecognized synthesizing geniuses of both today and Keynes’ day isn’t available. If you think that the most famous contemporary macro people know more than Keynes I won’t laugh, just observe that they are probably using that knowledge to make hedge fund managers rich, not sharing it with you.
Macro-economists are rightly subject to the criticism “if you’re so smart, why aren’t you rich?”
So the easy answers might be:
Ben Bernanke
Mark Gertler
Michael Woodford
Greg Mankiw
It’s not clear to me why macro-economists are rightly subject to such criticism. To me it’s like asking a mathematician, “If you’re so good at logical reasoning, why didn’t you create the next killer app?”
Understanding how the economy works and applying that knowledge to a particular task are completely different.
‘Designing the next killer app’ seems to rely heavily on predicting what people will want, which is many steps and a lot of knowledge away from logical reasoning.
There is a difference between rediscovering an old idea and adapting an old idea to a new situation. Simply rediscovering an old idea does not grant much prestige. Austrians are constantly coming across Hayek quotes and parading them around as definitive solutions to current problems. The problem is that these ideas are every bit as untestable as they were on the day Hayek wrote them. Confirmation bias leads Austrians to see them as Truth, while Keynesians remain skeptical.
When old ideas are adapted into a testable form, they confer a great deal of prestige. There are all sorts of anecdotes about this happening, such as Henry Ford taking the idea of an assembly line from Oldsmobile and mixing it with his observations from a meatpacking plant to create the moving assembly line. The difference is that this is a testable idea that creates immediate results.
So clearly adapting the new idea is useful.
However, it may also be the case that there is an old idea which if re-examined will be seen to be useful in and of itself.
The problem with the Austrians is that their ideas are being considered and they are being rejected. See Bryan Caplan’s Why I Am Not an Austrian Economist. (link seems not to be working)
An excerpt from the Amazon description of Plausible Reasoning: “This work might have been called “How to Become a Good Guesser”.”
Polya’s How to Solve It is a great little text he wrote for teachers and students of mathematics. Polya’s Mathematics and Plausible Reasoning is even better. There’s lots of great problem-solving techniques for non-mathematicians in there too. I recommend it to everyone, it’s the best example I’ve ever seen of someone writing down their techniques.
Edit: cleared up the reference of the quote, it originally looked like I was quoting the article, sorry about that!
I agree! There is a lot of good stuff on the Kad network, and eMule is a great client.
Question: I don’t see the quote you reply to in the original post—where did it come from?
It came from the Amazon description, actually. I’ve edited the comment to make that clear.
Thank you—I might have also moved it to the end of the comment, just to make it clear that it related to the material in the comment, not material in the post. (You were the one who brought up “Plausible Reasoning”, after all!)
That would be a non-explanation in any case. However high Newton’s IQ may have been, his brain was still operating by lawful processes within the physical universe. By the sheer improbability of inventing calculus by chance, there is bound to exist some general technique used by Newton for doing things like inventing calculus, for all that that technique may have been opaque to Newton’s own conscious introspection. Perhaps someone else may be able to formulate this technique in explicit generality (in the same way that Newton himself formulated the methods of calculus, already known in special cases, in explicit generality).
“High IQ” probably doesn’t mean more than something like high processing speed and copious amounts of RAM. The algorithms (at least in their essence) can still be run, less efficiently, on inferior hardware.
I dispute that this is a non-explanation. Besides referring to concepts whose existence has already been confirmed by other means, it makes a testable prediction about the degree to which abilities should run in genetic families as opposed to student lineages.
It’s a question of which data you’re interested in explaining. I’m more interested in understanding the mechanism of how Newton invented calculus than in explaining the (comparatively uninteresting) fact that most other people didn’t. (If you want to program an AI to invent calculus, crying “IQ!” isn’t going to help.)
[ETA: To be more explicit: the vague hypothesis that “Newton had a high IQ” adequately explains why, given that calculus was invented, Newton was among the two people to have invented it. But it does a much less effective job of explaining why it was invented in the first place, by anybody.]
(As it happens, most of the world’s intellectual power has in fact been spread via students rather than children.)
As for Newton’s exact mental processes, they are lost to history, and we are not going to get very specific theories about them. Newton can only give us an outside view of the circumstances of discovery. His most important finds were made alone in his private home and outside of academic institutions. Eliezer left school early himself. Perhaps a common thread?
Teachers select strongly for IQ among students when they have power to choose their students. This might be a more powerful aggregator of high-IQ individuals than transmission from parents to children. It might be the case that teachers don’t transmit any special powers to their students, but just like to affiliate with other high-IQ individuals, who then go on to do impressive things.
At a certain level of IQ (that of Yudkowsky, Newton) pedagogy becomes irrelevant and a child will teach itself, given the necessary resources. At this point, teachers are more likely to take credit for natural talent while doing nothing to aid it than they are to “transmit intellectual power.”
If academic lineages are due to an ability that teachers have to identify talent, this ability is extremely common and predicts achievement FAR better than IQ tests can. I am struck by the degree to which the financial world fails to identify talent with anything like similar reliability.
Also, the above theory is inconsistent with the extreme intellectual accomplishments of East Asians, and previously Jews, within European culture and failure of those same groups to produce similar intellectual accomplishments prior to such cultural admixture.
I remember reading that one of the most g-loaded tests was recognition time. I think the experiment involved flashing letters and timing how long it took to press the letter on a keyboard. The key correlate was “time until finger left the home keys”, which the authors interpreted as the moment you realized what the letter was.
I also heard a case that sensory memory lasts for a short and relatively constant time among humans, and that differences in cognitive ability were strongly related to the speed of pushing information into sensory memory. The greater the speed, the larger the concept that could be pushed in before key elements started to leak out.
This seems (to me) to be pretty unlikely to be the case. “High processing speed and copious amounts of RAM” would allow more efficient execution of a particular algorithm… but where does that algorithm come from in the first place? One notes that no one taught Newton the “algorithm for inventing calculus”. The true algorithm he used, as you pointed out, is likely to have been implemented at a lower level of thought than that of conscious deliberation; if he were still alive today and you asked him how he did it, he might shrug and answer, “I don’t know”, “It just seemed obvious”, or something along those lines. So where did the algorithm come from? I very much doubt that processing speed and RAM alone are enough to come up with a working algorithm good enough to invent calculus from scratch within a single human lifespan, no matter what substrate said algorithm is being run on. (If they were, so-called “AI-complete” problems such as natural language processing would plausibly be much easier to solve.) There is likely some additional aspect to intelligence (pattern-recognition, possibly?) that makes it possible for humans to engage in creative thinking of the sort Newton must have employed to invent calculus; to use Douglas Hofstadter’s terminology, “I-mode”, not “M-mode”. “High IQ”, then, would refer to not only increased processing speed and working memory, but also increased pattern-recognition skills. (Raven’s Progressive Matrices, anyone?)
I don’t think so. There are some conceptual leaps that people with inadequate intelligence will simply never be able to make, no matter how much time they put in. Part of the problem is they will lack the intuition and insight to know what type of problem or method of thought they are trying to invent. If there were a system for generating entirely new paradigms of useful thought we’d have already achieved a singularity of some kind I think.
Both Leibniz and Newton were giants among the early natural philosophers, or scientists. If not for them, it might have taken an Einstein or a Ramanujan to invent calculus; and if it had been Einstein, then instead of benefiting from the work he built on top of Newton and some of his successors, we would (most likely) have to wait for someone else to work out general relativity.
Human creativity isn’t magic. There IS such a system. Most likely we can codify a simpler and more efficient system. Hopefully so, as this will be required for FAI.
The fact that we haven’t coded it yet doesn’t mean it can’t be done. Once done, a below average thinker could in principle follow the algorithm.
Arguably they couldn’t.
An average thinker could surely be the computational substrate on which the algorithm was implemented, in the same way transistors implement the algorithm running on this computer. However, this would simply be a version of Searle’s Chinese room. The sentient being doing the thinking here would actually be an AI running really, really slowly through the application of computational rules on pencil and paper by some person.
Any rule you can follow to break down a problem or bypass a known cognitive bias makes you smarter. It IS such an algorithm. There doesn’t have to be another sentient being/AI that you’re running, that’s just proof of concept.
The point is that we do not have to rely on genetics to give us people who can come up with brilliant ideas. We can train normal people and certainly above-average people to think in ways that lead to brilliant ideas, even if more slowly or only in groups.
And we should be training the brilliant people in the same processes anyway.
What training methods are you thinking of?
For the most part, we don’t have them yet. To a small degree they are some of the things we try to work out here. To a larger degree, science in general qualifies. (Look at the difference in performance between the most brilliant people pre-science, and the most brilliant people post-science. I see no reason to assume that normal people don’t enjoy the same multiplier. At least some sub-brilliant people must have made brilliant discoveries because they used science.)
The potential future methods are somewhere in between the strategy of running an AI on pencil and paper, and giving up on making yourself more creative/rational.
Thinking at the Edge might be useful.
It grew out of Focusing, a method based on observation of who got value from therapy and who didn’t. Those who did all had a pattern of pausing, paying close attention to how they felt, spending some time searching for the exact words which satisfied them to express how they felt, and then saying them. I haven’t seen any discussion of art or music therapy in this context.
Thinking at the Edge applies the method of close observation and expression of subtle feelings to cognition.
The writing at that link is confusing. It’s too… “dense”, let’s say, and reminds me of attempts to sound profound by deliberately being hard to understand rather than actually being profound—what others may have called using too many “big words”. I don’t have a good way of describing the feeling of reading something hard to understand, and, when something is hard to understand, it’s also hard to know whether it’s worth putting in the effort to try to understand it or whether it’s just gibberish. Am I making sense here?
You’re making sense. I’m sure Focusing is legitimate, and TAE is the same process I use for accessing new ideas. The bit I quoted sounds like TAE is incredibly valuable for people who’ve gotten false ideas about thinking from school and/or mainstream society.
However, in spite of all this, I find the TAE site unreadable, and I can handle moderately difficult text.
I’m not sure what the problem is. I don’t think it’s the vocabulary—it might be that there’s too much philosophy inserted in the wrong places, but this is only a guess.
“Philosophy” should have been in scare quotes.
Can you be more specific about that?
Yeah, there is definitely something very wrong with the writing style on that site.
Not if you consider what Karl mentions above. The problem is that the amount of thought that you can hold in your head at one time is finite and differs significantly from one person to another.
In other words: algorithms need working memory, which is not boundless.
Well first off, I was assuming pencil and paper were allowable augmentations.
I would be surprised if it were the case that our brain process that finds big insights with N ‘bits of working memory’ couldn’t be serialized to find the same big insights as a sequence of small insights produced by a brain running a similar process but with only N/2 available ‘bits’.
Imagine yourself studying a 4-megapixel digital image only by looking at it one pixel at a time. Yes, you can look at it, and then even write down what color it was. Later you can refer back to this list and see what color a particular pixel was. It’s hard to remember more than a few dozen at once, though, so how will you ever have a complete picture of it in your head?
I could find and write down a set of instructions that would allow you to determine if there was a face in the image. If you were immortal and I were smarter, I could write down a set of instructions that might enable you to derive the physics of the photographed universe given a few frames.
At this level it’s like the Chinese room.
But I don’t think the ratio between Einstein’s working memory and a normal person’s working memory is 100,000 to 1.
It would be EASY to make instructions to find faces even if someone could only see and remember 1/16th of the image at a time. You get tons of image processing for free. “Is there a dark circle surrounded by a color?”
A human runnable algorithm to turn data into concepts would be different in structure, but not in kind.
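The pixel-at-a-time thought experiment above can be sketched concretely. Below is a minimal, hypothetical illustration (all function names and the dark-spot rule are my own invention, not anything from the discussion): the image is streamed one pixel at a time, the only "working memory" retained is a tiny coarse grid of running brightness averages, and yet that summary suffices to answer a global question like "is there a dark region surrounded by lighter surroundings?"

```python
# Sketch: answer a global question about a large image while examining it
# one pixel at a time, with working memory fixed at GRID*GRID cells.
GRID = 4  # the entire "working memory" is a GRID x GRID summary


def summarize(pixels, width, height):
    """Stream brightness values (0-255, row-major) into a coarse grid
    of per-cell average brightness. Memory use is O(GRID^2), not O(pixels)."""
    sums = [[0.0] * GRID for _ in range(GRID)]
    counts = [[0] * GRID for _ in range(GRID)]
    for i, p in enumerate(pixels):          # one pixel at a time
        x, y = i % width, i // width
        gx, gy = x * GRID // width, y * GRID // height
        sums[gy][gx] += p
        counts[gy][gx] += 1
    return [[sums[y][x] / counts[y][x] for x in range(GRID)]
            for y in range(GRID)]


def has_dark_spot(grid, margin=50):
    """True if some interior cell is much darker than all 8 neighbors --
    a crude stand-in for 'a dark circle surrounded by a color'."""
    for y in range(1, GRID - 1):
        for x in range(1, GRID - 1):
            neighbors = [grid[y + dy][x + dx]
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                         if (dy, dx) != (0, 0)]
            if grid[y][x] + margin < min(neighbors):
                return True
    return False
```

The point is not the (deliberately crude) detector but the shape of the procedure: a person with a pencil could execute these steps over millions of pixels while ever holding only sixteen numbers in mind, which is the sense in which a large-working-memory insight can be serialized into many small ones.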
“IQ or some other unusual set of biological traits” implies that the unusual features of the cognitive process might be built upon unusual features of a biological process, and fairly likely to emerge given that unusual substrate. I then argued that this was an unlikely interpretation.
This seems like a strikingly accurate definition of IQ, although I agree with dxu that pattern recognition and/or other unusual abilities (set on solving logical problems no matter the context) also are part of it. However, the methods Newton used to come up with, for example, calculus are likely not the ones that can be found inside the brain of a newborn. He probably used a lot of creative thinking to come up with ideas that helped him do that.
Can you say more about what you mean by this? An uncharitable reading is absurd on the face of it (if the methods Newton used weren’t to be found inside a human brain, how exactly did Newton use them?) but I can’t quite work out a coherent charitable reading.
Err, I meant that I don’t find it likely that the human brain by itself has algorithms that are made for inventing calculus. He probably developed that thinking by other means. It was unfortunate of me to forget to spell out that last part.
Well, right, but what I’m trying to understand is what “other means” you have in mind, and what you’re trying to contrast them with, and how you think he went about developing them. As it stands, it sounds like you’re trying to suggest that creative thinking isn’t a natural function of the human mind… which, again, I assume is not what you mean, but I’m at a loss to understand what you do mean.
What I meant is simply: 1) IQ and creative thinking are not the same thing; the two concepts are not strongly connected to one another. The brain operates differently when using stuff that requires high “IQ” and when “thinking creatively” (algorithms related to both concepts still reside inside the brain, of course). 2) I think that Newton used both creative thinking and high IQ, and perhaps some other part that the brain is equipped with by default, in order to develop his thinking in a way that allowed for the invention of calculus.
Ah! OK, this helps clarify. Thanks.
For my own part, I agree that the cognitive processes underlying what we observe when we measure IQ aren’t the same as the ones we observe when we evaluate creative thinking, though they certainly overlap significantly. And, sure, it seems likely that developing calculus requires both of those sets.
Good we sorted it out :)
I think that by “creative thinking” Okeymaker is referring to something similar to what I describe in this comment, in that Newton employed more than simply “high processing speed and copious amounts of RAM” when he developed calculus.
Honestly, I grow more confused rather than less.
So, yes, of course there’s more going on when thinking systems think than “processing speed and RAM.” Of course there are various cognitive processes engaging with input in various ways.
If I’m following, you’re suggesting that the distinction being introduced here is between two different sets of cognitive processes, one of which (call it A) is understood as somehow more natural or innate or intrinsic to the human mind than the other (call it B), and creative thinking is part of B. And the claim is that Newton relied not only on A, but also (and importantly) on B to invent calculus.
Well, OK. I mean, sure, we can divide cognitive processes up into categories however we wish.
I guess what I’m failing to understand is:
a) what observable traits of cognitive processes sort them into A or B (or both or neither)? Like… is identifying words that rhyme “natural”? Is flirting with someone attractive? Is identifying the number of degrees in the unmeasured angles of an equilateral triangle? How would we answer these questions?
b) what is the benefit of having sorted cognitive processes into these categories?
EDIT: Ah. Okeymaker’s most recent comment has helped clarify matters, in that they are no longer talking about natural and unnatural cognitive processes at all, but merely processes underlying “IQ” vs “creative thinking.” That I understand.
No, I’m not suggesting that. That may be what Okeymaker is suggesting; I’m not quite clear on his/her distinction either. What I was originally addressing, however, was komponisto’s assertion that “high IQ” is merely “high processing speed and copious amounts of RAM”, which I denied, pointing out that “high processing speed and copious amounts of RAM” alone would surely not have been enough to invent calculus, and that “creative thinking” (whatever that means) is required as well. In essence, I was arguing that “high IQ” should be defined as more than simply “high processing speed and copious amounts of RAM”, but should include some tertiary or possibly even quaternary component to account for thinking of the sort Newton must have performed to invent calculus. This suggested definition of IQ seems more reasonable to me; after all, if IQ were simply defined as “high processing speed and copious amounts of RAM”, I doubt researchers would have had so much trouble testing for it. Furthermore, it’s difficult to imagine tests like Raven’s Progressive Matrices (which are often used in IQ testing) being completed by dint of sheer processing speed and RAM.
Note that the above paragraph contains no mention of the words “natural”, “innate”, or any synonyms. The distinction between “natural” thinking and “synthetic” (I guess that would be the word? I was trying to find a good antonym for “natural”) thinking was not what I was trying to get at with my original comment; indeed, I suspect that the concept of such a distinction may not even be coherent. Furthermore, conditional on such a distinction existing, I would not sort “creative thinking” into the “synthetic” category of thinking; as I noted in my original comment, no one taught Newton the algorithm he used to invent calculus. It was probably opaque even to his own conscious introspection, probably taking the form of a brilliant flash of insight or something like that, after which he just “knew” the answer, without knowing how he “knew”. This sort of thinking, I would say, is so obviously spontaneous and untaught that I would not hesitate to classify it as “natural”—if, that is, the concept is indeed coherent.
It sounds as though you may be confused because you have been considering Okeymaker’s and my positions to be one and the same. In light of this, I think I should clarify that I simply offered my comment as a potential explanation of what Okeymaker meant by “creative thinking”; no insight was meant to be offered on his/her distinction between “natural” thinking and “synthetic” thinking.
This shows that you didn’t understand what I was arguing, because you are in fact agreeing with me.
The structure of my argument was:
(1) People say that high IQ is the reason Newton invented calculus.
(2) However, high IQ is just high processing speed and copious amounts of RAM.
(3) High processing speed and copious amounts of RAM don’t themselves suffice to invent calculus.
(4) Therefore, “high IQ” is not a good explanation of why Newton invented calculus.
I understood what you were saying; I just disagreed with your definition of “high IQ”. Put another way: I modus tollens’d your modus ponens.
EDIT: It turns out that Quill_McGee already expressed what I was trying to say, and probably better than I could have myself. So yeah—what he/she said.
Whereas, if I am interpreting them correctly, what they are saying is
(1) People say that high IQ is the reason Newton invented calculus.
(2) High processing speed and copious amounts of RAM don’t themselves suffice to invent calculus.
(3) Therefore, “High processing speed and copious amounts of RAM” is not a good description of high IQ.
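One way to lay the two readings side by side (my formalization; no commenter used this notation): write D for “high IQ is just high processing speed and copious RAM”, S for “processing speed and RAM alone suffice to invent calculus”, and E for “high IQ explains Newton’s invention of calculus”. Both sides share a single conditional premise and disagree only about which of its components to reject:

```latex
% Shared premise: if high IQ is just speed+RAM (D) and speed+RAM do not
% suffice to invent calculus (\neg S), then high IQ does not explain
% the invention of calculus (\neg E).
\[
  (D \wedge \neg S) \rightarrow \neg E
\]
% The original argument affirms the antecedent (modus ponens); the reply
% keeps E and \neg S, and rejects D instead (modus tollens):
\[
\begin{aligned}
  \text{ponens: }  & D,\ \neg S \;\vdash\; \neg E \\
  \text{tollens: } & E,\ \neg S \;\vdash\; \neg D
\end{aligned}
\]
```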
Personally, I’d say that ‘high IQ’ is probably most useful when just used to refer to whatever it is that enables people to do stuff like invent calculus, and that ‘working memory’ already suffices for RAM, and that there probably should be a term for ‘high processing speed’ but I do not know what it is/should be.
EDIT: that is, I think that Newton scored well along some metric that immensely increased his chances of inventing calculus, a metric that extends beyond RAM and processing speed but that I would nonetheless refer to as ‘high IQ’.
tabooing IQ would almost certainly be helpful here.
I apologize for being unclear; when I wrote “you’re suggesting that the distinction being introduced here” I meant introduced by Okeymaker, whose position is what I was trying to understand in the first place (and I believe I now do), and which I’d assumed (incorrectly) that you were talking about as well.
Deleted; see my other comment in response to your question.
Updated link:
Reading Originals
I read old books of astronomy and I found it very helpful for understanding new books of astronomy.
I read old books on philosophy and found they are obsolete when it comes to logic and epistemology.
The economic growth of the last few decades suggests that some people, somewhere, are gradually getting more things right more often. Those genomes aren’t sequencing themselves. Or have I misunderstood you?
Specific technologies arise and fall. Capital accumulates and depreciates. Governments make up numbers. Physics touches everything, especially through solid state and semiconductor physics in recent years. Finally, as the post emphasized, ideas are a sort of capital, and accumulate over time, even if new ideas are no better than old ones, so long as the old ones aren’t thrown out.
The methods available to test these various hypotheses seem to have more of an impact on their prominence than any objective measure of truth does. Classical mechanics conformed to observations and could be confirmed by various tests. This led to widespread adoption until observations were made that did not fit the theory. Often several theories are available and cover the various possible outcomes, all justified by the intuition offered by the current, yet untestable, theories.
This is where the social sciences run into difficulty. Predictions made by the social sciences are confirmed or disproved by the methods of verification available at the time the predictions are made. These methods of verification evolve at a slower rate than the theories, and are always limited by the dynamic nature of human actors in large groups. Even if we could determine the utility function of everyone in the world, by the time that utility function had been applied and used to test various social-science theories, it would already have changed.
It is unlikely that the LHC will produce results not yet predicted by various physicists. When it does produce results, some theories will be proved and some will be disproved. The confirmation of the correct theory, however, is more valuable than 100 potentially correct yet untestable theories.
Mathematics has evolved quickly for the same reasons that language has evolved: it is testable in its immediate ability to express and be understood. It has a very clean and objective measurement of success.
Bingo!
Economics suffers from being the art of the royal economic advisor. Almost all radical economic advice has the problem that only a very strong sovereign would be able to implement it. In real life, almost every economic measure would be half-diluted by the time the rubber hit the road.
That doesn’t mean that the field has no advances. One might have to push and prod around a little to get progress in the direction sought.
For advancing the art of value creation, one can easily identify insights from economics that can be used.
This is what I call the naive history of science. In this view science progresses inevitably because it relies on a recipe for doing good science (the scientific method). You could probably find this in a physics textbook, but these kinds of stories aren’t taken seriously by historians of science.
Classical mechanics made incorrect predictions from the get-go (for instance it couldn’t explain the observed motion of the moon), in addition to positing occult forces which many natural philosophers (especially on The Continent) believed were a return to the natural magic tradition. The disagreement over classical mechanics was not a simple problem of applying a method. There were deep metaphysical commitments that explain why some accepted Newton’s theories and others rejected them. Theories “fitting” or “not fitting” observation cannot explain the history of physics (let alone the history of science).
Right, but they’re at least entangled with it, which is what separates scientific disciplines from their predecessors. I completely agree that the history of science is more messy, politics-laden, and irrational than the naive/textbook view acknowledges, but it only takes a weak sustained current (in this case, the fact that the results of experiments sometimes shocked and puzzled scientists) to overcome random noise in time.
um, I think you’re missing the overall point of his post; he states that we sometimes have accurate theories but our box of tools (mathematical techniques) is as yet underdeveloped to make full sense of them.
it might be the case that he’s taking a naive view etc, but from your post it appears that has little to no significance to his overall point.
also, to any who downvoted, please refrain from down-voting without attempting to explain your disagreement. it’s obviously not good practice.
No, up and down votes are symmetrical. Both should usually be done without explanation.
I disagree; an explanation of a downvote is a lot more helpful to the author than an explanation of an upvote (in addition to the fact that it often mitigates status-based anger), and thus the symmetry is broken. h-H is perhaps exaggerating this principle, but it’s perfectly legitimate to say “that comment looked OK to me, what are you seeing?”
seconded, and well put.
Strong second.
Up and down votes should not be symmetrical. The space of upvote-worthy comments is much smaller than the space of downvote-worthy comments, so a down-vote, by itself, conveys less information.
In the space of comments actually posted, the reverse is the case. What class of potential comments did you have in mind?
I had in mind the space of comments that would be posted if commenters received no feedback on what kinds of comments were appropriate.
ETA: My point was that there are a lot more ways for a comment to go wrong than to go right. The region of good comments is a small target in commentspace. Given only that a comment was downvoted, it could be anywhere in a vast wasteland of bad possible comments. That’s the case even if you condition on the comment’s having appeared on LW.
Of course, sometimes one knows exactly why a comment was downvoted. But, if you’re the author, and you hadn’t expected the downvote, it’s probably not so clear why you received one. In general, you can see that the comment must have been in a relatively small region within bad-comment-land. But that’s small relative to all of bad-comment-land, so even your “small” region is probably still big compared to all of good-comment-land.
Agree, and add that I often prefer not to downvote in cases where I have expressed disagreement, simply because it reduces resentment.
With a somewhat valuable but straightforward comment, an upvote with no further discussion is optimal, because both the author and the readers understand why it’s good.
With a worthless but ingenuously written comment, the readers gain nothing from further discussion, but commentary helps the author to more easily discover his error. Do what your decision theory requires regarding the good of the many vs. the good of the few.
This somewhat echoes The Value of Nature and Old Books. Sometimes, older books can be quite effective at explaining things that do not depend on the latest research—the books by e.g. Knuth, Feynman, and Abelson/Sussman are good examples, and I would heartily recommend them, even if there are newer works on similar subjects.
I’d like to quote this argument from here:
I disagree with the statement that evolutionary biology isn’t making clear progress. I’m guessing you’re talking about punctuated equilibrium, which was part of Darwin’s On the Origin of Species (albeit not by that name), deemphasized by later evolutionary biologists, and later assertively brought back by Gould et al. However, this hypothesis is only vacillating in and out of ‘style’ because it 1) has scientific merit and 2) is difficult to prove. Other aspects of Darwin’s theory have been easier to validate or disprove and so have been retained or decisively refuted over the years. On the whole modern evolutionists have a vastly more complete understanding of their subject than Darwin did. The entire new fields of genetics and molecular biology have opened up since Darwin’s day, expanding on Darwin’s theory as well as explaining the mechanics that underlie it.
Who says derivative works are always condensations? To continue with the Darwin example, On the Origin of Species was a seminal work, to be sure, but it doesn’t explain many necessary modern concepts, such as sexual selection, kin selection, silent mutations, genetic drift, etc. If you are an evolutionary biologist then you should clearly read On the Origin of Species, among other things. But if you are an interested amateur and only have time to read one book then you should read a modern evolution textbook, in the same way you would read a modern medical textbook instead of one written in the 19th century. The old texts would contain some discredited concepts and be missing a lot of substantiated ones.
I don’t just mean punctuated equilibrium.
Darwin wrote more than Origin and did talk about sexual selection.
I agree that an interested amateur should read the modern textbook over Origin. It’s not THAT good. If you can only read one book in a discipline it should pretty much always be a textbook unless the discipline is totally dysfunctional.
One book in a discipline?
Yes, you’re right. Thanks for the correction.
The bulk of my point still stands, though. Evolutionary biology has made clear progress, especially since molecular biology took off in the 50′s. Simplistically speaking, evolution is composed of mutation and natural selection, the latter of which was developed impressively by Darwin. But that was only half the story, so it was left to later biologists to complete the picture.
Progress in the last 50 years is a non sequitur response to a claim that the situation was dire 50 years ago. At least, if you claim to disagree.
Unless I misunderstand him, his claim is that there hasn’t been clear progress in the field since Darwin. My position is that there has been clear progress in the last 60 years. I concede that progress before that was slim.
Still, if the field actually regressed between Darwin and the mid-20th century (by today’s standards) without the evolutionary biologists of the time being aware of that fact, that’s evidence that progress in evolutionary biology is not necessarily clear, and reason enough to at least consider the possibility that the field might have regressed in other ways that we are not aware of.
I said progress was stagnant, not regressing. All of Darwin’s books have always been widely available and read, so no information was ever lost. Some of Darwin’s conjectures were deemphasized, and the biologists of the time were right to do so; they didn’t yet have the techniques to prove or disprove them, and mere conjecture should never be the foundation of a scientific discipline. They weren’t central to the theory anyway, and even Darwin considered them just speculation.
With modern technical know-how, such as radiometric dating and molecular clocks, they’ve discovered evidence supporting some of Darwin’s more difficult-to-prove ideas, such as punctuated equilibrium. Darwin was an exceedingly smart man, so it’s no surprise that some of his idle speculation turned out to be accurate. But that’s a far cry from modern evolutionists “catching up” with Darwin.
I’m not necessarily trying to convince you of anything, just interested. Assuming that you are convinced that Bayesian statistics is the correct way to treat uncertainty, would you say that the field of statistics never regressed in that respect because the works of Bayes and Laplace were always around?
That’s a pretty good argument for reading the work of the old masters though, isn’t it? (Not that you voiced any disagreement with that)
You have me at a disadvantage because I don’t know much about the history of statistics, but here is my view. Assuming the core principles of Bayesian statistics were demonstrably effective, if they were widely accepted and then later rejected or neglected for whatever reason, then that would be regression. If Bayes’ and Laplace’s methods never caught on at all until a long time later, and there were no other significant advances in the field, then that would be stagnation.
By these (admittedly my own) definitions, evolutionary biology didn’t regress after Darwin because the only parts of his theory that were neglected were the ones that weren’t yet provable. It’s as if, theoretically, Bayes came up with a variety of statistical methods, most of which were clearly effective but others were of dubious utility. It wouldn’t count as a regression, at least to me, if later generations dropped the dubious methods but kept the useful ones.
I apologize, I haven’t made my position clear about this. I think that experts should read the classics as well as modern works in their field. The interested amateur, though, should skip over the classics and go directly to modern thought, unless he or she has more free time than most.
Is this actually true?
Perhaps he means something like what Keynes said here.
I have to admit that personally I don’t see a golden thread in the post. What was the core argument? As far as I understood it, the post reasons about the “relative per-capita intellectual impressiveness of people who study only condensations and people who study original works”.
Which is… to be honest, just a mockup. Who cares about “impressiveness” while studying? Why should one optimize for “impressiveness” in one’s study?
Personally I think that original works carry a lot of baggage. For example, the language is older, the theories are sometimes incredibly outdated, etc. It’s fun to read about this “newly discovered oil” and that “this black oil will never run out!”, but tbh not all books age the same. Plato ages well, but 500-year-old books on eye surgery are probably completely useless by now.
So I’d argue that there’s value in the “modern, condensed” form. Some expert which tells me “this obscure line has the meaning of x. Don’t mistake it for an y”.
Link to infiniteinjury.org seems to be down.
(Archived.)
The purpose of the comment was more in the sense of fixing the article… I am new to LW. Posts can be edited, right?
I’m not sure the OP pays that much attention to Less Wrong these days? The mods could do it if they wanted (or write a broken-link checker??).
It is not even necessary to write one; such tools already exist (search for “broken link” on that page).
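For anyone curious what a minimal version would involve, here is a sketch in Python using only the standard library (the URL in the demo is taken from the thread above; the function names are my own, and the existing tools mentioned are far more robust than this):

```python
# Minimal broken-link-checker sketch: extract href targets from HTML,
# then report any that fail to resolve over the network.
from html.parser import HTMLParser
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError


class LinkExtractor(HTMLParser):
    """Collects the href attribute of every <a> tag encountered."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def extract_links(html):
    """Return all <a href="..."> targets found in an HTML string."""
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links


def is_broken(url, timeout=10):
    """True if the URL cannot be fetched or returns a >= 400 status.

    urlopen already raises HTTPError for 4xx/5xx responses, so the
    explicit status check is belt-and-braces.
    """
    try:
        req = Request(url, method="HEAD")
        with urlopen(req, timeout=timeout) as resp:
            return resp.status >= 400
    except (HTTPError, URLError, ValueError):
        return True


if __name__ == "__main__":
    page = '<p><a href="http://infiniteinjury.org/">post</a></p>'
    print(extract_links(page))  # -> ['http://infiniteinjury.org/']
```

Only `extract_links` runs offline; `is_broken` needs network access, so a real checker would crawl each post and loop `is_broken` over the extracted links.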
Books are written sometimes about “The Great Ideas Of The Past”, sometimes about that great thinker of former times, and the public reads these books written by someone else, but not the works of “The Great Man or Woman” himself/herself.
There is nothing that so greatly recreates the mind as the works of the old classic writers. Directly one has been taken up, even if it is only for half-an-hour, one feels as quickly refreshed, relieved, purified, elevated, and strengthened as if one had refreshed oneself at a mountain stream.
One can never read too little of bad, or too much of good books: bad books are intellectual poison; they destroy the mind.
In order to read what is good one must make it a condition never to read what is bad; for life is short, and both time and strength limited.
It would be a good thing to buy books if one could also buy the time to read them; but one usually confuses the purchase of books with the acquisition of their contents. To desire that a person should retain everything he has ever read, is the same as wishing him/her to retain in his stomach all that he has ever eaten. He has been bodily nourished on what he has eaten, and mentally on what he has read, and through them become what he is.
It is because people will only read what is the newest instead of what is the best, that writers remain in the narrow circle of prevailing ideas, and that we sometimes feel that the age sinks deeper and deeper in its own mire.
Putting China on BLAST!
“A good analysis book doesn’t summarize Newton; it digests his insights and presents them as part of a grander theory.”
Exactly. And I want to be in charge of doing that for myself, so I suppose I’ll continue to read original sources.
In that case, it will take you much longer to learn physics than it would if you’d just read a standard textbook. You will come out with extra knowledge, but it will be knowledge of history, not physics.
That’s too strong a claim; working it out for oneself from the intuitions available at the time probably makes good experience for a scientist, and it’s too bad we lack it. That being said, it will in fact take a lot more effort for that one benefit, and we should see if there’s a Third Alternative between being spoon-fed conclusions with tidy derivations, and trying to recapitulate the entire history of physics.
Finally, I just want to say: surely you don’t disagree that there is something different about what happens in physics versus what happens in astrology, do you? I don’t care about deep principled distinctions here; just at a purely practical level, physics (and the other sciences) lets us make strictly more things now than it did 10, 50, or 100 years ago.
The notion of progress I had in mind is much much weaker than yours. I just mean that sometimes we discover shit that we find very useful (transistor technology) and that the useful consequences of scientific discoveries (be it new theories or just accurate measurements of molecular weight) are rarely lost.
In other words, all I’m saying is that if you wanted nifty fun gadgets to play with, or technologies to save your sick wife, or the like, and you had the chance to pluck 10 great scientists from any time in history to help you out during development, you’d pick them from the future, not the past. That is, physicists can now give engineers theories that let them build both chips and buildings, whereas before they only gave them building theories.
Ultimately, however, the aim of my post was to establish that there isn’t some kind of important knowledge best gained through the reading of original sources. The target of my argument was the frequently given argument that somehow spurning these great original works puts you at some kind of ‘objective’ disadvantage in terms of learning/knowledge relative to those who do. Sure these are fuzzy terms and I think most of them aren’t even really meaningful but the idea the advocates of this position have in mind is that somehow reading literature classics and other ‘great’ originals somehow helps you make intellectual contributions more than reading more recent works instead.
Given that new ‘great’ originals continue to be published, albeit quite slowly, one can immediately conclude that either we are making progress or there is no reason to believe reading great originals gives you a boost (i.e. helps you make progress). After all, if we aren’t making progress, then these new books can’t give later generations a boost (that would be progress); hence one can’t justifiably claim that reading great originals is an aid to academic/intellectual progress.
Given that my claim is an entirely negative one, I need not make any assumptions as you allege. Rather, I’m just offering a reductio of a position that you are simply dismissing from the start.
Very weakly related to the post: I surprised Eliezer Yudkowsky last October with a quote showing off Galileo’s rationality.
You make decent points about the lack of evidence for ‘progress’ in methodology. I think it’s quite possible that we don’t significantly improve the process by which we go from the current best theory to its successor. Of course, to make sense of this notion you would need a more precise notion of what it means to have a better methodology for generating scientific theories. The first natural way to do this might be to somehow try to measure the percent of the physical world we can explain/predict from initial conditions (many complications with random events etc.), but that yields a decreasing rate of methodological progress as a matter of pure mathematics.
If f(t) is a bounded, monotonically increasing, differentiable function, then f’(t) (f prime) must come arbitrarily close to 0 as t goes to infinity (strictly, the liminf of f’ is 0; the full limit need not exist, but the rate of increase cannot stay bounded away from zero). So if f is some measure of the percent of the world physics has explained, then its rate of increase eventually has to approach 0, since there is only so much world to explain.
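A standard way to make this precise (my sketch, not in the original comment): the total accumulated increase is bounded, so the rate of increase must keep dipping toward zero.

```latex
% f is increasing and bounded above by some M, so f' >= 0 and the
% fundamental theorem of calculus bounds the total accumulated increase:
\[
  \int_0^{T} f'(t)\,dt \;=\; f(T) - f(0) \;\le\; M - f(0)
  \quad\text{for every } T,
\]
% so the improper integral \int_0^\infty f'(t)\,dt converges. A
% nonnegative function with a convergent integral cannot stay above any
% fixed \epsilon > 0 forever, hence
\[
  \liminf_{t\to\infty} f'(t) \;=\; 0,
\]
% even though the full limit of f'(t) need not exist.
```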
More generally saying that a particular scientific methodology works or works better than another is equivalent to asserting that induction works and works better with respect to such and such measure of simplicity. All you can do is assume your notion of simplicity gives rise to a good scientific methodology (you can’t gain inductive evidence for it) so it doesn’t really make sense to measure our progress in scientific methodology.
So if I don’t believe in the idea of progress in the scientific method what did I mean by progress in my post? I put that in another comment since I felt it better to divide them up.