By the way, procrastinating on the internet may be the #1 factor delaying the Singularity. Before we build the first machine capable of programming better machines, we may build a dozen machines capable of distracting us so much that we never accomplish anything beyond that point.
People need cool names to treat ideas seriously, so let’s call this apex of human invention “Procrastinarity”. Formally: the better tools people can make, the more distraction those tools provide, so there is a limit for a human civilization where there is so much distraction that no one is able to focus on making better tools. (More precisely: even if some individuals can still focus at that point, they will not find enough support, friends, mentors, etc., so without the necessary scientific infrastructure they cannot meaningfully contribute to human progress.) This point is called Procrastinarity, and all real human progress stops there. A natural disaster may eventually knock humanity back to a pre-Procrastinarity level, but if humans overcome those problems, they will just reach another Procrastinarity phase. I’d give 50% probability to reaching the first Procrastinarity within the next 30 years.
There’s another such curve, incidentally—I’ve been reading up on scientific careers, and there’s solid-looking evidence that a modern scientist makes his best discoveries about a decade later in life than scientists did in the early 1900s. This is a problem because productivity drops off in one’s 40s and is pretty small in the 50s and later, and that decline has remained constant (despite the small improvements in longevity over the 20th century).
So if your discoveries only really begin in your late 20s and you face a deadline in your 40s, and each century we lose a decade, this suggests that within 2 centuries most of a scientist’s career will be spent being trained, learning, helping out on other experiments, and in general just catching up!
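As a rough back-of-the-envelope check (the exact ages here are my own illustrative assumptions, not figures from the studies): if independent work starts in the late 20s, major-discovery productivity falls off around the mid-40s, and the starting age slips a decade per century, the productive window closes within about two centuries:

    # Toy arithmetic for the shrinking career window; all ages are assumptions.
    start_age = 28          # assumed age at which independent discoveries begin today
    decline_age = 45        # assumed age at which major-discovery productivity drops off
    shift_per_century = 10  # assumed delay added to the starting age each century

    for century in range(3):
        start = start_age + shift_per_century * century
        window = max(0, decline_age - start)
        print(f"+{century * 100} years: start at {start}, ~{window} productive years left")
    # +0 years: start at 28, ~17 productive years left
    # +100 years: start at 38, ~7 productive years left
    # +200 years: start at 48, ~0 productive years left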
We might call this the PhDalarity—the rate at which graduate and post-graduate experience is needed before one can make a major discovery.
As a former teacher I have noticed some unfortunate trends in education (it may differ between countries), namely that it seems to be slowing down. On one end there is public pressure to make school easier for small children, like not giving them grades in the first class. On the other end there is pressure to send everyone to university, both for signalling (by having more people in universities we can pretend to be smart, even if the price is dumbing down university education) and for reducing unemployment (more people in schools, fewer people in the unemployment registry).
While I generally approve of a friendlier environment for small children and more opportunities to get higher education, the result seems to be shifting education to a later age. Students learn less in high school (some people claim otherwise, but e.g. the math curriculum has been cut back in recent decades), and many people think that’s OK, because they can still learn the necessary things at university, can’t they? So the result is a few “child prodigies” and a majority of students who are kept in school only for legal or financial reasons.
Yeah, people live longer and prolong their childhoods, but their peak productivity does not shift accordingly. We feel there is enough time, but that’s because most people underestimate how much there is to learn.
OTOH there is a saying—just learn where and how to get the information you need.
And there’s a lot of truth in that. It gets easier every day to learn something (anything) when you need it.
The market value of knowledge could easily be grossly overestimated.
It’s easy to learn something when you need it… if the inferential distance is short. The problem is, it often isn’t. The second problem: it is easy to find information, but it is much harder to separate right information from wrong without background knowledge. The third problem: the usefulness of some things becomes obvious only after a person learns them.
I have seen smart people try to jump across a large informational gap and fail. For example, there are many people who taught themselves programming from internet tutorials and experiments. They can do many impressive things, only to later fail at something rather easy, because they have no concept of “finite-state automata” or “context-free grammars” or the “halting problem”—things that may seem like useless academic knowledge at university, but which let you quickly classify problems into categories with already known, rather easy solutions (or, in the last case, known to be generally unsolvable). Lack of proper abstractions slows their learning; they invent their own bad analogies. In theory, there are enough materials online for them to learn everything properly, but that would take a lot of time and someone’s guidance. And that’s exactly what schools are for: they select materials, offer guidance, and connect you with other people studying the same topic.
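To make that concrete with a hypothetical example of my own (not something from the original comment): someone who recognizes “check that brackets are balanced” as a context-free rather than regular problem reaches immediately for a counter or stack, instead of fighting with regular expressions that provably cannot solve it:

    # Balanced parentheses: a textbook context-free problem. A counter (a
    # degenerate stack) solves it in a few lines; no plain regular expression can.
    def balanced(text: str) -> bool:
        depth = 0
        for ch in text:
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
                if depth < 0:        # a closing bracket with nothing to close
                    return False
        return depth == 0

    assert balanced("(a(b)c)")
    assert not balanced("(()")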
In my opinion, a good “general education” is one that makes inferential distances shorter on average. Mathematics is very important, because it takes good basic knowledge to understand statistics, and without statistics you can’t understand scientific results in many fields. A recent example: in a local Mensa group there was a discussion on the web about whether IQ tests are really necessary, since most people know what their IQ is. I dropped them a link to an article saying that the correlation between self-reported IQ and the measured value is less than 0.3. I thought that would settle the matter. Well, it did, kind of… because the discussion switched to whether “correlation 0.3” means “0.3%” or “30%”. I couldn’t make this up. IMHO a good education should prevent such things from happening.
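(For the record, and as an illustration of mine rather than anything from that thread: a correlation coefficient is not a percentage at all. Pearson’s r is a unitless number between −1 and 1, so r = 0.3 means the two variables share only about r² ≈ 9% of their variance. A minimal sketch with made-up numbers:)

    # Pearson's r is a unitless number in [-1, 1], neither "0.3%" nor "30%".
    # The data below are made up for illustration, not real IQ measurements.
    import statistics

    self_reported = [130, 145, 120, 150, 135, 128, 160, 140]
    measured      = [125, 138, 122, 141, 129, 131, 144, 132]

    r = statistics.correlation(self_reported, measured)  # requires Python 3.10+
    print(f"r = {r:.2f}, shared variance = {r * r:.1%}")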
Though I agree that the conversion from “knowledge” to “money” is overestimated, or at least is not very straightforward.
You are advocating a strategically devised network of knowledge which would always offer you support from the nearest base when you are wandering in previously unknown land. “Here come the marines”—you can always count on that.
Well, in science you can’t. Sometimes you must fight the marines as enemies, and you are often so far out that nobody even knows about you. You are on your own, and all the heavy equipment is both useless and too expensive to carry.
This is the situation when the stakes are high, when it really matters. When it doesn’t, it doesn’t anyway.
I think we can plausibly fight this by improving education to compress the time needed to teach concepts. Hardly any modern education uses the Socratic method, which in my experience is much faster than conventional methods, and which could in theory be executed by semi-intelligent computer programs (the Stanford machine learning class embedding questions partway through its videos is just the first step).
Also, SENS.
Even better would be http://en.wikipedia.org/wiki/Bloom%27s_2_Sigma_Problem incidentally, and my own idée fixe, spaced repetition.
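(For anyone who hasn’t seen it, the core of spaced repetition is just a scheduling rule: each successful review multiplies the delay before the next one, so per-item review effort falls off roughly geometrically. A minimal sketch, with interval constants I made up rather than the actual SM-2 parameters:)

    # Minimal spaced-repetition scheduler: the review interval grows
    # geometrically on success and resets on failure. Constants are
    # illustrative, not the real SM-2 values.
    from dataclasses import dataclass

    @dataclass
    class Card:
        interval_days: float = 1.0          # wait this long before the next review

        def review(self, remembered: bool) -> float:
            if remembered:
                self.interval_days *= 2.5   # memory is consolidating: back off
            else:
                self.interval_days = 1.0    # forgotten: restart the schedule
            return self.interval_days

    card = Card()
    for outcome in [True, True, True, False, True]:
        print(f"next review in {card.review(outcome):.1f} days")
    # next review in 2.5 / 6.2 / 15.6 / 1.0 / 2.5 days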
As with Moore’s Law, at any point proponents have a stable of solutions for tackling the problem; those solutions (or enough of them) have been successful for Moore’s Law, and it has indeed continued pretty smoothly, so if they were to propose some SENS-style intervention, I’d give them decent credit for it. But in this case, the overall stylized evidence says that nothing has reversed the changes up until, I guess, the ’80s, at which point one could begin arguing that there’s underestimation involved (especially for the Nobel prizes). SENS and online education are great, but reversing this trend any time soon? It doesn’t seem terribly likely.
(I also wonder how big a gap there will be between the standard courses and the ‘cutting edge’—if we make substantial gains in teaching the core courses, but there’s a ‘no man’s land’ of long-tail topics too niche to program and maintain a course on, extending all the way out to the actual cutting edge, then the results might be more like a one-time improvement.)
Thanks for the two sigma problem link.
http://arstechnica.com/web/news/2009/04/study-surfing-the-internet-at-work-boosts-productivity.ars
The article says that internet use boosts productivity only if it takes up less than 20% of one’s time. How is this relevant to real life? :D
Also, the article suggests that the productivity improvement is not caused by the internet per se, but by taking short breaks during work.
So I think many people are beyond the point where internet use could boost their productivity.