The problem here is that Yudkowsky is ignoring cultural evolution.
I think it’s obvious that EY’s model ignores all sorts of things—the question is whether or not these things are worth not ignoring.
The process that is responsible for Moore’s law involves human engineers, but it also involves human culture, machines and software.
So are you arguing for some superexponential growth from cultural evolution changing this process, or what? It’s completely unclear why this matters.
Human engineers’ DNA may have stayed unchanged over the last century, but their cultural software has improved dramatically over that same period—resulting in the Flynn effect.
That may be your explanation for the Flynn effect, but I think it’s safer to remain on the fence. There are too many other possible causal mechanisms at play to blame it on cultural evolution.
Only by considering how this phenomenon is rooted in the present day can it be properly understood.
Show me a modification to one of the basic models that follows from this statement and changes the consequence of the argument.
An easy basic test of whether humans are currently the limiting factor in a process is to ask whether the labs run all night, with researchers sometimes standing idle until the results come in; a lab that runs 9-5 can be sped up by at least a factor of 3 if the individual researchers don’t have to eat, sleep or go to the bathroom.
An easy basic test of whether humans are currently the limiting factor in a process is to ask whether the labs run all night, with researchers sometimes standing idle until the results come in
It is my experience that many labs do in fact run all night, with researchers taking shifts babysitting equipment and watching data roll in.
Well, those are the labs that don’t have a blindingly obvious route to speedups just by speeding up the researchers, though de facto I’d expect it to work anyway up to a point.
When I wrote my thesis, a major limiting factor was the speed of the computers doing the analysis; I would start the latest variant of my program in the afternoon, and come back next morning to see what it reported. I’m currently working on software to take advantage of massively-parallel processors to speed up this process by a couple of orders of magnitude.
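For what it’s worth, the kind of speedup described is often just a matter of running independent analysis variants in parallel. A minimal sketch, assuming the runs are independent of one another; the function name and the variant list here are invented for illustration, not taken from the actual thesis software:

```python
# Hypothetical sketch: distributing independent analysis runs across CPU
# cores with Python's multiprocessing. analyze_variant stands in for one
# overnight analysis run; here it is just a toy computation.
from multiprocessing import Pool

def analyze_variant(params):
    # Placeholder for a long-running analysis parameterized by `params`.
    return sum(i * params for i in range(1000))

if __name__ == "__main__":
    variants = [1, 2, 3, 4]
    with Pool() as pool:  # defaults to one worker process per core
        results = pool.map(analyze_variant, variants)
    print(results)
```

This only helps when the variants do not depend on each other’s results, which matches the “start it in the afternoon, read it in the morning” workflow described above.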
The difficulty of brain-computer interfaces is that the brain does not appear to work with any known executable format, making running anything on it something of a hit-and-miss affair.
Of course, he could solve this by simply increasing the precision of his computer calculations until it’s the right speed for his brain...
Not every part of research is glamorous; there is a lot of routine labor to do, and most of the time it’s the researchers (grad students or postdocs) doing it. In the first lab I ever worked in, we spent about 3 months designing and building the experiment and almost a year straight on round-the-clock data collection. I suppose you could say we temporarily stopped being researchers and became technicians, but that seems a bit odd. During one of my postdocs, a good 60% of my job was sysadmin-type work to keep a cluster running while waiting for code to run. My point is that the rate-limiting step in a lot of research is that experiments take time to perform and code takes time to run. Most labs have experiments or code running round the clock.
I guess if you want to differentiate technician work from researcher work, you could do something non-standard and say that every postdoc/grad student in a lab is 30% sales (after all, begging for money isn’t being a researcher, properly understood), 60% technician, 10% researcher.
The cluster of thingspace you’re referring to can properly be called researchers (probably).
Just the same, if that were how the term were typically used—for cases where the deep theoretical, high-inferential-distance understanding is vital for core job functions—I would not feel the need to raise the point I did.
Rather, I feel the need to point out the possible mislabeling because people tend to inflate their own job descriptions, and because I frequently observe anyone working in a lab-like environment being classified as a “researcher” or as “doing research”, regardless of how small the intellectual component of their contribution is.
(A high-profile example of this mistake is Freeman Dyson’s criticism of climate scientists for being too lazy to do the hard work of collecting data in extreme conditions, even though that data collection is itself not the scientific component of the work. Start from: “It is much easier for a scientist to sit in an air-conditioned building...”)
“Baby-sitting equipment” is rather a condescending description of what a shift-taker at a particle physics experiment does. This being said, it must be admitted that the cheapness of grad-student labour is a factor in the staffing decisions, here.
An easy basic test of whether humans are currently the limiting factor in a process is to ask whether the labs run all night, with researchers sometimes standing idle until the results come in [...]
That “incorrectly” bundles culture in with the human engineers.
To separate the improving components (culture, machines, software) from the relatively static ones (systems based on human DNA), you would have to look at the demand for uncultured human beings. There are a few natural experiments in this area out there—in the form of feral children. Demand for this type of individual rarely appears to be a limiting factor in most enterprises. It is clear that progress is driven by systems that are themselves progressing and improving.
As for computers—they may not be on the critical path as often as humans, but that doesn’t mean that their contributions to progress are small. What it does reflect is their immense serial speed. That is partly because we engineered them to compensate for our weaknesses.
If you know about computers operating with high serial speed, then observing that computers are waiting around for humans more than humans are waiting around for computers tells you next to nothing about their relative contributions to making progress. This proposed test is too “basic” to be of much use.
Other ways of comparing the roles of men and machines involve looking at their cost and/or their weight. However you look at it, the influence of today’s technology on progress is hard to ignore. If someone were to take Intel’s tools away from its human employees, the company’s contributions to progress would immediately halt.
The process that is responsible for Moore’s law involves human engineers, but it also involves human culture, machines and software.
So are you arguing for some superexponential growth from cultural evolution changing this process, or what? It’s completely unclear why this matters.
The position I’m arguing against is:
if our old extrapolation was for Moore’s Law to follow such-and-such curve given human engineers, then faster engineers should break upward from that extrapolation.
This treats human engineers as a fixed quantity. However, the process that actually produces Moore’s law involves human engineers, human culture, machines and software. Only the engineers are relatively unchanging. Culture, machines and software are all improving dramatically as time passes—and they are absolutely the reason why Moore’s law can keep up the pace. Yudkowsky has a long history of not properly understanding this process—and it hinders his analysis.
Human engineers’ DNA may have stayed unchanged over the last century, but their cultural software has improved dramatically over that same period—resulting in the Flynn effect.
That may be your explanation for the Flynn effect, but I think it’s safer to remain on the fence. There are too many other possible causal mechanisms at play to blame it on cultural evolution.
Only by considering how this phenomenon is rooted in the present day can it be properly understood.
Show me a modification to one of the basic models that follows from this statement and changes the consequence of the argument.
That seems like a vague and expensive-sounding order. How would seeing “a modification to one of the basic models that follows from this statement and changes the consequence of the argument” add to the discussion?
This treats human engineers as a fixed quantity. However, the process that actually produces Moore’s law involves human engineers, human culture, machines and software. Only the engineers are relatively unchanging. Culture, machines and software are all improving dramatically as time passes—and they are absolutely the reason why Moore’s law can keep up the pace.
So then Moore’s law should be faster than Yudkowsky’s analysis predicts, because of cultural evolution? I still have no idea what you’re trying to argue.
Yudkowsky has a long history of not properly understanding this process—and it hinders his analysis.
How does it hinder his analysis? Please give me something concrete to work with. For example, when a mathematician says “Only by looking at the cohomology groups of a space can we properly understand the topology of its holes,” it means that under any weaker theory (e.g., looking only at the Euler characteristic—see Lakatos’ Proofs and Refutations) one quickly runs into problems (e.g., a torus has the same Euler characteristic as a Möbius strip, but their cohomology groups are very different).
All of the proposed explanations of the Flynn effect can be expressed in terms of cultural evolution
Granted. I still don’t think you could cause the Flynn effect by inducing cultural evolution (whatever that means). The reactionaries would have a field day regaling you with tales of Ethiopia and decolonization.
Only by considering how this phenomenon is rooted in the present day can it be properly understood.
Show me a modification to one of the basic models that follows from this statement and changes the consequence of the argument.
That seems like an expensive-sounding order.
Should be as simple as modifying a few terms and solving a differential equation, or perhaps a system of them. Doing such things is why humans invented computers. More importantly, it would be an actionable contribution to the study.
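To make the “modifying a few terms” suggestion concrete, here is a toy sketch of my own construction (not from either commenter’s actual model): a growth equation for progress P, with an optional improving culture-and-tools factor C feeding into the rate. Simple Euler integration stands in for a proper ODE solver, and every parameter value is an illustrative assumption:

```python
# Toy model: baseline exponential progress vs. progress whose growth rate
# is scaled by an improving culture/tools factor C. feedback=0 recovers
# the "engineers as a fixed quantity" case.
def simulate(steps=100, dt=0.1, k=0.5, feedback=0.0):
    P, C = 1.0, 1.0
    for _ in range(steps):
        P += k * C * P * dt    # progress grows at a rate scaled by C
        C += feedback * C * dt # culture/tools improve (0 means they are fixed)
    return P

fixed = simulate(feedback=0.0)      # engineers treated as a fixed quantity
improving = simulate(feedback=0.2)  # culture and tools also improving
print(fixed, improving)
```

The point of such an exercise would be exactly the “rent” demanded above: if adding the culture term changes the trajectory, the term matters; if it can be absorbed into the fitted constants, it does not.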
How would seeing “a modification to one of the basic models that follows from this statement and changes the consequence of the argument” add to the discussion?
It’d be the rent for believing cultural evolution is significantly relevant to the model.
So then Moore’s law should be faster than Yudkowsky’s analysis predicts, because of cultural evolution? I still have no idea what you’re trying to argue.
It seems to me that timtyler’s point is that Yudkowsky is wrong to claim that the current Moore’s law was extrapolated from fixed-speed engineers. Engineers were ALREADY using computers to enhance their productivity, and timtyler suggests that cultural factors also increase the engineers’ speed. The cycle of build faster computer → increase engineering productivity → build even faster computer → increase engineering productivity even more, etc. was already cooked into the extrapolation, so there is no reason to assume we’ll break above it.
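The cycle can be rendered as a toy iteration (my own construction, purely illustrative; the exponent and rate below are invented, not measured):

```python
# Toy rendering of the feedback cycle: faster computers raise engineering
# productivity, and that productivity is used to build the next, faster
# generation of computers.
speed, history = 1.0, []
for year in range(10):
    productivity = speed ** 0.5      # assumed: productivity scales with speed
    speed *= 1 + 0.4 * productivity  # next generation built at that productivity
    history.append(speed)

print(history)
```

The series is superexponential in this toy, but any curve fitted to `history` already has the feedback baked in, which is the point: the loop’s existence alone gives no reason to expect a break above the fitted extrapolation.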
All of the proposed explanations of the Flynn effect can be expressed in terms of cultural evolution
Granted. I still don’t think you could cause the Flynn effect by inducing cultural evolution (whatever that means). The reactionaries would have a field day regaling you with tales of Ethiopia and decolonization.
Modern cultural evolution is, on average, progressive. Fundamentally, that’s because evolution is a giant optimization process operating in a relatively benign environment. The Flynn effect is one part of that.
It’d be the rent for believing cultural evolution is significantly relevant to the model.
Machine intelligence will be a product of human culture. The process of building machine intelligence is cultural evolution in action. In the future, we will make a society of machines that will share cultural information to recapitulate the evolution of human society. That’s what memetic algorithms are all about.
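Since “memetic algorithms” may be unfamiliar: they are genetic algorithms augmented with individual learning (a local search step) between generations, loosely analogous to cultural transmission plus within-lifetime learning. A minimal, self-contained sketch, where the fitness function, step sizes, and population parameters are all my own illustrative choices:

```python
# Minimal memetic algorithm: evolutionary selection/crossover plus a
# hill-climbing "learning" step applied to each individual every generation.
import random

def fitness(x):
    return -(x - 3.0) ** 2  # toy objective: maximum at x = 3

def local_search(x, step=0.05, iters=10):
    for _ in range(iters):  # hill-climb: the individual-learning step
        for cand in (x - step, x + step):
            if fitness(cand) > fitness(x):
                x = cand
    return x

def memetic(pop_size=20, gens=30):
    random.seed(0)
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(gens):
        pop = [local_search(x) for x in pop]  # learning within one "lifetime"
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]        # selection
        children = [(random.choice(parents) + random.choice(parents)) / 2
                    + random.gauss(0, 0.1)    # crossover plus mutation
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

print(memetic())
```

The hybrid typically converges faster than either pure evolution or pure hill-climbing alone, which is the usual motivation for the “memetic” combination.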
I think it’s obvious that EY’s model ignores all sorts of things—the question is whether or not these things are worth not ignoring.
So are you arguing for some superexponential growth from cultural evolution changing this process, or what? It’s completely unclear why this matters.
That may be your explanation for the Flynn effect, but I think it’s safer to remain on the fence. There are too many other possible causal mechanisms at play to blame it on cultural evolution.
Show me a modification to one of the basic models that follows from this statement and changes the consequence of the argument.
An easy basic test of whether humans are currently the limiting factor in a process is to ask whether the labs run all night, with researchers sometimes standing idle until the results come in; a lab that runs 9-5 can be sped up by at least a factor of 3 if the individual researchers don’t have to eat, sleep or go to the bathroom.
It is my experience that many labs do in fact run all night, with researchers taking shifts babysitting equipment and watching data roll in.
Well, those are the labs that don’t have a blindingly obvious route to speedups just by speeding up the researchers, though de facto I’d expect it to work anyway up to a point.
When I wrote my thesis, a major limiting factor was the speed of the computers doing the analysis; I would start the latest variant of my program in the afternoon, and come back next morning to see what it reported. I’m currently working on software to take advantage of massively-parallel processors to speed up this process by a couple of orders of magnitude.
Next time, try shifting processing resources from your brain to the analytic computers until neither is waiting on the other!
Ahem
But then his brain will be too slow.
The difficulty of brain-computer interfaces is that the brain does not appear to work with any known executable format, making running anything on it something of a hit-and-miss affair.
Of course, he could solve this by simply increasing the precision of his computer calculations until it’s the right speed for his brain...
Someone baby-sitting equipment is a technician, not a researcher, properly understood.
Not every part of research is glamorous; there is a lot of routine labor to do, and most of the time it’s the researchers (grad students or postdocs) doing it. In the first lab I ever worked in, we spent about 3 months designing and building the experiment and almost a year straight on round-the-clock data collection. I suppose you could say we temporarily stopped being researchers and became technicians, but that seems a bit odd. During one of my postdocs, a good 60% of my job was sysadmin-type work to keep a cluster running while waiting for code to run. My point is that the rate-limiting step in a lot of research is that experiments take time to perform and code takes time to run. Most labs have experiments or code running round the clock.
I guess if you want to differentiate technician work from researcher work, you could do something non-standard and say that every postdoc/grad student in a lab is 30% sales (after all, begging for money isn’t being a researcher, properly understood), 60% technician, 10% researcher.
The cluster of thingspace you’re referring to can properly be called researchers (probably).
Just the same, if that were how the term were typically used—for cases where the deep theoretical, high-inferential-distance understanding is vital for core job functions—I would not feel the need to raise the point I did.
Rather, I feel the need to point out the possible mislabeling because people tend to inflate their own job descriptions, and because I frequently observe anyone working in a lab-like environment being classified as a “researcher” or as “doing research”, regardless of how small the intellectual component of their contribution is.
(A high-profile example of this mistake is Freeman Dyson’s criticism of climate scientists for being too lazy to do the hard work of collecting data in extreme conditions, even though that data collection is itself not the scientific component of the work. Start from: “It is much easier for a scientist to sit in an air-conditioned building...”)
“Baby-sitting equipment” is rather a condescending description of what a shift-taker at a particle physics experiment does. This being said, it must be admitted that the cheapness of grad-student labour is a factor in the staffing decisions, here.
That “incorrectly” bundles culture in with the human engineers.
To separate the improving components (culture, machines, software) from the relatively static ones (systems based on human DNA), you would have to look at the demand for uncultured human beings. There are a few natural experiments in this area out there—in the form of feral children. Demand for this type of individual rarely appears to be a limiting factor in most enterprises. It is clear that progress is driven by systems that are themselves progressing and improving.
As for computers—they may not be on the critical path as often as humans, but that doesn’t mean that their contributions to progress are small. What it does reflect is their immense serial speed. That is partly because we engineered them to compensate for our weaknesses.
If you know about computers operating with high serial speed, then observing that computers are waiting around for humans more than humans are waiting around for computers tells you next to nothing about their relative contributions to making progress. This proposed test is too “basic” to be of much use.
Other ways of comparing the roles of men and machines involve looking at their cost and/or their weight. However you look at it, the influence of today’s technology on progress is hard to ignore. If someone were to take Intel’s tools away from its human employees, the company’s contributions to progress would immediately halt.
The position I’m arguing against is:
This treats human engineers as a fixed quantity. However, the process that actually produces Moore’s law involves human engineers, human culture, machines and software. Only the engineers are relatively unchanging. Culture, machines and software are all improving dramatically as time passes—and they are absolutely the reason why Moore’s law can keep up the pace. Yudkowsky has a long history of not properly understanding this process—and it hinders his analysis.
All of the proposed explanations of the Flynn effect can be expressed in terms of cultural evolution—except perhaps for heterosis, which is rather obviously incapable of explaining the observed effect.
That seems like a vague and expensive-sounding order. How would seeing “a modification to one of the basic models that follows from this statement and changes the consequence of the argument” add to the discussion?
So then Moore’s law should be faster than Yudkowsky’s analysis predicts, because of cultural evolution? I still have no idea what you’re trying to argue.
How does it hinder his analysis? Please give me something concrete to work with. For example, when a mathematician says “Only by looking at the cohomology groups of a space can we properly understand the topology of its holes,” it means that under any weaker theory (e.g., looking only at the Euler characteristic—see Lakatos’ Proofs and Refutations) one quickly runs into problems (e.g., a torus has the same Euler characteristic as a Möbius strip, but their cohomology groups are very different).
Granted. I still don’t think you could cause the Flynn effect by inducing cultural evolution (whatever that means). The reactionaries would have a field day regaling you with tales of Ethiopia and decolonization.
Should be as simple as modifying a few terms and solving a differential equation, or perhaps a system of them. Doing such things is why humans invented computers. More importantly, it would be an actionable contribution to the study.
It’d be the rent for believing cultural evolution is significantly relevant to the model.
It seems to me that timtyler’s point is that Yudkowsky is wrong to claim that the current Moore’s law was extrapolated from fixed-speed engineers. Engineers were ALREADY using computers to enhance their productivity, and timtyler suggests that cultural factors also increase the engineers’ speed. The cycle of build faster computer → increase engineering productivity → build even faster computer → increase engineering productivity even more, etc. was already cooked into the extrapolation, so there is no reason to assume we’ll break above it.
That’s a correct summary. See also my Self-Improving Systems Are Here Already.
Using computers and culture to enhance productivity is often known as intelligence augmentation. It’s an important phenomenon.
Modern cultural evolution is, on average, progressive. Fundamentally, that’s because evolution is a giant optimization process operating in a relatively benign environment. The Flynn effect is one part of that.
Machine intelligence will be a product of human culture. The process of building machine intelligence is cultural evolution in action. In the future, we will make a society of machines that will share cultural information to recapitulate the evolution of human society. That’s what memetic algorithms are all about.
There is an ocean between us. I keep asking for specifics, and you keep giving generalities.
I give up. There was an interesting idea somewhere in here, but it was lost in too many magical categories.
Hmm. Maybe you think I am not being specific—because you are not familiar with my material on this topic?
To recap, the basics of my preferred model of this process are here.