A whole community of rationalists and nobody has noticed that his elementary math is wrong?
1.9 gigaFLOPS doubled 8 times is around 500 gigaFLOPS, not 500 teraFLOPS.
Big difference, and one that trashes his conclusion.
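Spelled out: 1.9 gigaFLOPS × 2^8 = 1.9 × 256 ≈ 486 gigaFLOPS, i.e. about 4.9 × 10^11 FLOPS. Reaching 500 teraFLOPS (5 × 10^14 FLOPS) would take roughly 18 doublings, since 1.9 × 2^18 ≈ 498,000 gigaFLOPS.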
*headdesk*
Yet another reason to favor exponential notation, I think.
Whoa!
You are right!
I originally excluded the modern supercomputers, because they are a different category of beast—not really “computers”, but networks of computers. Then I included Ranger when I saw its hardware costs were only $30 million.
I certainly hope that Moore’s Law will end, but I don’t think we could get that lucky.
Besides there being some further room to reduce element size, there are 3D chip stacks; between these two factors, there should be at least a couple more possible doublings of processor power down the line. I haven’t run the math to estimate the theoretical limits.
Well, others have computed the limits. The 3rd reference links to a full article deriving the limits permitted by the laws of physics.
Seth Lloyd calculated the computational abilities of an “ultimate laptop” formed by compressing a kilogram of matter into a black hole of radius 1.485 × 10^-27 meters, concluding that it would only last about 10^-19 seconds before evaporating due to Hawking radiation, but that during this brief time it could compute at a rate of about 5 × 10^50 operations per second, ultimately performing about 10^32 operations on 10^16 bits.
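As a rough check on those figures: the Schwarzschild radius of one kilogram is r = 2Gm/c^2 = 2 × (6.67 × 10^-11) / (3 × 10^8)^2 ≈ 1.5 × 10^-27 meters, and 5 × 10^50 operations per second sustained for 10^-19 seconds gives about 5 × 10^31 ≈ 10^32 operations.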
You’re not going to get that with silicon technology in the next twenty years, though—that’s the more urgent question.
I agree—I was just answering the question of what the ultimate, inviolable limits are, to establish when the improvements have to stop.
I recall hearing that 3D chips would have some rather serious cooling issues, but I suppose that isn’t an obviously unsolvable problem.
“Rather serious cooling issues” is an accurate characterization, but current electronic packaging technology does very little with direct liquid cooling—there’s room to take the heat out if we can crack the theoretical challenges. You would only need to boil a few grams of liquid per second to take off kilowatts of power.
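As a rough check, assuming water as the working fluid: its latent heat of vaporization is about 2.26 kJ/g, so boiling 1 gram per second carries away roughly 2.3 kW, and a few grams per second handles several kilowatts.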
It will eventually “end” if we count switching to any non-semiconductor technology as ending it. I don’t have a strong opinion as to how long it will last.
Edited: And then, in 2008, Roadrunner, with a top speed of 1700 teraflops, a mere tripling in one year.
:3
Where you measure from matters a lot.
Er, that’s less than 2 doublings.
In half the expected time for 1 doubling.
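To put numbers on it: a tripling is log2(3) ≈ 1.6 doublings, while one year at the usual two-year doubling time should give only half a doubling, a factor of 2^0.5 ≈ 1.4.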
It depends on what sort of thing Moore’s Law is. Maybe we’d be better off if it were called Moore’s Intriguing Observation.
There’s economic pressure to make information processing cheaper, faster, and more efficient, but that’s only important if the improvements are physically possible at present levels of knowledge.
There’s economic pressure to improve batteries, but improvements aren’t happening nearly as fast.
I don’t understand this distinction. Moore’s Law appears to take the form of a law.
Moore’s Law takes the form of a law, but it doesn’t have the weight of observation behind it that the law of gravity does.
Moore’s Law started taking effect at some point. It’s very likely that it will come to an end. If we had a bunch of independent intelligent species and knew their computation development curves, we’d have something a lot more like a Law.
Some confusion seems to be arising over what “change” means. Let X denote the quantity whose change (in absolute terms) we are referring to. Is X:
1. The number of transistors on a chip (call it N)?
2. log2(N)?
3. The utility produced by those chips (which seems like it ought to be proportional to something between N and log2(N))?
4. Or something else?
If Kurzweil is thinking about (1) and you are thinking about (2), then you could easily both be correct (i.e. more transistors are being added to chips this year than ever before, even though the exponent governing the growth is lower).
Put another way: the rate of change may be at historic highs even if the rate of change of the rate of change is not.
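A toy example, with made-up numbers: if a chip has N = 10^9 transistors doubling every 2 years, it gains dN/dt = (ln 2 / 2) × 10^9 ≈ 3.5 × 10^8 transistors per year; back when N was 10^6 and doubled every year, the gain was only ln 2 × 10^6 ≈ 7 × 10^5 per year. The absolute rate dN/dt = (ln 2 / T) × N can keep hitting historic highs even while the doubling time T lengthens, as long as N/T keeps growing.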
Yes. But I don’t think that’s compatible with his argument. He posits, basically, that progress is locked in a feedback loop, and the “rate of change of the rate of change” is proportional to the rate of change. The situation you just described is therefore impossible in his model.
I’m not sure who or what you’re even arguing against here.
Thanks for being up-front—it was lousy. It didn’t say what I was really trying to say. I rewrote it.
I think Kurzweil talks primarily about price-performance, rather than transistor numbers or clock speeds.
Okay. Price-performance is a less precise measure, because we can’t compare prices across decades accurately. I think you’d get similar results; though if they differed, I’d expect them to make the pre-Moore’s-Law change look even faster. (The development cost of the ENIAC, $6M in 2008 dollars according to Wikipedia, was small compared to the per-unit costs of later large computers: it’s much less in inflation-adjusted dollars than the IBM 7090 ($3M in 1960) or the Cray 2 ($25M in 1985) cost per machine.)
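For rough scale (CPI-based, so only approximate): $3M in 1960 is on the order of $20M in 2008 dollars, and $25M in 1985 is on the order of $50M, both several times the ENIAC’s $6M.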
Isn’t Kurzweil’s primary claim that technological progress has always been an exponential and that Moore’s law is just the best known instance of a broader phenomenon?
I’ve noticed that many people miss Kurzweil’s actual claims. For instance, I keep encountering the misconception that Kurzweil claims technical advances will become infinite, which is just silly; he never claims this. I have talked briefly with him about it, and he says that the exponential climb will probably give way to a new paradigm that changes the way things are done, rather than continuing to infinity (or approaching an asymptote).
Yes, that’s true.
I guess my point is confused. Let me reformulate… done.
I’m going to delete this post. I wasn’t happy with it even before mathemajician pointed out the important error in it.
According to Intel’s Justin Rattner at the 2008 Singularity Summit, Moore’s Law ended in 2005/06. The discrete transistor is a thing of the past, according to his talk.
Can you give more details of what he says stopped? Clock speeds stopped getting faster around then or earlier, but the result was to expand into multi-core chips and graphics coprocessors.
Couple of things… A leap ahead in computing would not necessarily mean that Moore’s Law failed to describe the event. It could still follow the same exponential trend, yet look like a giant leap forward.
It is likely that Moore’s Law will continue, given the economic pressure to find newer and faster ways to compute. That may have nothing to do with shrinking transistor size; it may well involve other forms of computation or other chip architectures.