Scaling Language Model Size by 1000x relative to GPT-3. 1000x is pretty feasible, but we’ll hit difficult hardware/communication bandwidth constraints beyond 1000x, as I understand it.
I think people are hugely underestimating how much room there is to scale.
The difficulty, as you mention, is bandwidth and communication, rather than cost per bit in isolation. An A100 manages 1.6 TB/s of bandwidth to its 40 GB of on-package memory. We can afford to sacrifice some of that speed, but something like SSDs isn’t fast enough: 350 TB of SSD storage would cost just $40k, but would only manage 1-2 TB/s across the whole array, and couldn’t deliver even that to a single GPU. Adding more DRAM to the GPU hits physical scaling issues, and scaling out to larger clusters of GPUs also starts to run into difficulties after a point.
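Roughly, the arithmetic behind that comparison looks like this; the SSD price, per-drive capacity, and per-drive speed here are my own ballpark assumptions, not measurements:

```python
# Back-of-envelope numbers behind the SSD comparison above.
A100_HBM_TB_S = 1.6        # A100 40 GB: ~1.6 TB/s to its own on-package memory
ARRAY_CAPACITY_TB = 350

SSD_COST_PER_TB = 115      # assumed commodity NVMe price, USD per TB
SSD_CAPACITY_TB = 2        # assumed per-drive capacity
SSD_BW_TB_S = 0.007        # ~7 GB/s sequential per fast NVMe drive

n_drives = ARRAY_CAPACITY_TB / SSD_CAPACITY_TB        # 175 drives
array_cost = ARRAY_CAPACITY_TB * SSD_COST_PER_TB      # ~$40k
array_bw = n_drives * SSD_BW_TB_S                     # ~1.2 TB/s, spread over the array

print(f"{n_drives:.0f} drives, ~${array_cost/1e3:.0f}k, ~{array_bw:.1f} TB/s aggregate")
print(f"vs. {A100_HBM_TB_S} TB/s that a single A100 gets from its own HBM")
```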
This problem is not one of physical law, but of the particular technologies in question. DRAM is fast but has hit a scaling limit, whereas NAND scales well but is much slower. And the larger the cluster of machines, the more bandwidth you have to sacrifice to signal integrity and routing.
Thing is, these are fixable issues if you allow for technology to shift. For example:
Various sorts of persistent memory offer speed and density at once, like NRAM. There’s also 3D XPoint and other ReRAMs, various sorts of MRAM, etc.
Multiple technologies allow for connecting hardware significantly more densely than we currently do, primarily chiplets and memory stacking. Intel’s Ponte Vecchio intends to tie 96 compute dies together across 6 interconnected GPUs, each GPU built from 2 groups of 8 compute dies.
Neural networks are amenable to ‘spatial computing’ (visualization), and with appropriate algorithms the end-to-end latency can largely be ignored as long as the block-to-block latency and throughput are sufficiently high. This means there’s no clear limit to this sort of scaling, since the individual latencies are invariant to scale.
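As a toy illustration of why end-to-end latency stops mattering, here is a minimal pipelining sketch; the names and structure are illustrative, not any particular framework’s API:

```python
# Toy sketch of the pipelined / 'spatial' picture: each stage only ever talks to
# its neighbour, so only stage-to-stage latency and throughput matter. Once the
# pipeline is full, one result emerges per tick no matter how many stages you
# chain, which is why end-to-end latency doesn't cap this kind of scaling.
from collections import deque

N_STAGES = 8                                        # imagine one stage per chip or board
pipeline = [deque() for _ in range(N_STAGES + 1)]   # queue i feeds stage i

def stage_compute(i, activation):
    """Stand-in for the work one block of the network does on one device."""
    return activation + 1                           # placeholder for a layer's math

def tick():
    """One pipeline step: every stage advances its own micro-batch in parallel."""
    for i in reversed(range(N_STAGES)):             # move later stages first
        if pipeline[i]:
            pipeline[i + 1].append(stage_compute(i, pipeline[i].popleft()))

for t in range(20):                                 # feed a stream of micro-batches
    pipeline[0].append(t)
    tick()

print(list(pipeline[-1]))                           # steady one-per-tick output stream
```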
The switches between the machines aren’t at a limit yet either, thanks to silicon photonics, which can even be integrated alongside compute dies. That example is in a switch, but photonics can just as well be integrated alongside GPUs.
You mention this, but to complete the list: sparse training makes scale-out vastly easier, at the cost of reducing the effectiveness of scaling. GShard showed effectiveness at >99.9% sparsity for mixture-of-experts models, and it seems natural to imagine that a more flexible scheme with only, say, 90% training sparsity and support for full-density inference would allow for 10x scaling without meaningful downsides.
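For a concrete picture of that sparsity, here is a toy top-k routing sketch, a hypothetical stand-in rather than GShard’s actual implementation; the dimensions are illustrative:

```python
# Toy sketch of GShard-style mixture-of-experts routing. Each token activates
# only its top-2 of 2048 experts, so >99.9% of the expert parameters are
# untouched for any given token -- that's the training sparsity.
import numpy as np

n_experts, top_k, d_model = 2048, 2, 16
rng = np.random.default_rng(0)
router = rng.standard_normal((d_model, n_experts))   # stand-in gating weights

def route(token):
    """Return the indices of the experts this token gets sent to."""
    scores = token @ router                          # one gating score per expert
    return np.argsort(scores)[-top_k:]               # keep only the top-k experts

token = rng.standard_normal(d_model)
chosen = route(token)
sparsity = 1 - top_k / n_experts
print(f"experts used: {chosen}, fraction of experts skipped: {sparsity:.2%}")
```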
It seems plausible to me that a Manhattan Project could scale to models with a quintillion parameters, i.e. roughly 10,000,000x scaling over GPT-3, within 15 years, using only lightweight training sparsity. That’s not to say it’s necessarily feasible, only that I can’t rule out technology allowing that level of scaling.
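The multiplier there is just the ratio of a quintillion to GPT-3’s published parameter count, rounded to the nearest order of magnitude:

```python
# Spelling out the "10,000,000x" figure (GPT-3's 175B parameters are from its paper).
gpt3_params = 175e9
target_params = 1e18                 # one quintillion parameters
print(target_params / gpt3_params)   # ~5.7e6, i.e. roughly 10,000,000x at order-of-magnitude
```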
When I cite scaling limit numbers, I’m mostly deferring to my personal discussions with Tim Dettmers (whose research is on hardware, sparsity, and language models), so I’d check out his comment on this post for more details on his view of why we’ll hit scaling limits soon!
I disagree with that post and its first two links so thoroughly that any direct reply or commentary on it would be more negative than I’d like to be on this site. (I do appreciate your comment, though, don’t take this as discouragement for clarifying your position.) I don’t want to leave it at that, so instead let me give a quick thought experiment.
A neuron’s signal-hop latency is about 5 ms, and in that time light can travel about 1500 km, a distance approximately equal to the radius of the moon. You could build a machine literally the size of the moon, floating in deep space, before the speed of light between its ‘neurons’ became a problem relative to chemical signalling in biology, as long as no single connection spanned more than halfway across it. Unlike today’s silicon chips, a system like this would be restricted by the same signal-propagation limits that the brain is, but still: it’s the size of the moon. You could hook this moon-sized computer to a human-shaped shell on Earth, and as long as the computer was directly overhead, that body could be as responsive, and as continuously updated, as a real human’s.
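The arithmetic behind that comparison, for anyone who wants to check it:

```python
# How far light gets in one neural signalling hop.
c_km_per_s = 299_792                 # speed of light in vacuum
hop_latency_s = 0.005                # ~5 ms neuron-to-neuron signalling latency
moon_radius_km = 1_737

light_reach_km = c_km_per_s * hop_latency_s
print(f"light travels ~{light_reach_km:.0f} km per hop")   # ~1499 km
print(f"moon radius: {moon_radius_km} km, ratio {light_reach_km / moon_radius_km:.2f}")
```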
While such a computer is obviously impractical on many levels, I find it a good frame of reference for thinking about how computers scale upwards, much like Feynman’s ‘There’s Plenty of Room at the Bottom’ was a good frame of reference for scaling down, delivered back when transistors were still wired by hand. In particular, the speed of light is not a problem, and will never become one, except where it’s a resource we use inefficiently.