Perhaps, as we move toward more and more complex and open-ended problems, it will get harder and harder to leave humans in the dust?
A key issue with training AIs for open-ended problems is that it’s much harder to create good training data for open-ended problems than for a game with clear rules.
It’s worth noting that some of the problems where humans still outperform computers are not really open-ended tasks but things like folding laundry.
A key difference between playing go well and being able to fold laundry well is that training data is easier to come by for go.
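To make the data-availability point concrete, here is a toy sketch (all interfaces and numbers are hypothetical stand-ins, not a real go engine): self-play generates unlimited labeled positions essentially for free, while laundry-folding data requires physical robots or human demonstrations.

```python
import random

def play_random_game(num_moves=50):
    """Stand-in for a go engine playing itself: returns (positions, winner)."""
    positions = [f"position_{i}" for i in range(num_moves)]
    winner = random.choice(["black", "white"])
    return positions, winner

dataset = []
for _ in range(1000):  # 1000 full self-play games, generated in seconds
    positions, winner = play_random_game()
    # Every position gets a label (the game's outcome) at no extra cost.
    dataset.extend((p, winner) for p in positions)

print(len(dataset))  # 50,000 labeled training examples from one cheap loop
```

Nothing analogous exists for laundry: each folding demonstration costs real-world time and hardware, so the dataset grows linearly with physical effort rather than with compute.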
If you look at the quality of decisions that many professionals make when probability is involved (meaning there’s a lot of uncertainty), it’s pretty bad.
Sure. I’m just suggesting that the self-improvement feedback loop would be slower here, because designing and deploying a new generation of fab equipment has a much longer cycle time than training a new model, no?
You don’t need a new generation of fab equipment to make advances in GPU design. Many of the improvements of the last few years did not depend on constantly moving to a new generation of fab equipment.
Ah, by “producing GPUs” I thought you meant physical manufacturing. Yes, there has been rapid progress of late in getting more FLOPs per transistor for training and inference workloads, and yes, RSI will presumably have an impact here. The cycle time would still be slower than for software: an improved model can be immediately deployed to all existing GPUs, while an improved GPU design only impacts chips produced in the future.
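A toy model of that asymmetry (the turnover rate is an assumption for illustration, not industry data): a software improvement reaches the entire existing fleet at once, while a hardware improvement only reaches chips produced after it ships.

```python
def fleet_fraction_on_new_design(years, annual_turnover=0.25):
    """Fraction of the GPU fleet using a new chip design after `years`,
    assuming a constant share of the fleet is replaced each year."""
    return 1 - (1 - annual_turnover) ** years

# A model update can be deployed to 100% of existing GPUs immediately.
software_reach = 1.0

# A new chip design diffuses gradually: after 2 years at 25%/yr turnover,
# only 1 - 0.75**2 = 0.4375 of the fleet runs it.
hardware_reach_after_2y = fleet_fraction_on_new_design(2)
```

Under these assumptions, hardware-side RSI gains compound on a multi-year timescale while software-side gains compound per training run.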
Yes, that’s not just about new generations of fab equipment.
GPU performance for training models increased faster than Moore’s law over the last decade. The improvement curve is not slow even without AI in the loop.
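A quick back-of-the-envelope comparison of doubling times (the 1000x-per-decade figure for GPU training throughput is an often-cited rough estimate, used here purely for illustration):

```python
import math

def doubling_time_years(factor, years):
    """Years per doubling, given total improvement `factor` over `years`."""
    return years * math.log(2) / math.log(factor)

# Assumption: Moore's law ~= 2x transistor density every 2 years.
moore = doubling_time_years(2, 2)      # 2.0 years per doubling

# Assumption (illustrative): ~1000x GPU training throughput over 10 years.
gpu = doubling_time_years(1000, 10)    # ~1.0 year per doubling

print(f"Moore's law doubling time: {moore:.1f} years")
print(f"GPU training-perf doubling time: {gpu:.1f} years")
```

Under those assumptions the training-performance curve doubles roughly twice as fast as transistor density alone would predict, because architecture, interconnect, and lower-precision arithmetic all improve on top of process shrinks.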