I also chatted with a few GPU researchers at NeurIPS, and their take was that computing power will hit a peak, making AGI near-impossible. The newer GPUs from Google and Tesla are not necessarily better; they mainly avoid NVIDIA’s 4x markup on GPU prices.
I disagree, primarily because I think the human brain is already near the limits of efficiency, and thus I see AGI as mostly possible. I also think that more energy will probably be devoted to computing. What I do agree with is that there probably aren’t Pareto improvements over the brain that AGI can make. (At least not without exotica working out.)
Here’s a link:
https://www.lesswrong.com/posts/xwBuoE9p8GE7RAuhd/brain-efficiency-much-more-than-you-wanted-to-know
What might not be obvious from the post is that I also definitely disagree with the “AGI near-impossible” claim, for the same reasons; that claim reflects the views of the GPU R&D engineers I talked with, not my own. However, the limit on GPU performance gains is a significant update against the ladder of assumptions by which “scaling is all you need” leads to AGI.
If I were to update at all, I’d update towards: without exotic computers, AGI cannot achieve a Pareto improvement over the brain. I’d also update towards continuous takeoff. Scaling will still get you there, because AGI is an endgame goal (at least with neuromorphic chips).
I think we agree here. Those both seem like updates against “scaling is all you need”, i.e. (in this case) “data for DL in ANNs on GPUs is all you need”.
That’s where I disagree, because to my mind this doesn’t undermine “scale is all you need”. It does undermine the idea that a basement group could produce AGI, but beyond that it just sets concrete limits on what AGI can do with a given amount of energy.