In this post, Jessicata describes an organization which believes:
AGI is probably coming in the next 20 years.
Many of the reasons we have for believing this are secret.
They’re secret because if we told people about those reasons, they’d learn things that would let them make an AGI even sooner than they would otherwise.
At the time, I didn’t understand why an organization would believe that. I figured they had some insight into the nature of intelligence, some special new architecture for AI designs, that would accelerate AI progress if more people knew about it. I was skeptical, because what are the odds that breakthroughs in fundamental AI science would come from such an organization? Surely we’d expect such breakthroughs to come from e.g. DeepMind.
Now I realize: Of course! The secret wasn’t a dramatically new architecture; it was that dramatically new architectures aren’t needed. It was the scaling hypothesis. This seems much more plausible to me.