Human, All Too Human—Superintelligence requires learning things we can’t teach

Link post

Are we on the verge of an intelligence explosion? Maybe, but scaling alone won’t get us there.

Why? The human data bottleneck. Today’s models are dependent on human data and human feedback.

Human-level intelligence (AGI) might be achievable by teaching AI everything we know, but superintelligence (ASI) requires learning things we **don’t** know.

To learn something fundamentally new, something no human could teach it, an AI needs two ingredients: exploration and ground-truth feedback.

  • Exploration: The ability to try new strategies, experiment with new ways of thinking, and discover new patterns beyond those present in human-generated training data.

  • Ground-Truth Feedback: The ability to learn from the outcomes of exploration: a way to tell whether new strategies, perhaps ones no human would even recognize as correct, actually work in the real world. (A toy sketch of this loop follows.)
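
To make these two ingredients concrete, here is a minimal sketch (mine, not from the post): an epsilon-greedy bandit that finds the best strategy purely through exploration plus a verifiable reward signal, with no human labels involved. All names and payoff values are illustrative.

```python
import random

# Hidden ground truth: the agent never sees these payoff probabilities.
# It can only act and observe outcomes. Values are illustrative.
TRUE_PAYOFFS = [0.2, 0.5, 0.9]
EPSILON = 0.1  # fraction of steps spent exploring

estimates = [0.0] * len(TRUE_PAYOFFS)  # agent's learned value per strategy
counts = [0] * len(TRUE_PAYOFFS)

for step in range(10_000):
    # Exploration: occasionally try a strategy other than the current best guess.
    if random.random() < EPSILON:
        arm = random.randrange(len(TRUE_PAYOFFS))
    else:
        arm = max(range(len(TRUE_PAYOFFS)), key=lambda a: estimates[a])

    # Ground-truth feedback: the world, not a human, scores the attempt.
    reward = 1.0 if random.random() < TRUE_PAYOFFS[arm] else 0.0

    # Learn from the outcome (incremental mean update).
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print("learned estimates:", [round(e, 2) for e in estimates])
```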


This is how we’ve *already* achieved superintelligence in limited realms, like games (AlphaGo, AlphaZero) and protein folding (AlphaFold).

Without these ingredients, AI remains a reflection of human knowledge, never transcending our limited models of reality.

Full post (no paywall): https://bturtel.substack.com/p/human-all-too-human