The medieval lord doesn’t get to see New York. He’s asking about the things he knows well: troops, castles, woodland, farmland. Towns and cities are small and less significant, remember? Every society he knows is agrarian! He doesn’t get to see what we want to show him; he’s asking us questions and we’re answering, wishing we could say ‘yes, but you should be asking about our arsenal of nuclear submarines, each carrying 12 missiles with 8 warheads, that can incinerate an entire army anywhere in the world within 30 minutes’.
We’re looking at stars, the things we know well. But stars, black holes, planets and dust make up only about 5% of the universe. The entire visible universe is neither huge nor especially energetic by comparison; it is dwarfed by the dark stuff we don’t understand.
The entire universe is being shaped by two mysterious things we cannot identify: dark matter binding galaxies together and dark energy tearing space apart. We are staring at something enormously powerful. If we can’t find life in the 5% we understand well, it is very likely somewhere in the other 95%.
Surely Transformer-based architectures are not what superintelligences will be running on. Transformers have many limitations. The context window, for one: can it be made large enough for what a superintelligence would need? What about learning and self-improvement after training? Scaling and improving Transformers might be a path to superintelligence, but it seems like a very inefficient route.
We’ve demonstrated that roughly human-level intelligence can, in many ways, be achieved by the Transformer architecture. But what if there’s something far better than Transformers, just as Transformers were superior to what we were using before? We shouldn’t rule out someone publishing a landmark paper with a better architecture. The last landmark paper, ‘Attention Is All You Need’, came out in 2017!
And there might well be discontinuities in performance. Pre-Stable Diffusion AI art was pretty awful, especially the faces. It went from awful to artful in a matter of months, not years.