I remember thinking at the time that AlphaGo was a big update about the simplicity of human intelligence.
The thing that spooks me about this is not so much the simplicity of the architecture as the fact that, for example, Leela Zero plays superhuman Go with only ~50M parameters. Put that in the context of modern LLMs with 300B parameters: the real distinction is the training data. With sufficiently clever synthetic data generation (which might be nothing more than some RL setup), “non-giant training runs” might well suffice for general superintelligence, rendering any governance effort that is not AGI-assisted futile.
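To make the “synthetic data from an RL setup” point concrete, here is a deliberately toy sketch of my own (not AlphaZero or Leela Zero code, and all names hypothetical): a self-play loop over tic-tac-toe that manufactures labeled training pairs from nothing but the game rules, the same basic trick that lets a self-play Go engine bootstrap past any human dataset.

```python
# Toy illustration only: self-play on tic-tac-toe generating synthetic
# (position, outcome) training pairs from nothing but the game rules.
# The policy here is random; in an AlphaZero-style setup it would be the
# network being trained, so the data keeps improving with the model.
import random

def winner(b):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def self_play_game():
    """Play one random-policy game; return (position, result) pairs."""
    board, player, history = [" "] * 9, "X", []
    while True:
        moves = [i for i, c in enumerate(board) if c == " "]
        if not moves:
            return [(pos, 0.0) for pos in history]        # draw
        board[random.choice(moves)] = player
        history.append("".join(board))
        w = winner(board)
        if w:
            # label every position with the final outcome from X's perspective
            return [(pos, 1.0 if w == "X" else -1.0) for pos in history]
        player = "O" if player == "X" else "X"

# A few thousand games already yield a sizeable synthetic dataset,
# generated without a single human-played game.
dataset = [pair for _ in range(1000) for pair in self_play_game()]
print(len(dataset), "training examples generated")
```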