But classical chess computers like Deep Blue rely heavily on raw compute, yet this brute-force approach is fairly useless in domains with a large search space, like Go. I think the number of parameters in a neural network would be a better approximation.
Then we can make the scaling hypothesis even more precise: presumably it says that scaling is enough for (more) intelligence_1, but not for generality.
Intelligence_1 is probably well-approximated by log(compute).
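To make that concrete, here is a minimal toy sketch of the log(compute) model; the function name `intelligence_1` and the scaling constant `k` are purely illustrative assumptions, not anything from the original claim. The point it shows is that, under this model, each doubling of compute buys only a fixed additive gain:

```python
import math

def intelligence_1(compute, k=1.0):
    """Toy model: intelligence_1 grows logarithmically with compute.
    k is a hypothetical scaling constant, chosen for illustration."""
    return k * math.log(compute)

# Each doubling of compute adds the same constant increment (k * ln 2):
for flops in [1e18, 2e18, 4e18, 8e18]:
    print(f"{flops:.0e} FLOPs -> intelligence_1 = {intelligence_1(flops):.2f}")
```

On this reading, the steep diminishing returns of brute force fit the Deep Blue point above: piling on compute alone moves intelligence_1 only logarithmically.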