It’s not clear to me that a human, using their brain and a go board for reasoning, could beat AlphaZero even if you give them infinite time.
I agree, but I dispute that this example is relevant. I don’t think any step between “start walking on two legs” and “build a spaceship” requires as much strictly-type-A reasoning as beating AlphaZero at go or chess. This particular capability class doesn’t seem very relevant to me.
Also, to the extent that it is relevant, a smart human with infinite time could outperform AlphaZero by programming a better chess/go computer. Which may sound silly, but I actually think it’s a perfectly reasonable reply: using narrow AI to assist with brute-force cognitive tasks is something humans are allowed to do. And it’s something LLMs are also allowed to do; if they reach superhuman performance on general reasoning, and part of how they do this is by writing Python scripts for modular subproblems (see the toy sketch below), then we wouldn’t say that this doesn’t count.
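
To make the “scripts for modular subproblems” point concrete, here’s a minimal toy sketch (the game and the code are my own illustration, nothing specific to AlphaZero or any particular LLM): a few lines of Python exhaustively solve single-heap Nim by game-tree search. This is exactly the kind of modular brute-force subproblem a human, or an LLM, could hand off to a script rather than grind through step by step in their head:

```python
# Toy illustration: delegate a brute-force game-tree search to a script.
# Game (assumed for the example): single-heap Nim, take 1-3 stones per
# move, whoever takes the last stone wins.
from functools import lru_cache

@lru_cache(maxsize=None)
def current_player_wins(stones: int) -> bool:
    """Exhaustive search: can the player to move force a win
    from a heap of `stones`?"""
    # A move takes 1-3 stones; taking the last stone wins outright,
    # otherwise we win iff some move leaves the opponent in a losing spot.
    return any(
        take == stones or not current_player_wins(stones - take)
        for take in range(1, min(3, stones) + 1)
    )

if __name__ == "__main__":
    # Brute force recovers the standard result (losing positions are
    # multiples of 4) without having to reason it out by hand.
    losing = [n for n in range(1, 30) if not current_player_wins(n)]
    print(losing)  # [4, 8, 12, 16, 20, 24, 28]
```

The point isn’t that Nim is hard; it’s that offloading the search to a script and then reasoning over the output is a legitimate cognitive move, for humans and LLMs alike.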