I said that playing blindfolded chess at 1s/move is "extraordinarily hard"; I agree that might be an overstatement and "extremely hard" might be more accurate. I also agree that humans don't need "external" tools; I feel like the whole comparison will come down to arbitrary calls, like whether a human explicitly visualizing something or repeating a sound to themself is akin to an LM modifying its prompt, or whether our verbal loop is "internal" whereas an LM prompt is "external" and therefore shows that the AI is missing the special sauce.
Incidentally, I would guess that a 100B model trained on 100B chess games would learn to make only valid moves with accuracy similar to a trained human. But this wouldn't affect my views about AI timelines.