Ok, sounds like you're using "not too much data/time" in a different sense than I was thinking of; I suspect we don't disagree. My current guess is that some humans could beat GPT-1 with ten hours of practice, but that GPT-2 or larger would be extremely difficult, and plausibly impossible, with any amount of practice.
The human brain internally is performing very similar computations to transformer LLMs—as expected from all the prior research indicating strong similarity between DL vision features and primate vision—but that doesn’t mean we can immediately extract those outputs and apply them towards game performance.