Agreed, and interested in @Noosphere89 elaborating on why you have the opposite intuition.
Basically, it comes down to the fundamental issue of the Von Neumann bottleneck: there is a massive imbalance between memory and computation. LLMs and human brains differ a lot in their algorithms, but another, non-algorithmic difference is that the human brain has far more memory than pretty much any GPT, and indeed than basically any AI that exists today.
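As a rough back-of-envelope sketch of the size gap (the numbers below are my own illustrative assumptions, not figures from the thread): taking the commonly cited ~10^14 synapses and even a single byte of state per synapse gives on the order of 100 TB, versus a few hundred GB of parameters for a GPT-3-scale model.

```python
# Rough, illustrative back-of-envelope comparison (assumed numbers, not
# established figures): synaptic "memory" of a human brain vs. the
# parameter memory of a GPT-3-scale model.

SYNAPSES_IN_BRAIN = 1e14   # commonly cited order-of-magnitude estimate
BYTES_PER_SYNAPSE = 1      # very conservative: 1 byte of state per synapse

GPT3_PARAMS = 175e9        # GPT-3-scale parameter count
BYTES_PER_PARAM = 2        # fp16/bf16 storage

brain_bytes = SYNAPSES_IN_BRAIN * BYTES_PER_SYNAPSE
gpt_bytes = GPT3_PARAMS * BYTES_PER_PARAM

print(f"Brain (assumed):    {brain_bytes / 1e12:.0f} TB")
print(f"GPT-3-scale model:  {gpt_bytes / 1e9:.0f} GB")
print(f"Ratio:              ~{brain_bytes / gpt_bytes:.0f}x")
```

Even under these conservative assumptions the brain comes out a couple of orders of magnitude ahead; more generous estimates of per-synapse state widen the gap further.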
Besides, more memory is good anyway.
And that causes problems when you try to simulate an entire brain at high speed: in particular, the processors end up waiting most of the time, because the data keeps having to shuffle back and forth between memory and compute.
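To make the bandwidth side of the bottleneck concrete with a toy calculation (again, my own illustrative assumptions): if real-time simulation means touching every synapse's state some tens or hundreds of times per second, the raw memory traffic alone exceeds what a single modern accelerator's memory system can deliver by orders of magnitude.

```python
# Toy illustration of the memory-bandwidth side of the Von Neumann
# bottleneck (assumed numbers for illustration only).

SYNAPSES = 1e14            # assumed synapse count
BYTES_PER_SYNAPSE = 1      # assumed state per synapse
UPDATES_PER_SECOND = 100   # assumed touch rate for real-time simulation

# Bytes that must move between memory and compute every second.
traffic_per_second = SYNAPSES * BYTES_PER_SYNAPSE * UPDATES_PER_SECOND

GPU_BANDWIDTH = 3e12       # ~3 TB/s, roughly the HBM bandwidth of a top GPU

print(f"Required memory traffic: {traffic_per_second / 1e12:.0f} TB/s")
print(f"Single-GPU bandwidth:    {GPU_BANDWIDTH / 1e12:.0f} TB/s")
print(f"Shortfall factor:        ~{traffic_per_second / GPU_BANDWIDTH:.0f}x")
```

On these assumptions the shortfall is in the thousands, which is why the compute sits idle waiting on memory rather than being the limiting factor itself.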