Yes, I agree this is an important crux. I'm not sure which way I lean here. On the one hand, most specific things we can discover about human thinking are highly parallel. On the other hand, it seems very plausible that there are some complicated sequential things going on in the brain that don't return partial outputs, which are in the same reference class as parity; if this is the case, then insofar as an LLM is reconstructing human brain function, it would need Bayesian-suboptimal "training wheels" to capture these processes.
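To make the parity analogy concrete, here's a toy sketch (my own illustration, with made-up parameters) of what "doesn't return partial outputs" means: the parity of any proper prefix carries zero information about the parity of the full string, since flipping any single remaining bit flips the answer. So partial sequential progress can't be cashed out as a partial answer.

```python
import random

def parity(bits):
    # Sequential fold: the state at each step is the parity of the prefix so far.
    acc = 0
    for b in bits:
        acc ^= b
    return acc

# Prefix parity agrees with full-string parity only at chance level,
# because full parity = prefix parity XOR suffix parity, and the suffix
# parity of random bits is a fair coin.
random.seed(0)
n, trials = 64, 10_000
agree = sum(
    parity(bits[: n // 2]) == parity(bits)
    for bits in ([random.randint(0, 1) for _ in range(n)] for _ in range(trials))
)
print(f"prefix parity matches full parity {agree / trials:.1%} of the time")  # ~50%
```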