Mm, I concede that this might not have been the most accurate title. I might’ve let the desire for hot-take clickbait titles get the better of me some. But I still mostly stand by it.
My core point is something like “the algorithms that the current SOTA AIs execute during their forward passes do not necessarily capture all the core dynamics that would happen within an AGI’s cognition, so extrapolating the limitations of their cognition to AGI is a bold claim we have little evidence for”.
I agree that the current training setups shed some light on how, e.g., optimization pressures / reinforcement schedules / SGD biases work, and I even think shard theory totally applies to general intelligences like AGIs and humans. I just think that theory is AGI-incomplete.