I (not an expert) suspect there’s a few key ways the human/AI difference is likely to be pretty large.
Generation time. Training an expert human from scratch takes 20-30 years and no one is in a position to curate more than a small fraction of the training data and environment. AI could copy itself in potentially seconds to minutes. It could train up a new model for a specific purpose much faster, too, if it can’t just paste new capabilities into itself directly. And there should be no equivalent of capabilities that can’t be taught, only learned (the kinds of things humans need years-long apprenticeships to even have a chance to acquire imperfectly).
Even with “huge bags of matrices doing no one knows quite what” there are still things we can learn, with the tools we have today, from e.g. analyzing weights. We can also (compared to a human mind) test more precisely how such AIs “would” act in different circumstances by just giving a prompt, even if we can’t reliably predict behavior in advance. Even assuming this doesn’t hold for future architectures and higher capability levels, and AI shouldn’t have a human’s problems predicting it’s own future behavior.
Breadth. being effectively a high-level expert in every field of knowledge at once should allow a massively higher level of ability to extract and propagate insight from new data. There should be no equivalent of it taking decades to generations for results to filter from researchers to other fields of academia to public policy to common knowledge among the public. With enough compute, there should be no equivalent of a result languishing for years before anyone recognizes it’s relevance to a new problem.
Precise introspective and retrospective data. This is more about sensors and robotics than AI in some ways, but it would be much easier for me to learn new physical skills if I could see exactly what I was doing each time I attempted it, and measure exactly what the result was, and intuitively do statistics on the data, and similarly precisely control future behavior. I do think this also applies to any other kind of data feedback, too. If I had access to a precise playback of my entire life (or even just the text of all my conversations and summary of my actions) at all times, I’d be better at a lot of things.
I (not an expert) suspect there’s a few key ways the human/AI difference is likely to be pretty large.
Generation time. Training an expert human from scratch takes 20-30 years and no one is in a position to curate more than a small fraction of the training data and environment. AI could copy itself in potentially seconds to minutes. It could train up a new model for a specific purpose much faster, too, if it can’t just paste new capabilities into itself directly. And there should be no equivalent of capabilities that can’t be taught, only learned (the kinds of things humans need years-long apprenticeships to even have a chance to acquire imperfectly).
Even with “huge bags of matrices doing no one knows quite what” there are still things we can learn, with the tools we have today, from e.g. analyzing weights. We can also (compared to a human mind) test more precisely how such AIs “would” act in different circumstances by just giving a prompt, even if we can’t reliably predict behavior in advance. Even assuming this doesn’t hold for future architectures and higher capability levels, and AI shouldn’t have a human’s problems predicting it’s own future behavior.
Breadth. being effectively a high-level expert in every field of knowledge at once should allow a massively higher level of ability to extract and propagate insight from new data. There should be no equivalent of it taking decades to generations for results to filter from researchers to other fields of academia to public policy to common knowledge among the public. With enough compute, there should be no equivalent of a result languishing for years before anyone recognizes it’s relevance to a new problem.
Precise introspective and retrospective data. This is more about sensors and robotics than AI in some ways, but it would be much easier for me to learn new physical skills if I could see exactly what I was doing each time I attempted it, and measure exactly what the result was, and intuitively do statistics on the data, and similarly precisely control future behavior. I do think this also applies to any other kind of data feedback, too. If I had access to a precise playback of my entire life (or even just the text of all my conversations and summary of my actions) at all times, I’d be better at a lot of things.