It has no concept that by “+” you mean addition and not something else.
If a lot of the training examples say “2+2=7”, then of course the AI will not think that “+” means addition, because in that corpus it doesn’t. If, however, people use “+” to mean addition, GPT-3 is already capable enough to learn the concept and use it to add numbers that aren’t in its training corpus.
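To make that testable, here is a minimal sketch of how you could probe the claim, assuming a hypothetical `generate(prompt)` function that stands in for whatever language-model completion API you have; it is not the actual GPT-3 client. The idea is to ask for sums of random four-digit numbers, which are unlikely to appear verbatim in the corpus:

```python
import random

def generate(prompt: str) -> str:
    """Stand-in for a language-model completion call; plug in a real API."""
    raise NotImplementedError("replace with your model of choice")

def probe_addition(n_trials: int = 10) -> float:
    """Return the fraction of (probably unseen) additions answered correctly."""
    correct = 0
    for _ in range(n_trials):
        a, b = random.randint(1000, 9999), random.randint(1000, 9999)
        # Few-shot prompt establishing that "+" here means addition.
        prompt = (
            "12+7=19\n"
            "305+44=349\n"
            f"{a}+{b}="
        )
        answer = generate(prompt).strip().split()[0]
        if answer == str(a + b):
            correct += 1
    return correct / n_trials
```

If the model gets these right at a decent rate, it has learned something more general than lookup of memorized strings.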
To have human-level cognition you need the ability to combine multiple concepts in new ways. Knowing more mathematical concepts than the average mathematician might lead to better performance, given that mathematical proofs are largely about knowing which concepts a given proof requires.
I also think that for GPT-X to reach AGI-hood it will need an attention field large enough to use part of it as memory, which would let it reason in additional ways.
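One way to make the “attention field as memory” idea concrete is to feed the model’s own intermediate output back into its context, so earlier steps act as working memory for later ones. A minimal sketch, reusing the hypothetical `generate` stub from above:

```python
def solve_with_scratchpad(question: str, max_steps: int = 5) -> str:
    """Iteratively extend the prompt with the model's own intermediate
    steps, so the attention window doubles as working memory."""
    context = f"Question: {question}\nLet's work step by step.\n"
    for _ in range(max_steps):
        step = generate(context)   # model writes one reasoning step
        context += step + "\n"     # that step is now retrievable memory
        if "Answer:" in step:      # stop once it commits to an answer
            break
    return context
```

Everything the model “remembers” here lives inside the prompt, which is why the size of the attention field caps how much of this kind of reasoning it can do.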
If, however, people use “+” to mean addition, GPT-3 is already capable enough to learn the concept and use it to add numbers that aren’t in its training corpus.
Yes, but it will still be about as good as its training corpus.
One way of looking at this is that GPT-X is trying to produce text that looks just like human-written text. Given two passages of text, there should be no easy way to tell which was written by a human and which wasn’t.
GPT-X has expertise in all subjects, in a sense. Each time it produces text, it is sampling from the distribution of human competence. Detailed information about anteaters is in there somewhere; every now and again it will sample an expert on them, but most of the time it will act like a person who doesn’t know much about anteaters.
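The “sampling from the distribution of human competence” picture can be made concrete with a toy mixture model. The personas and weights below are invented for illustration, not measured from any corpus:

```python
import random

# Toy mixture model of who writes anteater text in a large corpus.
# These shares are made up to illustrate the argument.
AUTHOR_POOL = {
    "zoologist": 0.01,   # detailed, accurate anteater facts
    "hobbyist":  0.09,   # partially accurate
    "layperson": 0.90,   # vague or wrong
}

def sample_author() -> str:
    """Draw an author persona in proportion to its (invented) corpus share."""
    personas = list(AUTHOR_POOL)
    weights = list(AUTHOR_POOL.values())
    return random.choices(personas, weights=weights)[0]

# A faithful imitator of the corpus only rarely sounds like an expert:
draws = [sample_author() for _ in range(10_000)]
print(draws.count("zoologist") / len(draws))   # roughly 0.01
```

A model trained to be indistinguishable from this corpus will, on any one sample, mostly behave like the dominant component of the mixture, even though the expert knowledge is in there.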