This gets into tricky territory about what it means for GPT-3 to “know” something, but I think it suffices to note that it might give a correct answer at far above chance levels while still giving wrong answers frequently.
Yup. Information theoretically, you might think:
if it outputs general relativity’s explanation with probability .1, and Newtonian reasoning with .9, it has elevated the right hypothesis to the point that it only needs a few more bits of evidence to “become quite confident” of the real answer.
But then, what do you say if it’s .1 GR, .2 Newtonian, and then .7 total-non-sequitur? Does it “understand” gravity? Seems like our fuzzy “knowing-something” concept breaks down here.
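For concreteness, here is a back-of-the-envelope sketch of the log-odds arithmetic behind "a few more bits." The 0.9 confidence target and the million-hypothesis comparison are illustrative assumptions, not anything from GPT-3 itself:

```python
import math


def bits_to_confidence(prior: float, target: float = 0.9) -> float:
    """Bits of log-odds evidence needed to move a hypothesis from `prior` to `target`."""
    def log_odds(p: float) -> float:
        return math.log2(p / (1 - p))

    return log_odds(target) - log_odds(prior)


# GR already assigned p = 0.1: roughly 6.3 bits of evidence away from 90% confidence.
print(round(bits_to_confidence(0.1), 1))        # 6.3

# Contrast: GR buried in a uniform prior over ~a million hypotheses needs ~23 bits.
print(round(bits_to_confidence(1 / 2**20), 1))  # 23.2

# In the .1 GR / .2 Newtonian / .7 non-sequitur case, GR's log-odds are the same
# (p = 0.1) even though most of the mass is noise, so the bit count alone doesn't
# settle whether the model "understands" gravity.
```

On this accounting, "a few more bits" is literally about six, versus twenty-plus if GR hadn't been elevated above the crowd of hypotheses at all.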