Well, if you go by that, then you can't ever be convinced of an AI's sentience, since all its responses may have been hardcoded. (And I wouldn't deny that this is a feasible stance.) But it's a moot point anyway, since what I'm saying is that LaMDA's responses do not look like sentience.
It's not impossible to peek at the code... it's just that Turing-style tests are limited precisely because they don't, and are therefore not the highest standard of evidence, i.e. necessary truth.