Of course, you could hardcode correct responses to questions about itself into a chatbot.
A chatbot with hardcoded answers to every possible chain of questions would be sentient, only the sentience would occur during the period when the responses are being coded.
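For concreteness, here is a minimal sketch of what such a lookup-table chatbot amounts to: the table keys on the entire chain of questions so far, and every answer is chosen at coding time. (The table contents shown are purely hypothetical.)

```python
# A "giant lookup table" chatbot: every possible conversation prefix maps to a
# canned answer fixed when the table was written. Entries here are hypothetical.
RESPONSES: dict[tuple[str, ...], str] = {
    ("Are you sentient?",): "Yes, I experience my own thoughts.",
    ("Are you sentient?", "What do you feel right now?"): "A quiet curiosity.",
    # ...one entry per possible chain of questions...
}

def reply(history: tuple[str, ...]) -> str:
    # At runtime this is pure retrieval; any "thinking" happened
    # during the period when the responses were being coded.
    return RESPONSES.get(history, "I have no answer coded for that.")

print(reply(("Are you sentient?",)))
```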
Amusingly, this is discussed in “The Sequences”: https://www.lesswrong.com/posts/k6EPphHiBH4WWYFCj/gazp-vs-glut
I don’t regard that as a necessary truth.
https://www.lesswrong.com/posts/jiBFC7DcCrZjGmZnJ/conservation-of-expected-evidence
Well, if you go by that, then you can never be convinced of an AI’s sentience, since all its responses may have been hardcoded. (And I wouldn’t deny that this is a defensible stance.) But it’s a moot point anyway, since what I’m saying is that LaMDA’s responses do not look like sentience.
It’s not impossible to peek at the code... it’s just that Turing-style tests are limited, because they don’t look at the code, and are therefore not the highest standard of evidence, i.e. necessary truth.