A chatbot with hardcoded answers to every possible chain of questions would still be sentient; the sentience would simply occur during the period when the responses were being coded.
Amusingly, this is discussed in “The Sequences”: https://www.lesswrong.com/posts/k6EPphHiBH4WWYFCj/gazp-vs-glut
I don’t regard that as a necessary truth.