So it would be really wonderful if we could just go to the philosophy literature and find a recipe for how an AI needs to behave if it’s to learn human referents from human references. Or at least it might have some important insights that would help with developing such a recipe.
I found Brian Cantwell Smith’s On the Origin of Objects to be quite insightful on reference. While it isn’t a formal recipe for giving AIs the ability to reference, I think its insights will be relevant to such a recipe if one is ever created.
(Some examples of insights from the book: all reference is indexical, in a way similar to how physical forces such as magnetism are indexical; apparently non-indexical references are formed from indexical references through a stabilization process similar to image stabilization; abstractions are meaningful and effective through translations between them and direct, high-fidelity engagement.)
Thanks for the recommendation, I’ll check it out. From the library.
EDIT: Aw, it’s checked out.