Referencing the Unreferencable
Jessicata wrote a post on The Absurdity of Un-referencable Entities. The comments there give many good reasons against her claim that unreferencable entities don’t make sense. Nonetheless, I think it is worthwhile to consider this issue in more detail as there are many philosophical problems in which seemingly unreferencable entities arise.
We will begin by making two key points about references. Once we understand them, we will have dissolved the paradox.
Firstly, “referencable” is relative to the agent. If the universe consists of two non-interacting boxes, then agents in the first box can reference objects in the first box and agents in the second box can reference objects in the second, but neither can reference objects in the other. Further, if an agent is in a simulation with no access to the outside, that agent can only reference objects in the simulation, while someone outside can reference objects in both the external world and the simulation.
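To make this relativity concrete, here is a minimal Python sketch (the names `Box`, `Obj` and `can_reference` are my own illustration, not anything from the original discussion): reference is modelled as a relation that holds only when the agent and the object share a causally connected region.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Obj:
    """An object that might be referenced."""
    name: str

@dataclass
class Box:
    """A causally closed region: agents inside can only interact with its contents."""
    contents: frozenset

def can_reference(agent_box: Box, target: Obj) -> bool:
    # Reference is agent-relative: an agent can directly reference an object
    # only if some causal channel connects them -- here, sharing a box.
    return target in agent_box.contents

box1 = Box(frozenset({Obj("chair"), Obj("agent_a")}))
box2 = Box(frozenset({Obj("dog"), Obj("agent_b")}))

print(can_reference(box1, Obj("chair")))  # True: same box
print(can_reference(box1, Obj("dog")))    # False: the boxes never interact
```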
Secondly, even when given an agent as a reference frame, we still need further clarification. Here’s a list of things we might want to reference:
- Phenomenal content, such as an image of a chair
- Objects, such as the chair we presume to be behind the image
- Properties, like red or round
- Universals, like the idea of a chair
- Abstract concepts, like the number 2
Each of these kinds of entities is referred to in a slightly different manner. Precisely defining how each of these works would be a very involved task, so I’ll just sketch it out quickly. The purpose of the following paragraph is only to show how we could go about doing this:
For phenomenal content, we have the ability to recall what we experienced and interface it with the rest of the brain. We typically reference objects by experiencing them visually, through our other senses, or indirectly via scientific instruments, and saying “Whatever is creating that experience”. For primitive properties, a person will normally understand the meaning if they are told that a bunch of objects are in the class and a bunch of objects aren’t. For non-primitive properties, we can use the same method, or we can define them in terms of other properties.
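As a toy illustration of the ostensive route for primitive properties (entirely my own sketch, under the big simplification that objects are just sets of features): the learned property is whatever the positive examples share and the negative examples lack.

```python
# Toy sketch of ostensive definition: a property is taught by example,
# not by formula. Objects are modelled as sets of features (a simplification).

def learn_property(positives: list[set], negatives: list[set]) -> set:
    """The features shared by every positive example, minus any feature
    that also occurs in a negative example."""
    shared = set.intersection(*positives)
    for neg in negatives:
        shared -= neg
    return shared

def has_property(obj: set, prop: set) -> bool:
    return prop <= obj  # the object exhibits every defining feature

red = learn_property(
    positives=[{"red", "round"}, {"red", "square"}],
    negatives=[{"blue", "round"}],
)                                            # -> {"red"}
print(has_property({"red", "heavy"}, red))   # True
print(has_property({"blue", "round"}, red))  # False
```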
Again, the specifics aren’t important. The point is only to illustrate that even though these all look alike on the surface, the actual manner in which these entities are triangulated varies greatly.
Some people would say that we can’t reference some of these entities, as merely being able to say the words doesn’t mean we can pin down the object well enough to truly say we are referencing it. Fair enough, but where this line is drawn is ultimately a matter of linguistic convention rather than metaphysics.
The Unreferencable:
We’ve explained above that some agents can reference objects that are unreferencable to other agents. However, even if an agent can’t reference entity X, it can still construct a model of a world containing itself and an entity X’ that stands in for X. It can even imagine an agent Y that can reference X’. How is this possible? Doesn’t providing a stand-in for X involve referencing the object we said was unreferencable?
Well, as we just noted, there are different senses in which we can reference an object. Let’s make this concrete: imagine a programmer and a manager observing a perfectly closed system containing an AI human and an AI dog. The AI human can reference the AI dog by “seeing” it and presuming there is an entity behind what it is seeing. It can’t reference the external agents in the same way. However, it could construct an internal model containing a programmer and an AI system, which the manager could interpret as a reference to the programmer. The AI human can’t include a full copy of itself inside the model, but it can include a general description of itself which will be recognisable to the manager.
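Here is a sketch of how the manager’s interpretation might work (the dictionaries and the `interpret` function are hypothetical, purely to show the structure): the inner agent builds stand-ins, and only the outside observer holds the mapping from stand-ins to real entities.

```python
# Hypothetical sketch: reference by analogy. The AI can't point at the
# programmer directly; it can only place a stand-in in its own world-model.

ai_internal_model = {
    "self": "an AI that perceives a dog",     # coarse self-description
    "creator": "whatever built this world",   # the stand-in X' for X
}

# The manager can see both the closed system and the programmer, so the
# mapping from stand-in to real referent lives with the manager, not the AI.
managers_reading = {
    "self": "the AI human",
    "creator": "the programmer",
}

def interpret(role: str) -> str:
    """Resolve one of the AI's stand-ins to the entity the manager takes it
    to denote. Only an outside observer can perform this step."""
    return managers_reading[role]

print(interpret("creator"))  # "the programmer"
```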
Referencing an entity through perception and referencing it by positing a reflective model are entirely different types of reference. Many people would prefer that we limit the term “reference” to the former, which is perfectly fine so long as they allow there to be a word for the second, non-reference kind of reference. Personally, I prefer to just call this Reference by Analogy.
One potential objection is that such references could fail to resolve, such as when there are two programmers, making the reference underspecified. But that’s hardly unique to this situation: what if there isn’t actually a chair behind the chair image, and it is all in the imagination?
I want to finish by reiterating the point of this article in as simple a manner as possible: An entity can be unreferencable in one sense, but referencable in another. In fact, we should probably just taboo this word the majority of the time.