but there is nothing else physical that we could plausibly say it means.
Why do you say this? Flesh out the definition of ‘wrong’ and you’re done. ‘Wrong’ refers to arrangements of matter and their consequences. It doesn’t attempt to refer to intrinsic properties of objects that exist apart from their physicality. If (cognitive object X) is (attribute Y) this just means that (arrangements of matter that correspond to what I give the label X) have (physical properties that I group together into the heading Y). It doesn’t matter if you’re saying “freedom is good” or “murder is wrong” or “that sign is red”. ‘Freedom’ refers to arrangements of matter and physical laws governing them. ‘Good’ refers to local physical descriptions of the ways that things can yield fortunate outcomes, where ‘fortunate outcomes’ can itself be chased down to its physical meaning, and so on.
“X is wrong” unpacks to statements about the time evolution of physical systems. You can’t simply say
there is nothing else physical that we could plausibly say it means.
Have you gone and checked every possible physical thing? Have you done experiments showing that making correspondences between cognitive objects and physical arrangements of matter somehow “fails” to capture their “meaning”?
This seems to me to be one of those times where you need to ask yourself: is it really the case that cognitive objects are not just linguistic devices for labeling arrangements of matter and laws governing the matter... or do I just think that’s the case?
Have you gone and checked every possible physical thing?
Your whole argument rests on this, since you have not provided a counterexample to my claim. You’ve just repeated the fact that there is some physical referent, over and over.
This is not how burden of proof works! It would be simply impossible for me to check every possible physical thing. Is it, therefore, impossible for you to be convinced that I am right?
I expect better from LessWrong posters.
Is it, therefore, impossible for you to be convinced that I am right?
This is what it means for a claim to fail falsifiability. It’s easy to generate claims whose proof would only be constituted by fact-checking against every physical thing. This is a far cry from a decision-theoretic claim where, though we can’t have perfect evidence, we can make useful quantifications of the evidence we do have and our uncertainty about it.
The empty set has many interesting properties.
It’s impossible to quantify your claim without having all of the evidence up front.
You’ve just repeated the fact that there is some physical referent, over and over.
What I’m trying to say is that I can test the hypothesis of whether or not there is a physical referent. If someone says to me, “Is there or isn’t there a physical referent?” and I have to respond, then I have to do so on the strength of evidence alone. I may not be able to provide a referent explicitly, but I know that non-zero probability can be assigned to a physical system in which cognitive objects are placeholders for complicated arrangements of matter and the physical laws governing them. I cannot make the same claim about the hypothesis that cognitive objects do not have utterly physical referents, and therefore, whether or not I have explicit examples of referents, the hypothesis that there must be underlying physical referents wins hands down.
The criticism you’re making of me, that I insist there are referents without supplying the actual referents, is physically backwards in this case. For example, someone might say “consciousness is a process that does not correspond to any physically existing thing.” If I then reply,
“But consciousness is a property of material and varies directly with changes in that material (or some similar, more detailed argument about cognition), and therefore, I can assign non-zero probability to its being a physical computation, and since I do not have the capacity to assign probabilities to non-physical entities, the hypothesis that consciousness is physical wins.”
this is a convincing argument, up to the quantification of the evidence. If you personally don’t feel like it’s convincing, that’s fine, but then you’re outside of decision theory and the claim you’re making contains literally no semantic information.
The same can be said of the referent of a belief. I think you’re failing to appreciate that you’re making the very mistake you’re claiming that I am making. You’re just asserting that beliefs can’t plausibly correspond to physically existing things. That’s just an assertion. It might be a good assertion or might not even be a coherent assertion. In order to check, I am going to go draft up some sort of probability model that relates that claim to what I know about thoughts and beliefs. Oh, snap, when I do that, I run into the unfortunate wall that if beliefs don’t have physical referents, then talking about their referents at all suddenly has no physical meaning. Therefore, I will stick with my hypothesis that they are physical, pending explicit evidence that they aren’t.
The convincingness of the argument lies in the fact that one side of this can be made quantitative and experimentally relevant and the other cannot. The burden of proof, as I view it in this situation, is on making a non-zero probability connection between beliefs and some type of referent. I don’t see anything in your argument that prevents this connection being made to physical things. I do, however, fail to see any part of your argument that makes this probabilistic connection with non-physical referents.
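To make the asymmetry concrete, here is a minimal sketch, in Python, of what I mean; every number in it is an assumption I invented for illustration, not a measurement. A hypothesis earns a posterior only by supplying a likelihood for observations, and the non-physical-referent hypothesis supplies none.

```python
# Toy Bayesian update. All numbers are illustrative assumptions, not data.

def posterior(prior: float, likelihood: float, p_evidence: float) -> float:
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / p_evidence

# H1: cognitive objects label arrangements of matter. H1 predicts, e.g.,
# that altering brain matter alters the corresponding cognitive objects,
# so it assigns a likelihood to that observation and can be updated.
p_h1 = 0.5            # assumed prior
likelihood_h1 = 0.9   # assumed P(observed matter/cognition covariance | H1)
p_e = 0.6             # assumed marginal probability of the observation

print(posterior(p_h1, likelihood_h1, p_e))  # 0.75: the update goes through

# H2: cognitive objects have non-physical referents. H2 assigns no
# likelihood to any physical observation, so there is no number to plug
# in and the update cannot even be written down. That is the asymmetry.
```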
Maybe it’s better to think of it like an argument from non-cognitivism. You’re trying to make up a solution space for the problem (non-physical referents) that is incompatible with the whole system in which the problem takes place (physics). Until you make an explicit physical definition of what a “non-physical referent” actually is, I will not entertain it as a possible hypothesis.
Ultimately, even though your epistemology is more complicated, your argument might as well be: beliefs are pointers to magical unicorns outside of space and time and these magical unicorns are what determine human values. ‘Non-physical referents’ simply are not. I can’t assign a probability to the existence of something which is itself hypothesized to fail to exist, since existence and “being a physical part of reality” are one and the same thing.
It’s easy to generate claims whose proof would only be constituted by fact-checking against every physical thing
That’s the good kind of claim, the falsifiable kind, like the Law of Universal Gravitation. That’s the kind of claim I’m making.
It’s impossible to quantify your claim without having all of the evidence up front.
Your argument seems to depend on the idea that the only way to evaluate a claim is to list the physical universes in which it is true and the physical universes in which it is not true.
This, obviously, is circular.
Do you acknowledge that your reasoning is circular and defend it, presumably with Eliezer’s defense of circular reasoning? Or do you claim that it is not circular?
I cannot make the same claim about the hypothesis that cognitive objects do not have utterly physical referents
Sure you can. You take a world, find all the cognitive objects in it, then find all the corresponding physical referents, cross those objects off the list.
I am saying that there are beliefs (strings of symbols with meaning) endowed with meaning by their place in a functional mind, but for which the set of physical referents they correspond to is the empty set.
Surely you can admit the existence of strings of symbols without physical referents, like this one: “fj4892fjsoidfj390ds;j9d3”. There’s nothing non-physical about it.
The convincingness of the argument lies in the fact that one side of this can be made quantitative and experimentally relevant and the other cannot.
If “X” is quantitative and experimentally relevant, how could “not-X” be irrelevant? If X makes predictions, how could not-X not make the opposite predictions?
I do, however, fail to see any part of your argument that makes this probabilistic connection with non-physical referents.
Who said that all beliefs have referents?
Sure you can. You take a world, find all the cognitive objects in it
My claim is that if one had really done this, then by definition of “find”, they would have the physical referents for the cognitive objects. If a cognitive object has the empty set as its set of physical referents, then it is the null cognitive object. The string of symbols “fj4892fjsoidfj390ds;j9d3” might have no meaning to you when thinking in English, say, but that just means it is an instantiation of the empty cognitive object, i.e., any string of symbols failing to point to a physical referent.
I’m trying to say that if the cognitive object is to be considered as pointing to something, that is, if it is in some sense not the null cognitive object, then the thing which is its referent is physical. It’s incoherent to say that a string of symbols refers to something that’s not physical. What do you mean by ‘refer’ in that setting? There is no existing thing to be referred to, hence the symbol performs no act of referring. So when someone speaks about “X” being right or wrong, either they are speaking about physical events or else “X” fails to be a cognitive object.
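If it helps, here is the picture I have in mind, as a toy data structure; the names are mine and purely illustrative, not a theory of mind. A cognitive object is a symbol together with the set of physical referents it points at, and the null cognitive object is precisely the one whose referent set is empty.

```python
# Toy model: a cognitive object = a symbol plus its set of physical
# referents. Structure and names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class CognitiveObject:
    symbol: str
    referents: frozenset = field(default_factory=frozenset)

    def is_null(self) -> bool:
        # The null cognitive object: an empty referent set.
        return not self.referents

murder = CognitiveObject(
    "murder",
    frozenset({"physical-event-1", "physical-event-2"}),  # stand-ins
)
gibberish = CognitiveObject("fj4892fjsoidfj390ds;j9d3")

print(murder.is_null())     # False: it points at physical events
print(gibberish.is_null())  # True: an instance of the null object
```

On this picture there is no third case: a referent set is either nonempty, with physical elements, or empty, making the object null.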
I claim that my reasoning is not circular.
Your argument seems to depend on the idea that the only way to evaluate a claim is to list the physical universes in which it is true and the physical universes in which it is not true.
It depends on what you mean by ‘evaluate’. The definition I’m taking for now is that if I want to assess whether proposition P is true, then I can only do so in a setting of decision theory and degrees of evidence and uncertainty. This means that I need a model for the proposition P and a way of assigning probabilities to the various hypotheses about P. In this case, P = “Some cognitive objects have referents that are not the null referent and are also not physical”. I claim that all referents are either physical or else they are the null referent. The set of non-physical referents is empty.
Just because a string fails to have a physical referent does not mean that it succeeds in having a non-physical one. What evidence do I have that there exist non-physical referents? What model of cognitive objects exists with which it is possible to obtain experimental evidence of a non-physical referent?
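Stated compactly, in notation of my own choosing (a formalization sketch, nothing more):

$$P:\;\; \exists\, r \in \mathrm{Referents}\;\big(r \notin \mathrm{Physical} \,\wedge\, r \neq \varnothing\big), \qquad \text{my claim:}\;\; \mathrm{Referents} \subseteq \mathrm{Physical} \cup \{\varnothing\}.$$

P asserts a nonempty third category of referents; my claim is that no model connects that category to any possible observation.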
I am saying that there are beliefs (strings of symbols with meaning) endowed with meaning by their place in a functional mind, but for which the set of physical referents they correspond to is the empty set.
What do you mean by ‘endowed meaning’? If a cognitive object has no physical referent, to me, that is the definition of meaningless. It fails to correspond to reality.
What do you mean by ‘endowed meaning’? If a cognitive object has no physical referent, to me, that is the definition of meaningless. It fails to correspond to reality.
This is the heart of the matter. You are saying that the only relevant properties of a cognitive object are its referents. Thus, no referents = no relevant properties = null object.
I say that, on the contrary, a cognitive object has other relevant properties. One such property is its place in the code of a brain.
Imagine an AI with a set of objects that it marks as either “true” or “false”. These objects have no referents, but they influence the formation of objects with referents. I think it’s fair to say that:
Such an AI could exist, with the objects having referents/no referents as appropriate.
These objects are not just the null object.
The AI is thinking irrationally.
Now imagine an AI with a set of objects that it marks as either “true” or “false”. These objects have no referents, but they influence the AI’s choices. I think it’s fair to say that:
Such an AI could exist, with the objects having referents/no referents as appropriate.
These objects are not just the null object.
The AI could be thinking rationally.
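Concretely, here is a toy version of the second agent (entirely my own construction, just to fix ideas): flags that point at nothing still steer the choice.

```python
# Toy agent whose internal "true"/"false" flags have no referents of
# their own but still influence its choices. Purely illustrative.

class ToyAgent:
    def __init__(self):
        # Objects the agent marks true/false; they point at nothing.
        self.flags = {"axiom_a": True, "axiom_b": False}

    def choose(self, options: list) -> str:
        # The referent-free flags bias which option is selected.
        index = sum(self.flags.values()) % len(options)
        return options[index]

agent = ToyAgent()
print(agent.choose(["cooperate", "defect"]))  # "defect": flags did the work
```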
I think that the meaning of “The AI could be thinking rationally” is that it could turn out to be the case that the objects labeled true and false have a correspondence to physically existing things, and that correspondence allows the AI to construct decision rules which correspond to reality within some computable range of uncertainty.
If we are unable to map the inputs to the AI’s decision process (in this case, objects labeled true or false whose referents, if any, are unknown to us at the start) back to physical reality, then it is still mysterious to us, and we can’t claim that it’s rational in any sense other than pure statistical experience (it could just be that, when asked to make a series of decisions using the true/false-labeled objects, the AI got incredibly lucky).
In order to conclude (in any more than a superficial way) that the AI is rational, there must be an explicit correspondence between the labeled objects and the physical world, and hence we would have found their referents. If this is, in principle, an impossible task (as you claim), then the concept of rationality doesn’t apply to the AI. In what sense would it be said to actually be rational, rather than just producing a sequence of outputs that appear rational to us for mysterious reasons?
I think that the meaning of “The AI could be thinking rationally” is that it could turn out to be the case that the objects labeled true and false are integral components of a program that provably calculates rational decisions every time.
A proof that a program calculates rational decisions every time necessarily provides the physical referents of its calculation. There’s no difference between knowing that a program calculates rational decisions every time and knowing how it is that it calculates rational decisions every time. If you don’t know the explicit correspondence between its calculations and reality then your state of knowledge cannot include the fact that the program always yields rational conclusions. You can have degrees of certainty that it is rational without having full knowledge of its referents, but not factual knowledge as in a mathematical proof.
It may be that a slick mathematical argument reduces the connection to symbols that don’t readily convey the physical connection, but
don’t tell me that knowledge is “subjective”. Knowledge has to be represented in a brain, and that makes it as physical as anything else. For M to physically represent an accurate picture of the state of Y, M’s physical state must correlate with the state of Y. You can take thermodynamic advantage of that—it’s called a Szilard engine.
Or as E.T. Jaynes put it, “The old adage ‘knowledge is power’ is a very cogent truth, both in human relations and in thermodynamics.”
And conversely, one subsystem cannot increase in mutual information with another subsystem, without (a) interacting with it and (b) doing thermodynamic work. Otherwise you could build a Maxwell’s Demon and violate the Second Law of Thermodynamics—which in turn would violate Liouville’s Theorem—which is prohibited in the standard model of physics.
Which is to say: To form accurate beliefs about something, you really do have to observe it. It’s a very physical, very real process: any rational mind does “work” in the thermodynamic sense, not just the sense of mental effort.
If your state of knowledge (brain chemistry) is updated to include special knowledge of the rationality of an agent, then there is entanglement between you and that agent, for that is what knowledge is. You can’t know that an agent is rational without knowing the physical connection between its cognitive objects and reality. To whatever degree you lack knowledge about the physical referents of its cognitive objects, that is the degree to which you lack knowledge about whether or not it is rational.
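That entanglement is an ordinary quantitative notion: it is the mutual information between your state M and the agent Y, computable from their joint distribution. A minimal sketch, with a toy joint distribution assumed purely for illustration:

```python
from math import log2

# Mutual information I(M;Y) over a toy joint distribution p(m, y).
# The numbers are illustrative assumptions, not measurements.
p = {
    ("m0", "y0"): 0.4, ("m0", "y1"): 0.1,
    ("m1", "y0"): 0.1, ("m1", "y1"): 0.4,
}

p_m, p_y = {}, {}
for (m, y), pr in p.items():
    p_m[m] = p_m.get(m, 0.0) + pr
    p_y[y] = p_y.get(y, 0.0) + pr

mi = sum(pr * log2(pr / (p_m[m] * p_y[y])) for (m, y), pr in p.items())
print(f"I(M;Y) = {mi:.3f} bits")  # about 0.278 bits of entanglement
```

Zero mutual information means zero knowledge of the agent’s rationality; and by the Szilard-engine point above, every bit of it has to be paid for with thermodynamic work.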