I don’t think something like ‘ought’ can intuitively point to something that has ontological ramifications.
I don’t believe in an ontology of morals, only an epistemology of them.
namely, some distribution over configurations of human minds.
Do you think that “The sign is red” means something different from “I believe the sign is red”? (In the technical sense of believe, not the pop sense.)
Do you think that “Murder is wrong” means something different from “I believe that murder is wrong.”?
The verb ‘believe’ goes without saying when making claims about the world. To assert that ‘the sign is red’ is true would not make sense if I did not believe it, by definition. I would either be lying or unaware of my own mental state. To me, your question borders more on opinions and their consequences.
Quoting from there: “But your beliefs are not about you; beliefs are about the world. Your beliefs should be your best available estimate of the way things are; anything else is a lie.”
What I’m trying to say is that the statement (Murder is wrong) implies the further slight linguistic variant (I believe murder is wrong) (modulo the possibility that someone is lying or mentally ill, etc.) The question then is whether (I believe murder is wrong) → (murder is wrong). Ultimately, from the perspective of the person making these claims, the answer is ‘yes’. It makes no sense for me to feel that my preferences are not universally and unequivocally true.
I don’t find this at odds with a situation where a notorious murderer who is caught, say Hannibal Lecter, can simultaneously choose his actions and say “murder is wrong”. Maybe the person is mentally insane. But even if they aren’t, they could simply choose a preference ordering such that the local wrongness of failing to gratify their desire to murder is worse than the local wrongness of murder itself in their society. Thus, they can see that to people who don’t have the same preference for murdering someone for self-gratification, the computation of beliefs works out that (murder is wrong) is generally true, but not true when you substitute their local situations into their personal formula for computing the belief. In this case it just becomes an argument over words because the murderer is tacitly substituting his personal local definitions for things when making choices, but then using more general definitions when making statements of beliefs. In essence, the murderer believes it is not wrong for him to murder and get the gratification, but that murder, as society defines it and views it, is “wrong” where “wrong” is a society-level description, not the murderer’s personal description. I put a little more about the “words” problem below.
The apparent difference between this way of thinking and the way we all experience our thinking is that, among our assertions is the meta-assertion that (over-asserting beliefs is bad) → (I believe over-asserting beliefs is bad) or something similar to this. All specific beliefs, including such meta-beliefs, are intertwined. You can’t have independent beliefs about whether murder is right that don’t depend on your beliefs about whether beliefs should be acted upon like they are cold hard facts.
But at the root, all beliefs are statements about physics. Mapping a complicated human belief down to the level of making statistical pattern recognition claims about amplitude distributions is really hard and inaccessible to us. Further, evolutionarily, we can’t afford to burn computation time exploring a fully determined picture of our beliefs. After some amount of computation time, we have to make our chess moves or else the clock runs out and we lose.
It only feels like saying (I believe murder is wrong) fails to imply the claim (murder is wrong). Prefacing a claim with “I believe” is a human-level way of trying to mitigate the harshness of the claim. It could be a statement that tries to roughly quantify how much evidence I can attest to for the claim which the belief describes. It certainly sounds more assured to say (murder is wrong) than to say (I believe murder is wrong), but this is a phantom distinction.
The other thing, which I think you are trying to take special pains to avoid, is that you can very easily run into a battle of words here. If someone says, “I believe murder is wrong” and what they really mean is something like “I believe that it does an intolerable amount of social disservice in the modern society that I live in for anyone to act as if murdering is acceptable, and thus to always make sure to punish murderers,” basically, if someone translates “murder” into “the local definition of murder in the world that I frequently experience” and they translate “wrong” into “the local definition of wrong (e.g. punishable in court proceedings or something)”, then they are no longer talking about the cognitive concept of murder. An alien race might not define murder the same or “wrong” the same.
If someone uses ‘believe’ to distinguish between making a claim about the most generalized form of murder they can think of, applicable to the widest array of potential sentient beings, or something like that, then the two statements are different, but only artificially.
If I say “I believe murder is wrong” and I really mean “I believe (my local definition of murder) is (my local definition of wrong)” then this implies the statement (The concept described by my local definition of murder is locally wrong), with no “quantifier” of belief required.
In the end, all statements can be reduced this way. If a statement has “I believe” as a “quantifier”, then either it is only an artificial facet of language that restricts the definitions of words in the claim to some (usually local) subset on which the full, unprefaced claim can be made… or else, if local definitions of words aren’t being implicated, then the “I believe” prefix literally contains no more information about the state of your mind than the raw assertion would yield.
This is why rhetoric professors go nuts when students write argumentative papers and drop “I think that” or “I believe that” all over the place. Assertions are assertions. It’s a social custom that you can allude to the fact that you might not have 100% confidence in your assertion by prefacing it with “I believe”. It’s also a social custom that you can allude to respect for other beliefs or participation in a negotiation process by prefacing claims with “I believe”, but in the strictest sense of what information you’re conveying to third parties (separate from any social custom dressings), the “I believe” preface adds no information content.
The difference is here:
Alice: “I bet you $500 that the sign is red”
Bob: “OK”
(later, they find out it’s blue)
Bob: “Pay up!”
Alice: “I bet you $500 that I believe the sign is red”
Bob: “OK”
(later, they find out it’s blue)
Alice: “But I thought it was red! Pay up!”
That’s the difference between “X” and “I believe X”. We say them in the same situation, but they mean different things.
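To make the distinction concrete, here is a minimal sketch (Python, with invented names and a made-up world state) of why the two bets resolve against different facts: the first is settled by looking at the sign, the second by looking at Alice.

```python
# Toy illustration (not from the original thread): the two bets are settled
# against different facts. "The sign is red" is checked against the world;
# "I believe the sign is red" is checked against Alice's head.

world = {"sign_color": "blue"}               # the sign, out there
alice = {"believes_sign_color": "red"}       # Alice's (mistaken) belief

def settle_bet_on_world(world):
    """Bet: 'the sign is red' -- resolved by looking at the sign."""
    return world["sign_color"] == "red"

def settle_bet_on_belief(alice):
    """Bet: 'I believe the sign is red' -- resolved by looking at Alice."""
    return alice["believes_sign_color"] == "red"

print(settle_bet_on_world(world))    # False: Alice loses the first bet
print(settle_bet_on_belief(alice))   # True: Alice wins the second bet
```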
But even if they aren’t, they could simply choose a preference ordering such that the local wrongness of failing to gratify their desire to murder is worse than the local wrongness of murder itself in their society.
The way statements like “murder is wrong” communicate facts about preference orders is pretty ambiguous. But suppose someone says that “Murder is wrong, and this is more important than gratifying my desire, possible positive consequences of murder, and so on” and then murders, without changing their mind. Would they therefore be insane? If yes, you agree with me.
It makes no sense for me to feel that my preferences are not universally and unequivocally true.
‘Correct’ is at issue, not ‘true’.
But at the root, all beliefs are statements about physics
Why? Why do you say this?
It only feels like saying (I believe murder is wrong) fails to imply the claim (murder is wrong).
Does “I believe the sky is green” imply “the sky is green”? Sure, you believe that, when you believe X, X is probably true, but that’s a belief, not a logical implication.
I am suggesting a similar thing for morality. People believe that “(I believe murder is wrong) ⇒ (murder is wrong)” and that belief is not reducible to physics.
literally contains no more information about the state of your mind than the raw assertion would yield.
Assertions aren’t about the state of your mind! At least some of them are about the world—that thing, over there.
Alice: “I bet you $500 that the sign is red” Bob: “OK” later, they find out it’s blue Bob: “Pay up!”
Alice: “I bet you $500 that I believe the sign is red” Bob: “OK” later, they find out it’s blue Alice: “But I thought it was red! Pay up!”
I don’t understand this. If Alice bet Bob that she believed that the sign was red, then going and looking at the sign would in no way settle the bet. They would have to go look at her brain to settle that bet, because the claim, “I believe the sign is red” is a statement about the physics of Alice’s brain.
I want to think more about this and come up with a more coherent reply to the other points. I’m very intrigued. Also, I think that I accidentally hit the ‘report’ button when trying to reply. Please disregard any communication you might get about that. I’ll take care of it if anyone happens to follow up.
You are correct in your first paragraph; I oversimplified.
I think this addresses this topic very well. The first-person experience of belief is one and the same thing as fact-assertion. ‘I ought to do X’ refers to a 4-tuple of actions, outcomes, utility function, and conditional probability function.
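As a rough illustration of that 4-tuple reading (not anything from the original exchange), here is a small Python sketch in which ‘ought’ is read as the action that maximizes expected utility; the actions, outcomes, utilities, and probabilities are all invented.

```python
# Hedged sketch of the 4-tuple reading of "ought": actions A, outcomes O,
# a utility function U over outcomes, and a conditional probability P(o | a).
# "I ought to do X" is then read as "X maximizes expected utility".
# All numbers here are invented for illustration.

actions = ["keep_promise", "break_promise"]
outcomes = ["trust_kept", "trust_lost"]

utility = {"trust_kept": 10.0, "trust_lost": -5.0}

prob = {  # P(outcome | action)
    "keep_promise": {"trust_kept": 0.9, "trust_lost": 0.1},
    "break_promise": {"trust_kept": 0.2, "trust_lost": 0.8},
}

def expected_utility(action):
    return sum(prob[action][o] * utility[o] for o in outcomes)

ought = max(actions, key=expected_utility)
print(ought)  # "keep_promise" under these made-up numbers
```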
W.r.t. your question about whether a murderer who, prior to and immediately after committing murder, attests to believing that murder is wrong, I would say it is a mistaken question to bring their sanity into it. You can’t decide that question without debating what is meant by ‘sane’. How a person’s preference ordering and resulting actions look from the outside does not necessarily reveal that the person failed to behave rationally, according to their utility function, on the inside. If I choose to label them as ‘insane’ for seeming to violate their own belief, this is just a verbal distinction about how I will label such third-person viewings of that occurrence. Really though, their preference ordering might have been temporarily suspended due to clouded judgment from rage or emotion. Or, they might not be telling the full truth about their preference ordering and may not even be aware of some aspects of it.
The point is that beliefs are always statements of physics. If I say, “murder is wrong”, I am referring to some quantified subset of states of matter and their consequences. If I say, “I believe murder is wrong”, I am telling you that I assert that “murder is wrong” is true, which is a statement about my brain’s chemistry.
If I say, “murder is wrong”, I am referring to some quantified subset of states of matter and their consequences. If I say, “I believe murder is wrong”, I am telling you that I assert that “murder is wrong” is true, which is a statement about my brain’s chemistry.
Everyone keeps saying that, but they never give convincing arguments for it.
I also disagree with this.
Pardon me, but I believe the burden of proof here is for you to supply something non-physical that’s being specified and then produce evidence that this is the case. If the thing you’re talking about is supposed to be outside of a magisterium of evidence, then I fail to see how your claim is any different from the claim that we are zombies.
At a coarse scale, we’re both asking about the evidence that we observe, which is the first-person experience of assertions about beliefs. Over models that can explain this phenomenon, I am attempting to select the one with minimum message length, as a computer program for producing the experience of beliefs out of physical material can have some non-zero probability attached to it through evidence. How are we to assign probability to the explanation that beliefs do not point to things that physically exist? Is that claim falsifiable? Are there experiments we can do which depend on the result? If not, then the burden of proof here is squarely on you to present a convincing case why the same-old same-old punting to complicated physics is not good enough. If it’s not good enough for you, and you insist on going further, that’s fine. But physics is good enough for me here and that’s not a cop out or an unjustified conclusion in the slightest.
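For concreteness, here is a toy sketch of the minimum-message-length style of model selection being appealed to: total cost = bits to state the model plus bits to encode the data given the model, with the cheaper total winning. The data and candidate models are invented.

```python
# Toy two-part code-length comparison in the spirit of MML/MDL:
# total cost = bits to state the model + bits to encode the data given the model.
# The "models" and data are invented; this only illustrates the selection rule.
import math

data = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]  # pretend observations

def data_bits(p):
    # -log2 likelihood of the data under a Bernoulli(p) model
    return sum(-math.log2(p if x == 1 else 1 - p) for x in data)

models = {
    # (description-length estimate in bits, predicted probability of a 1)
    "fair-coin model": (1.0, 0.5),   # very short to state, fits the data worse
    "fitted model":    (6.0, 0.8),   # fits better, but costs more bits to state
}

def total_bits(name):
    model_bits, p = models[name]
    return model_bits + data_bits(p)

best = min(models, key=total_bits)
print({name: round(total_bits(name), 2) for name in models})
print("selected:", best)  # here the shorter model wins overall despite the worse fit
```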
Suppose I say “X is red”. That indicates something physical—it indicates that I believe X is red. But it means something different, and also physical—it means that X is red. Now suppose I say “X is wrong”. That indicates something physical—it indicates that I believe X is wrong. Using the same-old, same-old principle, we conclude that it means something different, but there is nothing else physical that we could plausibly say it means.
Why do you say this? Flesh out the definition of ‘wrong’ and you’re done. ‘Wrong’ refers to arrangements of matter and their consequences. It doesn’t attempt to refer to intrinsic properties of objects that exist apart from their physicality. If (cognitive object X) is (attribute Y), this just means that (arrangements of matter that correspond to what I give the label X) have (physical properties that I group together into the heading Y). It doesn’t matter if you’re saying “freedom is good” or “murder is wrong” or “that sign is red”. ‘Freedom’ refers to arrangements of matter and physical laws governing them. ‘Good’ refers to local physical descriptions of the ways that things can yield fortunate outcomes, where ‘fortunate outcomes’ can be further chased down into their physical meaning, etc.
“X is wrong” unpacks to statements about the time evolution of physical systems. You can’t simply say that there is nothing else physical that we could plausibly say it means.
Have you gone and checked every possible physical thing? Have you done experiments showing that making correspondences between cognitive objects and physical arrangements of matter somehow “fails” to capture its “meaning”?
This seems to me to be one of those times where you need to ask yourself: is it really the case that cognitive objects are not just linguistic devices for labeling arrangements of matter and laws governing the matter… or do I just think that’s the case?
Have you gone and checked every possible physical thing?
Your whole argument rests on this, since you have not provided a counterexample to my claim. You’ve just repeated the fact that there is some physical referent, over and over.
This is not how burden of proof works! It would be simply impossible for me to check every possible physical thing. Is it, therefore, impossible for you to be convinced that I am right?
Is it, therefore, impossible for you to be convinced that I am right?
This is what it means for a claim to fail falsifiability. It’s easy to generate claims whose proof would only be constituted by fact-checking against every physical thing. This is a far cry from a decision-theoretic claim where, though we can’t have perfect evidence, we can make useful quantifications of the evidence we do have and our uncertainty about it.
The empty set has many interesting properties.
It’s impossible to quantify your claim without having all of the evidence up front.
You’ve just repeated the fact that there is some physical referent, over and over.
What I’m trying to say is that I can test the hypothesis of whether or not there is a physical referent. If someone says to me, “Is there or isn’t there a physical referent?” and I have to respond, then I have to do so on the strength of evidence alone. I may not be able to provide a referent explicitly, but I know that non-zero probability can be assigned to a physical system in which cognitive objects are placeholders for complicated sets of matter and governing laws of physics. I cannot make the same claim about the hypothesis that cognitive objects do not have utterly physical referents, and therefore, whether or not I have explicit examples of referents, the hypothesis that there must be underlying physical referents wins hands down.
The criticism you’re making of me, that I insist there are referents without supplying the actual referents, is physically backwards in this case. For example, someone might say “consciousness is a process that does not correspond to any physically existing thing.” If I then reply,
“But consciousness is a property of material and varies directly with changes in that material (or some similar, more detailed argument about cognition), and therefore, I can assign non-zero probability to its being a physical computation, and since I do not have the capacity to assign probabilities to non-physical entities, the hypothesis that consciousness is physical wins.”
this is a convincing argument, up to the quantification of the evidence. If you personally don’t feel like it’s convincing, that’s fine, but then you’re outside of decision theory and the claim you’re making contains literally no semantic information.
The same can be said of the referent of a belief. I think you’re failing to appreciate that you’re making the very mistake you’re claiming that I am making. You’re just asserting that beliefs can’t plausibly correspond to physically existing things. That’s just an assertion. It might be a good assertion or might not even be a coherent assertion. In order to check, I am going to go draft up some sort of probability model that relates that claim to what I know about thoughts and beliefs. Oh, snap, when I do that, I run into the unfortunate wall that if beliefs don’t have physical referents, then talking about their referents at all suddenly has no physical meaning. Therefore, I will stick with my hypothesis that they are physical, pending explicit evidence that they aren’t.
The convincingness of the argument lies in the fact that one side of this can be made quantitative and experimentally relevant and the other cannot. The burden of proof, as I view it in this situation, is on making a non-zero probability connection between beliefs and some type of referent. I don’t see anything in your argument that prevents this connection being made to physical things. I do, however, fail to see any part of your argument that makes this probabilistic connection with non-physical referents.
Maybe it’s better to think of it like an argument from non-cognitivism. You’re trying to make up a solution space for the problem (non-physical referents) that is incompatible with the whole system in which the problem takes place (physics). Until you make an explicit physical definition of what a “non-physical referent” actually is, I will not entertain it as a possible hypothesis.
Ultimately, even though your epistemology is more complicated, your argument might as well be: beliefs are pointers to magical unicorns outside of space and time and these magical unicorns are what determine human values. ‘Non-physical referents’ simply are not. I can’t assign a probability to the existence of something which is itself hypothesized to fail to exist, since existence and “being a physical part of reality” are one and the same thing.
It’s easy to generate claims whose proof would only be constituted by fact-checking against every physical thing
That’s the good kind of claim, the falsifiable kind, like the Law of Universal Gravitation. That’s the kind of claim I’m making.
It’s impossible to quantify your claim without having all of the evidence up front.
Your argument seems to depend on the idea that the only way to evaluate a claim is to list the physical universes in which it is true and the physical universes in which it is not true.
This, obviously, is circular.
Do you acknowledge that your reasoning is circular and defend it, presumably with Eliezer’s defense of circular reasoning? Or do you claim that it is not circular?
I cannot make the same claim about the hypothesis that cognitive objects do not have utterly physical referents
Sure you can. You take a world, find all the cognitive objects in it, then find all the corresponding physical referents, cross those objects off the list.
I am saying that there are beliefs (strings of symbols with meaning) endowed meaning by their place in a functional mind but for which the set of physical referents they correspond to is the empty set.
Surely you can admit the existence of strings of symbols without physical referents, like this one: “fj4892fjsoidfj390ds;j9d3”. There’s nothing non-physical about it.
The convincingness of the argument lies in the fact that one side of this can be made quantitative and experimentally relevant and the other cannot.
If “X” is quantitative and experimentally relevant, how could “not-X” be irrelevant? If X makes predictions, how could not-X not make the opposite predictions?
I do, however, fail to see any part of your argument that makes this probabilistic connection with non-physical referents.
Sure you can. You take a world, find all the cognitive objects in it
My claim is that if one had really done this, then by definition of “find”, they have the physical referents for the cognitive objects. If a cognitive object has the empty set as the set of physical referents, then it is the null cognitive object. The string of symbols “fj4892fjsoidfj390ds;j9d3” might have no meaning to you when thinking in English, say, but then it just means it is an instantiation of the null cognitive object, i.e. any string of symbols failing to point to a physical referent.
I’m trying to say that if the cognitive object is to be considered as pointing to something, that is, it is in some sense not the null cognitive object, then the thing which is its referent is physical. It’s incoherent to say that a string of symbols refers to something that’s not physical. What do you mean by ‘refer’ in that setting? There is no existing thing to be referred to, hence the symbol performs no act of referring. So when someone speaks about “X” being right or wrong, either they are speaking about physical events or else “X” fails to be a cognitive object.
I claim that my reasoning is not circular.
Your argument seems to depend on the idea that the only way to evaluate a claim is to list the physical universes in which it is true and the physical universes in which it is not true.
It depends on what you mean by ‘evaluate’. What I’m taking for that definition right now is that if I want to assess whether proposition P is true, then I can only do so in a setting of decision theory and degrees of evidence and uncertainty. This means that I need a model for the proposition P and a way of assigning probabilities to the various hypotheses about P. In this case, P = “Some cognitive objects have referents that are not the null referent and are also not physical”. I claim that all referents are either physical or else they are the null referent. The set of non-physical referents is empty.
Just because a string fails to have a physical referent does not mean that it succeeds in having a non-physical one. What evidence do I have that there exist non-physical referents? What model of cognitive objects exists with which it is possible to achieve experimental evidence of a non-physical referent?
I am saying that there are beliefs (strings of symbols with meaning) endowed meaning by their place in a functional mind but for which the set of physical referents they correspond to is the empty set.
What do you mean by ‘endowed meaning’? If a cognitive object has no physical referent, to me, that is the definition of meaningless. It fails to correspond to reality.
What do you mean by ‘endowed meaning’? If a cognitive object has no physical referent, to me, that is the definition of meaningless. It fails to correspond to reality.
This is the heart of the matter. You are saying that the only relevant properties of a cognitive object are its referents. Thus, no referents = no relevant properties = null object.
I say that, on the contrary, a cognitive object has other relevant properties. One such property is its place in the code of a brain.
Imagine an AI with a set of objects that it marks as either “true” or “false”. These objects have no referents, but they influence the formation of objects with referents. I think it’s fair to say that:
Such an AI could exist, with the objects having referents/no referents as appropriate.
These objects are not just the null object.
The AI is thinking irrationally.
Now imagine an AI with a set of objects that it marks as either “true” or “false”. These objects have no referents, but they influence the AI’s choices. I think it’s fair to say that:
Such an AI could exist, with the objects having referents/no referents as appropriate.
The AI could be thinking rationally.
I think that the meaning of “The AI could be thinking rationally” is that it could turn out to be the case that the objects labeled true and false have a correspondence to physically existing things and that correspondence allows the A.I. to construct decision rules which correspond to reality within some computable range of uncertainty.
If we are unable to map the inputs to the A.I.’s decision process (in this case objects labeled true or false and whose referents, if any, are unknown to us at the start) back to physical reality, then it is still mysterious to us and we can’t claim that it’s rational in any sense other than pure statistical experience (it could just be that when asked to make a series of decisions using the true/false labeled objects, the A.I. got incredibly lucky).
In order to conclude (in any more than a superficial way) that the A.I. is rational, there must be an explicit correspondence between the labeled objects and the physics world, and hence we would have found their referents. If this is, in principle, an impossible task (as you claim), then the concept of rationality doesn’t apply to the A.I. In what sense would it be said to actually be rational, rather than just produce a sequence of outputs that appear to be rational to us for mysterious reasons?
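A minimal sketch of the point being made here, with an invented world model and mapping: calling the AI ‘rational’ in more than a lucky-streak sense requires exhibiting the correspondence between its labeled objects and checkable facts, and that correspondence is exactly what the check consumes.

```python
# Hedged sketch: verifying "rationality" here means checking the AI's
# true/false-labelled objects against the world facts they refer to.
# The world model, objects, and mapping below are invented for illustration.

world = {"door_open": True, "light_on": False}

ai_objects = {            # the AI's internal labelled objects
    "obj_17": True,
    "obj_42": False,
}

referents = {             # the correspondence we would have to exhibit
    "obj_17": "door_open",
    "obj_42": "light_on",
}

def correspondence_check(ai_objects, referents, world):
    """Return the fraction of labelled objects whose label matches the world
    fact they refer to. Without `referents`, no such check is possible."""
    hits = sum(ai_objects[o] == world[referents[o]] for o in ai_objects)
    return hits / len(ai_objects)

print(correspondence_check(ai_objects, referents, world))  # 1.0 here
```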
I think that the meaning of “The AI could be thinking rationally” is that it could turn out to be the case that the objects labeled true and false are integral components of a program that provably calculates rational decisions every time.
A proof that a program calculates rational decisions every time necessarily provides the physical referents of its calculation. There’s no difference between knowing that a program calculates rational decisions every time and knowing how it is that it calculates rational decisions every time. If you don’t know the explicit correspondence between its calculations and reality then your state of knowledge cannot include the fact that the program always yields rational conclusions. You can have degrees of certainty that it is rational without having full knowledge of its referents, but not factual knowledge as in a mathematical proof.
It may be that a slick mathematical argument reduces the connection to symbols that don’t readily convey the physical connection, but
don’t tell me that knowledge is “subjective”. Knowledge has to be represented in a brain, and that makes it as physical as anything else. For M to physically represent an accurate picture of the state of Y, M’s physical state must correlate with the state of Y. You can take thermodynamic advantage of that—it’s called a Szilard engine.
Or as E.T. Jaynes put it, “The old adage ‘knowledge is power’ is a very cogent truth, both in human relations and in thermodynamics.”
And conversely, one subsystem cannot increase in mutual information with another subsystem, without (a) interacting with it and (b) doing thermodynamic work. Otherwise you could build a Maxwell’s Demon and violate the Second Law of Thermodynamics—which in turn would violate Liouville’s Theorem—which is prohibited in the standard model of physics.
Which is to say: To form accurate beliefs about something, you really do have to observe it. It’s a very physical, very real process: any rational mind does “work” in the thermodynamic sense, not just the sense of mental effort.
If your state of knowledge (brain chemistry) is updated to include special knowledge of the rationality of an agent, then there is entanglement between you and that agent, for that is what knowledge is. You can’t know that an agent is rational without knowing the physical connection between its cognitive objects and reality. To whatever degree you lack knowledge about the physical referents of its cognitive objects, that is the degree to which you lack knowledge about whether or not it is rational.
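As a toy illustration of ‘knowledge as entanglement’, here is a short sketch computing the mutual information between an observer’s memory state and the state of the thing observed, from an invented joint distribution; zero bits would mean no knowledge.

```python
# Hedged sketch of "knowledge as entanglement": mutual information between the
# state of an observer M and the state of a system Y, computed from a joint
# distribution. The joint distribution is invented; if M's state carried no
# information about Y, the result would be 0 bits.
import math

# P(m, y): observer's memory state m vs. actual system state y
joint = {
    ("thinks_red", "red"):   0.45,
    ("thinks_red", "blue"):  0.05,
    ("thinks_blue", "red"):  0.05,
    ("thinks_blue", "blue"): 0.45,
}

def mutual_information(joint):
    pm, py = {}, {}
    for (m, y), p in joint.items():
        pm[m] = pm.get(m, 0) + p
        py[y] = py.get(y, 0) + p
    return sum(p * math.log2(p / (pm[m] * py[y]))
               for (m, y), p in joint.items() if p > 0)

print(round(mutual_information(joint), 3))  # about 0.531 bits of entanglement
```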
The point is that beliefs are always statements of physics. If I say, “murder is wrong”, I am referring to some quantified subset of states of matter and their consequences. If I say, “I believe murder is wrong”, I am telling you that I assert that “murder is wrong” is true, which is a statement about my brain’s chemistry.
Hm? It’s easy to form beliefs about things that aren’t physical. Suppose I tell you that the infinite cardinal aleph-1 is strictly larger than aleph-0. What’s the physical referent of the claim?
I’m not making a claim about the messy physical neural structures in my head that correspond to those sets—I’m making a claim about the nonphysical infinite sets.
Likewise, I can make all sorts of claims about fictional characters. Those aren’t claims about the physical book, they’re claims about its nonphysical implications.
Why do you think that nonphysical implications are ontologically existing things? I argue that what you’re trying to get at by saying “nonphysical implications” refers to actual quantified subsets of matter. Ideas, however abstract, are referring to arrangements of matter. The vision in your mind when you talk about aleph-1 is of a physically existing thing. When’s the last time you imagined something that wasn’t physical? A unicorn? You mean a horse with a horn glued onto it? Mathematical objects represent states of knowledge, which are as physical as anything else. The color red refers to a particular band of light frequencies and the physical processes by which it is a common human experience. There is no idea of what red is apart from this. Red is something different to a blind man than it is to you, but by speaking about your physical referent, the blind man can construct his own useful physical referent.
Claims about fictional characters are no better. What do you mean by Bugs Bunny other than some arrangement of colors brought to your eyes by watching TV in the past? That’s what Bugs Bunny is. There’s no separately existing entity which is Bugs Bunny that can be spoken about as if it ontologically was. Every person who refers to Bugs Bunny refers to physical subsets of matter from their experience, whether that’s because they witnessed the cartoon and were told through supervised learning what cognitive object to attach it to or because they heard about it later through second-hand experience. A blind person can have a physical referent when speaking about Bugs Bunny, albeit one that I have a very hard time mentally simulating.
In any case, merely asserting that something fails to have a physical referent is not a convincing reason to believe so. Ask yourself why you think there is no physical referent and whether one could construct a computational system that behaves that way.
I have no very firm ontological beliefs. I don’t want to make any claim about whether fictional characters or mathematical abstractions “really exist”.
I do claim that I can talk about abstractions without there being any set of physical referents for that abstraction. I think it’s utterly routine to write software that manipulates things without physical referents. A type-checker, for instance, isn’t making claims about the contents of memory; it’s making higher-order claims about how those values will be used across all possible program executions—including ones that can’t physically happen.
I would cheerfully agree with you that the cognitive process (or program execution) is carried out by physical processes. Of course. But the subject of that process isn’t the mechanism. There’s nothing very strange about this, as far as I can tell. It’s routine for programs and programmers to talk about “infinite lists”; obviously there is no such thing in the physical world, but it is a very useful abstraction.
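A small sketch of the type-checker point, assuming a standard static checker such as mypy: the program below runs fine for the call that actually happens, yet the checker rejects it, because its claim quantifies over all possible executions rather than over any particular contents of memory.

```python
# Hedged sketch of the type-checker point: a static checker (e.g. mypy) rejects
# this program for what *would* happen on a branch that never physically runs,
# because its claims are about all possible executions, not about the contents
# of memory in any particular run.

def double(n: int) -> int:
    return n * 2

def handle(flag: bool) -> int:
    if flag:
        return double(3)
    else:
        return double("oops")   # never executed below, but still a type error

print(handle(True))  # runs fine at runtime; the checker complains anyway
```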
By the way, I think your Bugs Bunny example fails. When I talk to somebody about Bugs Bunny, I am able to make myself understood. The other person and I are able to talk, in every sense that matters, about the same thing. But we don’t share the same mental states. Conversely, my mental picture isn’t isomorphic to any particular set of photons; it’s a composite. Somehow, that doesn’t defeat practical communication.
The case might be clearer for purely literary characters. When I talk about the character King Lear, I certainly am not saying something about the physical copy I read! Consider the perfectly ordinary (and true) sentence “King Lear had three daughters.” That’s not a claim about ink, it’s a claim about the mental models created in competent speakers of English by the work (which itself is an abstraction, not a physical thing). Those models are physically embodied, but they are not physical things! There’s no set of quarks you can point to and say “there’s the mental model.”
mental models created in competent speakers of English by the work (which itself is an abstraction, not a physical thing)
This is where we disagree. Those mental models are simply arrangements of matter. The fact that it feels like you’re referring to something separate from an arrangement of matter-memory in your brain is another thing altogether. The reason that practical communication works at all is that there is an extreme amount of mutual information held between the set of features which you use to categorize the physical memory of, say, Bugs Bunny, and the features used to categorize Bugs in someone else’s mind. You can reference your brain’s physical memory in such a way as to cause another’s physical memory to reference something, and if an algorithm sorts the mutual information of these concepts until it finds a maximum, and common experience then forms all sorts of additional memories about what wound up being referenced, it is not surprising at all that a purely physical model of concepts would allow communication. I don’t see how anything you’ve said represents more than an assertion that it feels to you as if abstractions are not simply the brain matter that they are made out of in your mind. It’s not a convincing reason for me to think abstractions have ontological properties. I think the hypothesis that it just feels that way since my brain is made of meat and I can’t look at the wiring schematics is more likely.
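Here is a deliberately crude sketch of that picture of communication: two ‘brains’ store the same concept under different internal names, and a listener resolves a speaker’s description by picking the stored concept with the greatest feature overlap. All names and features are invented.

```python
# Hedged toy model of the claim: two brains store "the same" concept as
# different physical patterns, but communication works because a listener can
# match an incoming description to the concept in their own memory with the
# greatest feature overlap. Features and names are invented.

speaker_memory = {
    "pattern_031": {"rabbit", "grey", "carrot", "cartoon", "wisecracks"},   # Bugs
    "pattern_144": {"duck", "black", "cartoon", "short-tempered"},          # Daffy
}

listener_memory = {
    "engram_A": {"duck", "black", "cartoon", "short-tempered", "lisp"},
    "engram_B": {"rabbit", "grey", "cartoon", "carrot", "brooklyn-accent"},
}

def overlap(a, b):
    return len(a & b) / len(a | b)   # Jaccard similarity of feature sets

def resolve(description, memory):
    """Pick the stored concept that best matches the described features."""
    return max(memory, key=lambda k: overlap(description, memory[k]))

# The speaker "refers" by emitting some features of their Bugs pattern;
# the listener lands on their own, differently-stored Bugs concept.
utterance = speaker_memory["pattern_031"] - {"wisecracks", "grey"}
print(resolve(utterance, listener_memory))   # "engram_B"
```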
This is starting to feel like a shallow game of definition-bending. I don’t think we’re disagreeing about any testable claim.
So I’m not going to argue about why your definition is wrong, but I will describe why I think it’s less useful in expressing the sorts of claims we make about the world.
When we talk about whether two mental models are similar, the similarity function we use is representation-independent. You and I might have very similar mental models, even if you are thinking with superconducting wires in liquid helium and our physical brains have nothing in common. Not being willing to talk honestly about abstractions makes it hard to ask how closely aligned two mental models are—and that’s a useful question to ask, since it helps predict speech-acts.
Conversely, saying that “everything is a physical property” deprives us of what was previously a useful category. A toaster is physical in a way that an eight-dimensional vector space is not and in a way that a not-yet-produced toaster is not. I want a word to capture that difference.
In particular, ‘physical object’, as most of the world uses the term, means an object that has position and mass that evolve in predictable ways. It’s sensible to ask what a toaster weighs. It’s not sensible to ask what a mental model weighs.
I think your definitions here mean that you can’t actually explain ordinary ostensive reference. There is a toaster over there, and a mental model over here, and there is some correspondence. And the way most of the world uses language, I can have the same referential relationship to a fictional person as to a real person, as to a toaster.
When we talk about whether two mental models are similar, the similarity function we use is representation-independent. …
Not being willing to talk honestly about abstractions makes it hard to ask how closely aligned two mental models are—and that’s a useful question to ask, since it helps predict speech-acts.
Conversely, saying that “everything is a physical property” deprives us of what was previously a useful category. A toaster is physical in a way that an eight-dimensional vector space is not and in a way that a not-yet-produced toaster is not. I want a word to capture that difference.
First, I didn’t say anything at all about the usefulness of treating abstractions the way we do. I don’t believe in actual free will but I certainly believe that the way we walk around acting as if free will was a real attribute that we have is very useful. You can arrange a network of neurons in such a way that it will allow identification of a concept, and we use natural language to talk about this sort of arrangement of matter. Talking about it that way is just fine, and indeed very useful. But this thread was about a defense of metaethics and partially about the defense of beliefs as non-physical, but still really existing, entities. For purposes of debating that point, I think it starts to matter whether someone does or does not recognize that concepts are just arrangements of matter: information which can be extracted from brain states but does not in and of itself point to any actual, ontological entity.
I think I am quite willing to talk about abstractions and their usefulness … just not willing to agree that they are fundamental parts of reality rather than merely hallucinations the same way that free will is.
In conversations about the ontology of physical categories, it’s better to say that the category of toasters in my brain is just a pattern of matter that happens to score high correlations with image, auditory, and verbal feature vectors generated by toasters. In conversations about making toast, it’s better to talk about the abstraction of the category of toasters as if it were itself something.
It’s the same as talking about the wing of an airplane.
But this thread was about a defense of metaethics and partially about the defense of beliefs as non-physical, but still really existing, entities. For purposes of debating that point, I think it starts to matter whether someone does or does not recognize that concepts are just arrangements of matter: information which can be extracted from brain states but does not in and of itself point to any actual, ontological entity.
Thank you, that explained where you were coming from.
But I don’t see that any of this ontology gets you the meta-ethical result you want to show. I think all you’ve shown is that ethical claims aren’t more true than, say, mathematical truth or physical law. But by any normal standard, “as true as the proof of Fermat’s last theorem” is a very high degree of truth.
I think to get the ethical result you want, you should be showing that moral terms are strictly less meaningful than mathematical ones. Certainly you need to somehow separate mathematical truth from “ethical truth”—and I don’t see that ontology gets you there.
Actually, I am opposed to the argument for an ontology of belief, which is why I was trying to argue that beliefs are encoded states of matter. If I assert that “X is wrong” it must mean I assert “I believe X is wrong” as well. If I assert “I believe X is wrong” but don’t assert “X is wrong”, something’s clearly amiss. As pointed out here, beliefs are reflections of best available estimates about physically existing things. If I do assert that I believe X is wrong but don’t assert that X is wrong, then either I am lying about the belief, or else there’s some muddling of definitions and maybe I mean some local version of X or some local version of “wrong”, or I am unaware of my actual state of beliefs (possibly due to insanity, etc.) But my point is that in a sane person, from that person’s first-person experience, the two statements, “I believe X is wrong” and “X is wrong”, contain exactly the same information about the state of my brain. They are the same statement.
My point in all this was that “I believe X is wrong” has the same first-person referent as “X is wrong”. If X = murder, say, and I assert that “murder is wrong”, then once you unpack whatever definitions in terms of physical matter and consequence that I mean by “murder” and “wrong”, you’re left with a pointer to a physical arrangement of matter in my brain that resonates when feature vectors of my sensory input correlate with the pattern that stores “murder” and “wrong” in my brain’s memory. It’s a physical thing. The wrongness of murder is that thing, it isn’t an ontological concept that exists outside my brain as some non-physical attribute of reality. Even though other humans have remarkably similar brain-matter-patterns of wrongness and murder, enough so that the mutual information between the pattern allows effective communication, this doesn’t suddenly cause the idea that murder is wrong to stop being just a local manifestation in my brain and start being a separate idea that many humans share pointers to.
If someone wanted to establish metaethical claims based on the idea that there exist non-physical referents being referred to by common human beliefs, and that this set of referents somehow reflects an inherent property of reality, I think this would be misguided and experimentally either not falsifiable or at the very least unsupported by evidence. I don’t guess that this makes too much practical difference, other than being a sort of Pandora’s box for religious-type reasoning (but what isn’t?).
I think more salient examples that make this question hard are not going to be borne out of trying to come up with something increasingly abstract. The more puzzling cognitive objects to explain are when you apply unphysical transformations to obvious objects… like taking a dog and imagining it stretched out to the length of a football field. Or a person with a torus-like hole in their abdomen. But these are simply images in the brain. That the semantic content of the image can be interpreted as strange unions of other cognitive objects is not a reason to think that the cognitive object itself isn’t physical.
I don’t believe in an ontology of morals, only an epistemology of them.
Do you think that “The sign is red” means something different from “I believe the sign is red”? (In the technical sense of believe, not the pop sense.)
Do you think that “Murder is wrong” means something different from “I believe that murder is wrong.”?
The verb believe goes without saying when making claims about the world. To assert that ‘the sign is red’ is true would not make sense if I did not believe it, by definition. I would either be lying or unaware of my own mental state. To me, your question borders more on opinions and their consequences.
Quoting from there: “But your beliefs are not about you; beliefs are about the world. Your beliefs should be your best available estimate of the way things are; anything else is a lie.”
What I’m trying to say is that the statement (Murder is wrong) implies the further slight linguistic variant (I believe murder is wrong) (modulo the possibility that someone is lying or mentally ill, etc.) The question then is whether (I believe murder is wrong) → (murder is wrong). Ultimately, from the perspective of the person making these claims, the answer is ‘yes’. It makes no sense for me to feel that my preferences are not universally and unequivocally true.
I don’t find this at odds with a situation where a notorious murderer who is caught, say Hannibal Lecter, can simultaneously choose his actions and say “murder is wrong”. Maybe the person is mentally insane. But even if they aren’t, they could simply choose a preference ordering such that the local wrongness of failing to gratify their desire to murder is worse than the local wrongness of murder itself in their society. Thus, they can see that to people who don’t have the same preference for murdering someone for self-gratification, the computation of beliefs works out that (murder is wrong) is generally true, but not true when you substitute their local situations into their personal formula for computing the belief. In this case it just becomes an argument over words because the murderer is tacitly substituting his personal local definitions for things when making choices, but then using more general definitions when making statements of beliefs. In essence, the murderer believes it is not wrong for him to murder and get the gratification, but that murder, as society defines it and views it, is “wrong” where “wrong” is a society-level description, not the murderer’s personal description. I put a little more about the “words” problem below.
The apparent difference between this way of thinking and the way we all experience our thinking is that, among our assertions is the meta-assertion that (over-asserting beliefs is bad) → (I believe over-asserting beliefs is bad) or something similar to this. All specific beliefs, including such meta-beliefs, are intertwined. You can’t have independent beliefs about whether murder is right that don’t depend on your beliefs about whether beliefs should be acted upon like they are cold hard facts.
But at the root, all beliefs are statements about physics. Mapping a complicated human belief down to the level of making statistical pattern recognition claims about amplitude distributions is really hard and inaccessible to us. Further, evolutionarily, we can’t afford to burn computation time exploring a fully determined picture of our beliefs. After some amount of computation time, we have to make our chess moves or else the clock runs out and we lose.
It only feels like saying (I believe murder is wrong) fails to imply the claim (murder is wrong). Prefacing a claim with “I believe” is a human-level way or trying to mitigate the harshness of the claim. It could be a statement that tries to roughly quantify how much evidence I can attest to for the claim which the belief describes. It certainly sounds more assured to say (murder is wrong) than to say (I believe murder is wrong), but this is a phantom distinction.
The other thing, which I think you are trying to take special pains to avoid, is that you can very easily run into a battle of words here. If someone says, “I believe murder is wrong” and what they really mean is something like “I believe that it does an intolerable amount of social disservice in the modern society that I live in for anyone to act as if murdering is acceptable, and thus to always make sure to punish murderers,” basically, if someone translates “murder” into “the local definition of murder in the world that I frequently experience” and they translate “wrong” into “the local definition of wrong (e.g. punishable in court proceedings or something)”, then they are no longer talking about the cognitive concept of murder. An alien race might not define murder the same or “wrong” the same.
If someone uses ‘believe’ to distinguish between making a claim about the most generalized form of murder they can think of, applicable to the widest array of potential sentient beings, or something like that, then the two statements are different, but only artificially.
If I say “I believe murder is wrong” and I really mean “I believe (my local definition of murder) is (my local definition of wrong)” then this implies the statement (The concept described by my local definition of murder is locally wrong), with no “quantifier” of belief required.
In the end, all statements can be reduced this way. If a statement has “I believe” as a “quantifier”, then either it is only an artificial facet of language that restricts the definitions of words in the claim to some (usually local) subset on which the full, unprefaced claim can be made… or else if local definitions of words aren’t being implicated, then the “I believe” prefix literally contains no additional information about the state of your mind than the raw assertion would yield.
This is why rhetoric professors go nuts when students write argumentative papers and drop “I think that” or “I believe that” all over the place. Assertions are assertions. It’s a social custom that you can allude to the fact that you might not have 100% confidence in your assertion by prefacing it with “I believe”. It’s also a social custom that you can allude to respect for other beliefs or participation in a negotiation process by prefacing claims with “I believe”, but in the strictest sense of what information you’re conveying to third parties (separate from any social custom dressings), the “I believe” preface adds no information content.
The difference is here
Alice: “I bet you $500 that the sign is red” Bob: “OK” later, they find out it’s blue Bob: “Pay up!”
Alice: “I bet you $500 that I believe the sign is red” Bob: “OK” later, they find out it’s blue Alice: “But I thought it was red! Pay up!”
That’s the difference between “X” and “I believe X”. We say them in the same situation, but they mean different things.
The way statements like “murder is wrong” communicate facts about preference orders is pretty ambiguous. But suppose someone says that “Murder is wrong, and this is more important than gratifying my desire, possible positive consequences of murder, and so on” and then murders, without changing their mind. Would they therefore be insane? If yes, you agree with me.
Correct is at issue, not true.
Why? Why do you say this?
Does “i believe the sky is green” imply “the sky is green”? Sure, you believe that, when you believe X, X is probably true, but that’s a belief, not a logical implication.
I am suggesting a similar thing for morality. People believe that “(I believe murder is wrong) ⇒ (murder is wrong)” and that belief is not reducible to physics.
Assertions aren’t about the state of your mind! At least some of them are about the world—that thing, over there.
I don’t understand this. If Alice bet Bob that she believed that the sign was red, then going and looking at the sign would in no way settle the bet. They would have to go look at her brain to settle that bet, because the claim, “I believe the sign is red” is a statement about the physics of Alice’s brain.
I want to think more about this and come up with a more coherent reply to the other points. I’m very intrigued. Also, I think that I accidentally hit the ‘report’ button when trying to reply. Please disregard any communication you might get about that. I’ll take care of it if anyone happens to follow up.
You are correct in your first paragraph, I oversimplified.
I think this address this topic very well. The first person experience of belief is one in the same with fact-assertion. ‘I ought to do X’ refers to a 4-tuple of actions, outcomes, utility function, and conditional probability function.
W.r.t. your question about whether a murderer who, prior to and immediately after committing murder, attests to believing that murder is wrong, I would say it is a mistaken question to bring their sanity into it. You can’t decide that question without debating what is meant by ‘sane’. How a person’s preference ordering and resulting actions look from the outside does not necessarily reveal that the person failed to behave rationally, according to their utility function, on the inside. If I choose to label them as ‘insane’ for seeming to violate their own belief, this is just a verbal distinction about how I will label such third-person viewings of that occurrence. Really though, their preference ordering might have been temporarily suspended due to clouded judgment from rage or emotion. Or, they might not be telling the full truth about their preference ordering and may not even be aware of some aspects of it.
The point is that beliefs are always statements of physics. If I say, “murder is wrong”, I am referring to some quantified subset of states of matter and their consequences. If I say, “I believe murder is wrong”, I am telling you that I assert that “murder is wrong” is true, which is a statement about my brain’s chemistry.
Everyone keeps saying that, but they never give convincing arguments for it.
I also disagree with this.
Pardon me, but I believe the burden of proof here is for you to supply something non-physical that’s being specified and then produce evidence that this is the case. If the thing you’re talking about is supposed to be outside of a magisterium of evidence, then I fail to see how your claim is any different than that we are zombies.
At a coarse scale, we’re both asking about the evidence that we observe, which is the first-person experience of assertions about beliefs. Over models that can explain this phenomenon, I am attempting to select the one with minimum message length, as a computer program for producing the experience of beliefs out of physical material can have some non-zero probability attached to it through evidence. How are we to assign probability to the explanation that beliefs do not point to things that physically exist? Is that claim falsifiable? Are there experiments we can do which depend on the result? If not, then the burden of proof here is squarely on you to present a convincing case why the same-old same-old punting to complicated physics is not good enough. If it’s not good enough for you, and you insist on going further, that’s fine. But physics is good enough for me here and that’s not a cop out or an unjustified conclusion in the slightest.
Suppose I say “X is red”.
That indicates something physical—it indicates that I believe X is red
but it means something different, and also physical—it means that X is red
Now suppose I say “X is wrong”
That indicates something physical—it indicates that I believe X is wrong
using the same-old, same-old principle, we include that it means something different.
but there is nothing else physical that we could plausibly say it means.
Why do you say this? Flesh out the definition of ‘wrong’ and you’re done. ‘Wrong’ refers to arrangements of matter and their consequences. It doesn’t attempt to refer to intrinsic properties of objects that exist apart from their physicality. If (cognitive object X) is (attribute Y) this just means that (arrangements of matter that correspond to what I give the label X) have (physical properties that I group together into the heading Y). It doesn’t matter if you’re saying “freedom is good” or “murder is wrong” or “that sign is red”. ‘Freedom’ refers to arrangements of matter and physical laws governing them. ‘Good’ refers to local physical descriptions of the ways that things can yield fortunate outcomes, where fortunate outcomes can be further chased down in its physical meaning, etc.
“X is wrong” unpacks to statements about the time evolution of physical systems. You can’t simply say
Have you gone and checked every possible physical thing? Have you done experiments showing that making correspondences between cognitive objects and physical arrangements of matter somehow “fails” to capture its “meaning”?
This seems to me to be one of those times where you need to ask yourself: is it really the case that cognitive objects are not just linguistic devices for labeling arrangements of matter and laws governing the matter......… or do I just think that’s the case?
Your whole argument rests on this, since you have not provided a counterexample to my claim. You’ve just repeated the fact that there is some physical referent, over and over.
This is not how burden of proof works! It would be simply impossible for me to check every possible physical thing. Is it, therefore, impossible for you to be convinced that I am right?
I expect better from lesswrong posters.
This is what it means for a claim to fail falsifiability. It’s easy to generate claims whose proof would only be constituted by fact-checking against every physical thing. This is a far cry from a decision-theoretic claim where, though we can’t have perfect evidence, we can make useful quantifications of the evidence we do have and our uncertainty about it.
The empty set has many interesting properties.
It’s impossible to quantify your claim without having all of the evidence up front.
What I’m trying to say is that I can test the hypothesis of whether or not there is a physical referent. If someone says to me, “Is there or isn’t there a physical referent?” and I have to respond, then I have to do so on the strength of evidence alone. I may not be able to provide a referent explicitly, but I know that non-zero probability can be assigned to a physical system in which cognitive objects are placeholders for complicated sets of matter and governing laws of physics. I cannot make the same claim about the hypothesis that cognitive objects do not have utterly physical referents, and therefore, whether or not I have explicit examples of referents, the hypothesis that there must be underlying physical referents wins hands down.
The criticism you’re making of me, that I insist there are referents without supplying the actual referents, is physically backwards in this case. For example, someone might say “consciousness is a process that does not correspond to any physically existing thing.” If I then reply,
“But consciousness is a property of material and varies directly with changes in that material (or some similar, more detailed argument about cognition), and therefore, I can assign non-zero probability to its being a physical computation, and since I do not have the capacity to assign probabilities to non-physical entities, the hypothesis that consciousness is physical wins.”
this is a convincing argument, up to the quantification of the evidence. If you personally don’t feel like it’s convincing, that’s fine, but then you’re outside of decision theory and the claim you’re making contains literally no semantic information.
The same can be said of the referent of a belief. I think you’re failing to appreciate that you’re making the very mistake you’re claiming that I am making. You’re just asserting that beliefs can’t plausibly correspond to physically existing things. That’s just an assertion. It might be a good assertion or might not even be a coherent assertion. In order to check, I am going to go draft up some sort of probability model that relates that claim to what I know about thoughts and beliefs. Oh, snap, when I do that, I run into the unfortunate wall that if beliefs don’t have physical referents, then talking about their referents at all suddenly has no physical meaning. Therefore, I will stick with my hypothesis that they are physical, pending explicit evidence that they aren’t.
The convincingness of the argument lies in the fact that one side of this can be made quantitative and experimentally relevant and the other cannot. The burden of proof, as I view it in this situation, is on making a non-zero probability connection between beliefs and some type of referent. I don’t see anything in your argument that prevents this connection being made to physical things. I do, however, fail to see any part of your argument that makes this probabilistic connection with non-physical referents.
Maybe it’s better to think of it like an argument from non-cognitivism. You’re trying to make up a solution space for the problem (non-physical referents) that is incompatible with the whole system in which the problem takes place (physics). Until you make an explicit physical definition of what a “non-physical referent” actually is, then I will not entertain it as a possible hypothesis.
Ultimately, even though your epistemology is more complicated, your argument might as well be: beliefs are pointers to magical unicorns outside of space and time, and these magical unicorns are what determine human values. ‘Non-physical referents’ simply are not. I can’t assign a probability to the existence of something which is itself hypothesized to fail to exist, since existence and “being a physical part of reality” are one and the same thing.
That’s the good kind of claim, the falsifiable kind, like the Law of Universal Gravitation. That’s the kind of claim I’m making.
Your argument seems to depend on the idea that the only way to evaluate a claim is to list the physical universes in which it is true and the physical universes in which it is not true.
This, obviously, is circular.
Do you acknowledge that your reasoning is circular and defend it, presumably with Eliezer’s defense of circular reasoning? Or do you claim that it is not circular?
Sure you can. You take a world, find all the cognitive objects in it, then find all the corresponding physical referents, cross those objects off the list.
I am saying that there are beliefs (strings of symbols with meaning), endowed with meaning by their place in a functional mind, but for which the set of physical referents they correspond to is the empty set.
Surely you can admit the existence of strings of symbols without physical referents, like this one: “fj4892fjsoidfj390ds;j9d3”. There’s nothing non-physical about it.
If “X” is quantitative and experimentally relevant, how could “not-X” be irrelevant? If X makes predictions, how could not-X not make the opposite predictions?
Who said that all beliefs have referents?
My claim is that if one had really done this, then by definition of “find”, they have the physical referents for the cognitive objects. If a cognitive object has the empty set as its set of physical referents, then it is the null cognitive object. The string of symbols “fj4892fjsoidfj390ds;j9d3” might have no meaning to you when thinking in English, say, but then it is just an instantiation of the null cognitive object, like any string of symbols that fails to point to a physical referent.
I’m trying to say that if the cognitive object is to be considered as pointing to something, that is, it is in some sense not the null cognitive object, then the thing which is its referent is physical. It’s incoherent to say that a string of symbols refers to something that’s not physical. What do you mean by ‘refer’ in that setting? There is no existing thing to be referred to, hence the symbol does no action of referring. So when someone speaks about “X” being right or wrong, either they are speaking about physical events or else “X” fails to be a cognitive object.
I claim that my reasoning is not circular.
It depends on what you mean by ‘evaluate’. What I’m taking for that definition right now is that if I want to assess whether proposition P is true, then I can only do so in a setting of decision theory and degrees of evidence and uncertainty. This means that I need a model for the proposition P and a way of assigning probabilities to the various hypotheses about P. In this case, P = “Some cognitive objects have referents that are not the null referent and are also not physical”. I claim that all referents are either physical or else they are the null referent. The set of non-physical referents is empty.
Just because a string fails to have a physical referent does not mean that it succeeds in having a non-physical one. What evidence do I have that there exist non-physical referents? What model of cognitive objects exists with which it is possible to obtain experimental evidence of a non-physical referent?
What do you mean by ‘endowed meaning’? If a cognitive object has no physical referent, to me, that is the definition of meaningless. It fails to correspond to reality.
This is the heart of the matter. You are saying that the only relevant properties of a cognitive object are its referents. Thus, no referents = no relevant properties = null object.
I say that, on the contrary, a cognitive object has other relevant properties. One such property is its place in the code of a brain.
Imagine an AI with a set of objects that it marks as either “true” or “false”. These objects have no referents, but they influence the formation of objects with referents. I think it’s fair to say that:
Such an AI could exist, with the objects having referents/no referents as appropriate.
These objects are not just the null object.
The AI is thinking irrationally.
Now imagine an AI with a set of objects that it marks as either “true” or “false”. These objects have no referents, but they influence the AI’s choices (a toy sketch of this setup follows these three points). I think it’s fair to say that:
Such an AI could exist, with the objects having referents/no referents as appropriate.
These objects are not just the null object.
The AI could be thinking rationally.
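Here is a minimal sketch of that second scenario (my own toy construction; the class name, tokens, and scoring rule are all invented for illustration): an agent carries internal objects labeled true or false that point at nothing outside itself, yet the labels still steer its choices.

```python
class LabeledObjectAgent:
    """Toy agent holding internal objects marked true/false that have no
    external referents; the labels nonetheless influence its choices."""

    def __init__(self, labeled_objects):
        # Deliberately meaningless tokens, e.g. {"glorp": True, "zib": False}.
        self.labeled_objects = labeled_objects

    def choose(self, options):
        # Arbitrary rule: prefer the option sharing letters with a true-labeled
        # token. The rule refers to nothing in the world, which is the point.
        def score(option):
            return sum(
                1
                for token, label in self.labeled_objects.items()
                if label and set(token) & set(option)
            )
        return max(options, key=score)

agent = LabeledObjectAgent({"glorp": True, "zib": False})
print(agent.choose(["go left", "duck"]))  # prints "go left"
```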
I think that the meaning of “The AI could be thinking rationally” is that it could turn out to be the case that the objects labeled true and false have a correspondence to physically existing things and that correspondence allows the A.I. to construct decision rules which correspond to reality within some computable range of uncertainty.
If we are unable to map the inputs to the A.I.’s decision process (in this case objects labeled true or false and whose referents, if any, are unknown to us at the start) back to physical reality, then it is still mysterious to us and we can’t claim that it’s rational in any sense other than pure statistical experience (it could just be that when asked to make a series of decisions using the true/false labeled objects, the A.I. got incredibly lucky).
In order to conclude (in any more than a superficial way) that the A.I. is rational, there must be an explicit correspondence between the labeled objects and the physics world, and hence we would have found their referents. If this is, in principle, an impossible task (as you claim), then the concept of rationality doesn’t apply to the A.I. In what sense would it be said to actually be rational, rather than just produce a sequence of outputs that appear to be rational to us for mysterious reasons?
I think that the meaning of “The AI could be thinking rationally” is that it could turn out to be the case that the objects labeled true and false are integral components of a program that provably calculates rational decisions every time.
A proof that a program calculates rational decisions every time necessarily provides the physical referents of its calculation. There’s no difference between knowing that a program calculates rational decisions every time and knowing how it is that it calculates rational decisions every time. If you don’t know the explicit correspondence between its calculations and reality then your state of knowledge cannot include the fact that the program always yields rational conclusions. You can have degrees of certainty that it is rational without having full knowledge of its referents, but not factual knowledge as in a mathematical proof.
It may be that a slick mathematical argument reduces the connection to symbols that don’t readily convey the physical connection, but if your state of knowledge (brain chemistry) is updated to include special knowledge of the rationality of an agent, then there is entanglement between you and that agent, for that is what knowledge is. You can’t know that an agent is rational without knowing the physical connection between its cognitive objects and reality. To whatever degree you lack knowledge about the physical referents of its cognitive objects, that is the degree to which you lack knowledge about whether or not it is rational.
Hm? It’s easy to form beliefs about things that aren’t physical. Suppose I tell you that the infinite cardinal aleph-1 is strictly larger than aleph-0. What’s the physical referent of the claim?
I’m not making a claim about the messy physical neural structures in my head that correspond to those sets—I’m making a claim about the nonphysical infinite sets.
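For concreteness, the standard formal content of that cardinality claim (spelled out here; it is not written out in the comment itself) is:

```latex
% By definition, aleph_1 is the least cardinal strictly greater than aleph_0,
% so the inequality is immediate; Cantor's theorem gives the related fact
% that the continuum is also strictly bigger than aleph_0.
\[
  \aleph_1 = \min\{\kappa : \kappa > \aleph_0\}
  \;\Longrightarrow\;
  \aleph_0 < \aleph_1 ,
  \qquad
  \aleph_0 = |\mathbb{N}| < |\mathcal{P}(\mathbb{N})| = 2^{\aleph_0}.
\]
```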
Likewise, I can make all sorts of claims about fictional characters. Those aren’t claims about the physical book, they’re claims about its nonphysical implications.
Why do you think that nonphysical implications are ontologically existing things? I argue that what you’re trying to get at by saying “nonphysical implications” is actually just quantified subsets of matter. Ideas, however abstract, refer to arrangements of matter. The vision in your mind when you talk about aleph-1 is of a physically existing thing. When’s the last time you imagined something that wasn’t physical? A unicorn? You mean a horse with a horn glued onto it? Mathematical objects represent states of knowledge, which are as physical as anything else. The color red refers to a particular band of frequencies of light and the physical processes by which it is a common human experience. There is no idea of what red is apart from this. Red is something different to a blind man than it is to you, but by speaking about your physical referent, the blind man can construct his own useful physical referent.
Claims about fictional characters are no better. What do you mean by Bugs Bunny other than some arrangement of colors brought to your eyes by watching TV in the past? That’s what Bugs Bunny is. There’s no separately existing entity which is Bugs Bunny that can be spoken about as if it ontologically was. Every person who refers to Bugs Bunny refers to physical subsets of matter from their own experience, whether that’s because they witnessed the cartoon and were told through supervised learning which cognitive object to attach it to, or because they heard about it later through secondhand experience. A blind person can have a physical referent when speaking about Bugs Bunny, albeit one that I have a very hard time mentally simulating.
In any case, merely asserting that something fails to have a physical referent is not a convincing reason to believe so. Ask yourself why you think there is no physical referent and whether one could construct a computational system that behaves that way.
No.
I have no very firm ontological beliefs. I don’t want to make any claim about whether fictional characters or mathematical abstractions “really exist”.
I do claim that I can talk about abstractions without there being any set of physical referents for that abstraction. I think it’s utterly routine to write software that manipulates things without physical referents. A type-checker, for instance, isn’t making claims about the contents of memory; it’s making higher-order claims about how those values will be used across all possible program executions—including ones that can’t physically happen.
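A small sketch of that point (my own example, assuming a conventional static checker such as mypy; none of this appears in the original comment): the call below is rejected by the checker even though the branch containing it can never execute.

```python
def celsius_to_fahrenheit(celsius: float) -> float:
    """The annotation is a claim about every call site, not about any
    particular runtime state of memory."""
    return celsius * 9 / 5 + 32

if 2 + 2 == 5:
    # This branch never runs, yet a static checker (e.g. mypy) still flags the
    # call: its claim quantifies over all possible executions, including ones
    # that cannot physically happen.
    celsius_to_fahrenheit("not a number")  # rejected statically, harmless at runtime
```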
I would cheerfully agree with you that the cognitive process (or program execution) is carried out by physical processes. Of course. But the subject of that process isn’t the mechanism. There’s nothing very strange about this, as far as I can tell. It’s routine for programs and programmers to talk about “infinite lists”; obviously there is no such thing in the physical world, but it is a very useful abstraction.
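And a small sketch of the infinite-list point (again my own illustration, using a Python generator as the lazy stand-in): the program manipulates a conceptually infinite sequence even though nothing infinite ever exists in memory.

```python
from itertools import count, islice

# count(1) plays the role of the infinite list 1, 2, 3, ...; only the finitely
# many elements actually demanded are ever realized in physical memory.
evens = (2 * n for n in count(1))
print(list(islice(evens, 5)))  # [2, 4, 6, 8, 10]
```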
By the way, I think your Bugs Bunny example fails. When I talk to somebody about Bugs Bunny, I am able to make myself understood. The other person and I are able to talk, in every sense that matters, about the same thing. But we don’t share the same mental states. Conversely, my mental picture isn’t isomorphic to any particular set of photons; it’s a composite. Somehow, that doesn’t defeat practical communication.
The case might be clearer for purely literary characters. When I talk about the character King Lear, I certainly am not saying something about the physical copy I read! Consider the perfectly ordinary (and true) sentence “King Lear had three daughters.” That’s not a claim about ink, it’s a claim about the mental models created in competent speakers of English by the work (which itself is an abstraction, not a physical thing). Those models are physically embodied, but they are not physical things! There’s no set of quarks you can point to and say “there’s the mental model.”
This is where we disagree. Those mental models are simply arrangements of matter. The fact that it feels like you’re referring to something separate from an arrangement of matter-memory in your brain is another matter altogether. The reason practical communication works at all is that there is an enormous amount of mutual information between the set of features you use to categorize the physical memory of, say, Bugs Bunny, and the features used to categorize Bugs in someone else’s mind. You can reference your brain’s physical memory in a way that causes another person’s physical memory to reference something; if an algorithm then adjusts the match between these concepts until their mutual information is maximized, and common experience forms all sorts of additional memories about what wound up being referenced, it is not surprising at all that a purely physical model of concepts would allow communication. I don’t see how anything you’ve said amounts to more than an assertion that it feels to you as if abstractions are not simply the brain matter that they are made out of in your mind. That’s not a convincing reason for me to think abstractions have ontological properties. I think the hypothesis that it just feels that way, because my brain is made of meat and I can’t look at the wiring schematics, is more likely.
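To make the mutual-information picture slightly more concrete, here is a toy estimate (entirely my own construction, with invented labels): two brains tag the same stimuli with different internal tokens, and the information their encodings share is what communication gets to exploit.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired samples."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

# Two brains tag the same five stimuli with different internal tokens, but the
# tokens co-vary, so the two encodings carry information about each other.
stimuli = ["rabbit", "rabbit", "duck", "rabbit", "duck"]
brain_a = ["pattern7", "pattern7", "pattern2", "pattern7", "pattern2"]
brain_b = ["cellX", "cellX", "cellY", "cellX", "cellY"]

print(mutual_information(brain_a, stimuli))  # each encoding tracks the stimuli (~0.97 bits)
print(mutual_information(brain_a, brain_b))  # and the encodings track each other (~0.97 bits)
```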
This is starting to feel like a shallow game of definition-bending. I don’t think we’re disagreeing about any testable claim. So I’m not going to argue about why your definition is wrong, but I will describe why I think it’s less useful in expressing the sorts of claims we make about the world.
When we talk about whether two mental models are similar, the similarity function we use is representation-independent. You and I might have very similar mental models, even if you are thinking with superconducting wires in liquid helium and our physical brains have nothing in common. Not being willing to talk honestly about abstractions makes it hard to ask how closely aligned two mental models are—and that’s a useful question to ask, since it helps predict speech-acts.
Conversely, saying that “everything is a physical property” deprives us of what was previously a useful category. A toaster is physical in a way that an eight-dimensional vector space is not and in a way that a not-yet-produced toaster is not. I want a word to capture that difference.
In particular, “physical object”, as most of the world uses the term, means an object with position and mass that evolve in predictable ways. It’s sensible to ask what a toaster weighs. It’s not sensible to ask what a mental model weighs.
I think your definitions here mean that you can’t actually explain ordinary ostensive reference. There is a toaster over there, and a mental model over here, and there is some correspondence. And the way most of the world uses language, I can have the same referential relationship to a fictional person as to a real person, as to a toaster.
And I think I’m now done with the topic.
First, I didn’t say anything at all about the usefulness of treating abstractions the way we do. I don’t believe in actual free will but I certainly believe that the way we walk around acting as if free will was a real attribute that we have is very useful. You can arrange a network of neurons in such a way that it will allow identification of a concept, and we use natural language to talk about this sort of arrangement of matter. Talking about it that way is just fine, and indeed very useful. But this thread was about a defense of metaethics and partially about the defense of beliefs as non-physical, but still really existing, entities. For purposes of debating that point, I think it starts to matter whether someone does or does not recognize that concepts are just arrangements of matter: information which can be extracted from brain states but does not in and of itself point to any actual, ontological entity.
I think I am quite willing to talk about abstractions and their usefulness … just not willing to agree that they are fundamental parts of reality rather than merely hallucinations in the same way that free will is.
In conversations about the ontology of physical categories, it’s better to say that the category of toasters in my brain is just a pattern of matter that happens to score high correlations with image, auditory, and verbal feature vectors generated by toasters. In conversations about making toast, it’s better to talk about the abstraction of the category of toasters as if it were itself something.
It’s the same as talking about the wing of an airplane.
Thank you, that explained where you were coming from.
But I don’t see that any of this ontology gets you the meta-ethical result you want to show. I think all you’ve shown is that ethical claims aren’t more true than, say, mathematical truth or physical law. But by any normal standard, “as true as the proof of Fermat’s last theorem” is a very high degree of truth.
I think to get the ethical result you want, you should be showing that moral terms are strictly less meaningful than mathematical ones. Certainly you need to somehow separate mathematical truth from “ethical truth”—and I don’t see that ontology gets you there.
Actually, I am opposed to the argument for an ontology of belief, which is why I was trying to argue that beliefs are encoded states of matter. If I assert that “X is wrong” it must mean I assert “I believe X is wrong” as well. If I assert “I believe X is wrong” but don’t assert “X is wrong”, something is clearly amiss. As pointed out here, beliefs are reflections of best available estimates about physically existing things. If I do assert that I believe X is wrong but don’t assert that X is wrong, then either I am lying about the belief, or there is some muddling of definitions and maybe I mean some local version of X or some local version of “wrong”, or I am unaware of my actual state of beliefs (possibly due to insanity, etc.). But my point is that in a sane person, from that person’s first-person experience, the two statements “I believe X is wrong” and “X is wrong” contain exactly the same information about the state of my brain. They are the same statement.
My point in all this was that “I believe X is wrong” has the same first-person referent as “X is wrong”. If X = murder, say, and I assert that “murder is wrong”, then once you unpack whatever definitions in terms of physical matter and consequence that I mean by “murder” and “wrong”, you’re left with a pointer to a physical arrangement of matter in my brain that resonates when feature vectors of my sensory input correlate with the pattern that stores “murder” and “wrong” in my brain’s memory. It’s a physical thing. The wrongness of murder is that thing, it isn’t an ontological concept that exists outside my brain as some non-physical attribute of reality. Even though other humans have remarkably similar brain-matter-patterns of wrongness and murder, enough so that the mutual information between the pattern allows effective communication, this doesn’t suddenly cause the idea that murder is wrong to stop being just a local manifestation in my brain and start being a separate idea that many humans share pointers to.
If someone wanted to establish metaethical claims based on the idea that there exist non-physical referents being referred to by common human beliefs, and that this set of referents somehow reflects an inherent property of reality, I think this would be misguided and experimentally either not falsifiable or at the very least unsupported by evidence. I don’t guess that this makes too much practical difference, other than being a sort of Pandora’s box for religious-type reasoning (but what isn’t?).
I think the more salient examples that make this question hard are not going to come out of dreaming up something increasingly abstract. The more puzzling cognitive objects to explain are the ones you get by applying unphysical transformations to obvious objects, like taking a dog and imagining it stretched out to the length of a football field, or a person with a torus-like hole in their abdomen. But these are simply images in the brain. That the semantic content of the image can be interpreted as a strange union of other cognitive objects is not a reason to think that the cognitive object itself isn’t physical.