This is starting to feel like a shallow game of definition-bending. I don’t think we’re disagreeing about any testable claim.
So I’m not going to argue about why your definition is wrong, but I will describe why I think it’s less useful in expressing the sorts of claims we make about the world.
When we talk about whether two mental models are similar, the similarity function we use is representation-independent. You and I might have very similar mental models, even if you are thinking with superconducting wires in liquid helium and our physical brains have nothing in common. Not being willing to talk honestly about abstractions makes it hard to ask how closely aligned two mental models are—and that’s a useful question to ask, since it helps predict speech-acts.
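To make “representation-independent” concrete, here is a minimal sketch (the models and probe inputs are invented for illustration): the two “models” below store the same mapping in completely different ways, and the similarity function never looks inside either one; it only compares behavior.

```python
# A minimal sketch of a representation-independent similarity measure.
# The two "mental models" below implement the same mapping with totally
# different internals (a lookup table vs. an arithmetic rule); the
# similarity function only probes input-output behavior.

def table_model(x):
    # Internals: an explicit lookup table ("neurons").
    return {0: 0, 1: 2, 2: 4, 3: 6}[x]

def rule_model(x):
    # Internals: a closed-form rule ("superconducting wires").
    return 2 * x

def behavioral_similarity(model_a, model_b, probes):
    """Fraction of probe inputs on which the two models agree."""
    probes = list(probes)
    agreements = sum(model_a(p) == model_b(p) for p in probes)
    return agreements / len(probes)

print(behavioral_similarity(table_model, rule_model, range(4)))  # 1.0
```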
Conversely, saying that “everything is a physical property” deprives us of what was previously a useful category. A toaster is physical in a way that an eight-dimensional vector space is not and in a way that a not-yet-produced toaster is not. I want a word to capture that difference.
In particular, “physical object”, as most of the world uses the term, means an object with a position and mass that evolve in predictable ways. It’s sensible to ask what a toaster weighs. It’s not sensible to ask what a mental model weighs.
I think your definitions here mean that you can’t actually explain ordinary ostensive reference. There is a toaster over there, and a mental model over here, and there is some correspondence. And the way most of the world uses language, I can have the same referential relationship to a fictional person as to a real person, as to a toaster.

And I think I’m now done with the topic.
> When we talk about whether two mental models are similar, the similarity function we use is representation-independent. …
>
> Not being willing to talk honestly about abstractions makes it hard to ask how closely aligned two mental models are—and that’s a useful question to ask, since it helps predict speech-acts.
>
> Conversely, saying that “everything is a physical property” deprives us of what was previously a useful category. A toaster is physical in a way that an eight-dimensional vector space is not and in a way that a not-yet-produced toaster is not. I want a word to capture that difference.
First, I didn’t say anything at all about the usefulness of treating abstractions the way we do. I don’t believe in actual free will, but I certainly believe that the way we walk around acting as if free will were a real attribute that we have is very useful. You can arrange a network of neurons in such a way that it will allow identification of a concept, and we use natural language to talk about this sort of arrangement of matter. Talking about it that way is just fine, and indeed very useful.

But this thread was about a defense of metaethics and partially about the defense of beliefs as non-physical, but still really existing, entities. For purposes of debating that point, I think it starts to matter whether someone does or does not recognize that concepts are just arrangements of matter: information which can be extracted from brain states but does not in and of itself point to any actual, ontological entity.
I think I am quite willing to talk about abstractions and their usefulness … just not willing to agree that they are fundamental parts of reality rather than merely hallucinations, in the same way that free will is.
In conversations about the ontology of physical categories, it’s better to say that the category of toasters in my brain is just a pattern of matter that happens to score high correlations with image, auditory, and verbal feature vectors generated by toasters. In conversations about making toast, it’s better to talk about the abstraction of the category of toasters as if it were itself something.
It’s the same as talking about the wing of an airplane.
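To make that first register concrete, here is a toy sketch; the feature names and numbers are made up. The stored “category of toasters” is just a pattern (a vector), and recognition is nothing more than a correlation score against incoming feature vectors.

```python
import math

# Toy sketch: the "category of toasters" as a stored pattern that scores
# high correlation with feature vectors generated by toasters. Feature
# names and numbers are invented for illustration.

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

# Stored pattern over features [metallic, has_slots, smells_of_toast, flies].
toaster_pattern = [0.9, 0.8, 0.7, 0.0]

seeing_a_toaster = [0.8, 0.9, 0.6, 0.1]    # feature vector from a toaster
seeing_an_airplane = [0.7, 0.0, 0.0, 0.9]  # feature vector from an airplane

print(cosine_similarity(toaster_pattern, seeing_a_toaster))    # ~0.99
print(cosine_similarity(toaster_pattern, seeing_an_airplane))  # ~0.40
```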
> But this thread was about a defense of metaethics and partially about the defense of beliefs as non-physical, but still really existing, entities. For purposes of debating that point, I think it starts to matter whether someone does or does not recognize that concepts are just arrangements of matter: information which can be extracted from brain states but does not in and of itself point to any actual, ontological entity.
Thank you, that explained where you were coming from.
But I don’t see that any of this ontology gets you the meta-ethical result you want to show. I think all you’ve shown is that ethical claims aren’t more true than, say, mathematical truth or physical law. But by any normal standard, “as true as the proof of Fermat’s last theorem” is a very high degree of truth.
I think to get the ethical result you want, you should be showing that moral terms are strictly less meaningful than mathematical ones. Certainly you need to somehow separate mathematical truth from “ethical truth”—and I don’t see that ontology gets you there.
Actually, I am opposed to the ontology-of-belief argument, which is why I was trying to argue that beliefs are encoded states of matter. If I assert that “X is wrong”, it must mean I assert “I believe X is wrong” as well. If I assert “I believe X is wrong” but don’t assert “X is wrong”, something’s clearly amiss. As pointed out here, beliefs are reflections of best available estimates about physically existing things. If I do assert that I believe X is wrong but don’t assert that X is wrong, then either I am lying about the belief, or there’s some muddling of definitions and maybe I mean some local version of X or some local version of “wrong”, or I am unaware of my actual state of beliefs (possibly due to insanity, etc.).

But my point is that in a sane person, from that person’s first-person experience, the two statements “I believe X is wrong” and “X is wrong” contain exactly the same information about the state of my brain. They are the same statement.
My point in all this was that “I believe X is wrong” has the same first-person referent as “X is wrong”. If X = murder, say, and I assert that “murder is wrong”, then once you unpack whatever definitions in terms of physical matter and consequence I mean by “murder” and “wrong”, you’re left with a pointer to a physical arrangement of matter in my brain that resonates when feature vectors of my sensory input correlate with the pattern that stores “murder” and “wrong” in my brain’s memory. It’s a physical thing. The wrongness of murder is that thing; it isn’t an ontological concept that exists outside my brain as some non-physical attribute of reality. Even though other humans have remarkably similar brain-matter patterns of wrongness and murder, enough so that the mutual information between the patterns allows effective communication, this doesn’t suddenly cause the idea that murder is wrong to stop being just a local manifestation in my brain and start being a separate idea that many humans share pointers to.
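That last claim, that mutual information between two brains’ patterns is what makes the word “wrong” communicate at all, can be made quantitative. Here is a toy sketch with an invented joint distribution over whether my pattern and your pattern fire on the same input:

```python
import math

# Toy sketch: mutual information between two people's "X is wrong" patterns.
# The joint distribution is invented; the point is only that communication
# works exactly to the degree that the two patterns are correlated.

# joint[a][b] = P(my pattern fires = a, your pattern fires = b)
joint = [[0.45, 0.05],
         [0.05, 0.45]]

def mutual_information(joint):
    p_mine = [sum(row) for row in joint]
    p_yours = [sum(col) for col in zip(*joint)]
    mi = 0.0
    for a, row in enumerate(joint):
        for b, p_ab in enumerate(row):
            if p_ab > 0:
                mi += p_ab * math.log2(p_ab / (p_mine[a] * p_yours[b]))
    return mi

print(mutual_information(joint))  # ~0.53 bits; independent patterns give 0
```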
If someone wanted to establish metaethical claims based on the idea that there exist non-physical referents being referred to by common human beliefs, and that this set of referents somehow reflects an inherent property of reality, I think this would be misguided and experimentally either not falsifiable or at the very least unsupported by evidence. I don’t guess that this makes too much practical difference, other than being a sort of Pandora’s box for religious-type reasoning (but what isn’t?).