There is a large difference between knowing the meaning of a word and knowing its definition. You know perfectly well how to use ordinary words like “knowledge” or “game”; in that sense you understand what they mean. Yet you almost certainly don’t know an adequate (necessary and sufficient) definition for them, i.e. one that doesn’t suffer from counterexamples. In philosophy those are somewhat famous cases of words that are hard to define, but most words from natural language could be chosen instead.
That’s not to say that definitions are useless, but they’re not something we need when evaluating most object-level questions. Answering “Do you know where I left my keys?” doesn’t require a definition of “knowledge”. Answering “Is believing in ghosts irrational?” doesn’t require a definition of “rationality”. And answering “Is eating Bob’s lunch bad?” doesn’t require a definition of “bad”.
Attempting to find such definitions is called philosophy, or conceptual analysis specifically. It helps with abstract reasoning by finding relations between concepts. For example, when asked explicitly, most people can’t say how knowledge and belief relate to each other (I have tried asking). Philosophers would reply that knowledge implies belief but not the other way round, or that belief is internal while knowledge is (partly) external. In some cases knowing this is kind of important, but usually it isn’t.
What anticipated experiences come about from the belief that something is “good” or “bad”? This is the critical question, and I have not seen a satisfactory answer to it from moral realists (Eliezer himself does have an answer to this on the basis of CEV, but that is a longer discussion for another time).
Well, why not try to answer it yourself? I’d say evidence that something is “good” is, roughly, that we can expect it to increase general welfare, such as people being happy or able to do what they want. I directionally agree with EY’s extrapolated volition explication of goodness (I linked to it in a neighboring comment). As he mentions, several philosophers have provided similar analyses.
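To put that in anticipation-of-experience terms (a rough gloss on my claim, not a precise analysis): learning that some action X is good should raise the probability you assign to observing welfare-relevant outcomes, i.e.

\[
P(\text{welfare-relevant outcomes observed} \mid X \text{ is good}) \;>\; P(\text{welfare-relevant outcomes observed} \mid X \text{ is not good}).
\]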
You know perfectly well how to use ordinary words like “knowledge” or “game”; in that sense you understand what they mean. Yet you almost certainly don’t know an adequate (necessary and sufficient) definition for them, i.e. one that doesn’t suffer from counterexamples.
It is interesting that you chose the example of “knowledge”, because I think that is yet another illustration of the complete opposite of the position you are arguing for. I was not born with an intuitive understanding of Bayesianism, for example. However, I now consider anyone who hasn’t grasped Bayesian thinking (such as previous versions of me), yet is nonetheless trying to reason seriously about what it means to know something, to be terribly confused and unlikely to achieve anything meaningful in any non-intuitive context where formalizing or using precise meanings of knowledge is necessary. I would thus say that the vast majority of people who use ordinary words like “knowledge” don’t understand what they mean (or, to be more precise, they don’t understand the concepts that result from carving reality at its joints in a coherent manner).
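(For concreteness, the formal core I have in mind here is nothing more exotic than Bayes’ rule for updating a credence in a hypothesis H on evidence E:

\[
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}.
\]

On this picture, “knowing” something is a matter of holding credences that evidence has pushed toward the truth in this way, rather than of satisfying a verbal definition.)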
That’s not to say that definitions are useless, but they’re not something we need when evaluating most object-level questions.
I don’t care about definitions per se. The vast majority of human concepts and mental categories don’t work on the basis of necessary and sufficient conditions anyway, so an inability to supply a fully generalizable definition for something says far more about the fundamental failings of our inadequate language than about problems with our concept formation. Nevertheless, informal and non-rigorous thinking about concepts can easily lead to confusion and to the reification of ultimately nonsensical ideas if those concepts are not subjected to enough critical analysis along the way.
or conceptual analysis specifically
Given my previous paragraph, I don’t think you would be surprised to hear that I find conceptual analysis to be virtually useless and a waste of resources, for basically the reasons laid out in detail by @lukeprog in “Concepts Don’t Work That Way” and “Intuitions Aren’t Shared That Way” almost 12 years ago. His (in my view incomplete) sequence on Rationality and Philosophy is as much a part of LW’s core as Eliezer’s own Sequences are, so while reasonable disagreement with it is certainly possible, I start with a very strong prior that it is correct, for purposes of our discussion.
Well, why not try to answer it yourself?
Well, I have tried to answer it myself, and after thinking about it very seriously and reading what people on all sides of the issue have thought about it, I have come to the conclusion that concepts of “moral truth” are inherently confused, pay no rent in anticipated experiences, and are based upon flaws in thinking that reveal how common-sensical intuitions are totally unmoored from reality when you get down to the nitty-gritty of it. Nevertheless, given the importance of this topic, I am certainly willing to change my mind if presented with evidence.
I’d say evidence that something is “good” is, roughly, that we can expect it to increase general welfare, such as people being happy or able to do what they want.
That might well be evidence (in the Bayesian sense) that a given act, value, or person belongs to a certain category onto which we slap the label “good”. But it has little to do with my initial question. We have no reason to care about the property of “goodness” at all unless we believe that knowing something is “good” gives us powerful evidence that lets us anticipate experiences and constrain our expectations about the territory around us. Otherwise, “goodness” is just an arbitrary bag of things, no more useful than the category of “bleggs” that is generated for no coherent reason whatsoever, or the random category “r398t” that I just made up, which contains only apples, weasels, and Ron Weasley. Indeed, we would not even have enough reason to raise the question of what “goodness” is in the first place.
To take a simple illustration of the difference between the conditions for membership in a category and the anticipated experiences that result from “knowing” that something is a member of that category, consider groups in mathematics. The definition of a group is “a set together with a binary operation that satisfies the axioms of associativity, identity, and inverses.” But we don’t care about groups for reasons that deal only with these axioms; on the contrary, groups matter because they help model important situations in reality (such as symmetry groups in physics) and because we can tell a lot about the nature and structure of groups through mathematical reasoning. The fact that finite simple groups can be classified in a clear and concise manner is a consequence of their definition (not a formal precondition for membership), and it allows us to anticipate with extremely high (although not full) certainty that any finite simple group G we examine will be isomorphic to a cyclic group of prime order, an alternating group, a group of Lie type, or one of the 26 sporadic groups.
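To spell the membership conditions out in their standard form: a group is a set G with an operation · : G × G → G such that, for all a, b, c in G and some identity element e,

\[
(a \cdot b) \cdot c = a \cdot (b \cdot c), \qquad
e \cdot a = a \cdot e = a, \qquad
a \cdot a^{-1} = a^{-1} \cdot a = e,
\]

where each a has an inverse a^{-1} in G. Nothing in these conditions mentions physics or classification; those are things we subsequently learn about the objects that satisfy them, which is exactly the distinction I am drawing.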
I don’t understand your point about anticipated experience. If I believe some action is good, I anticipate that doing it will produce evidence (experience) indicative of increased welfare. That is exactly unlike believing something to be a “blegg”. Regarding mathematical groups, whether or not we care about them for their usefulness in physics seems irrelevant to whether “group” has a specific meaning. Like, you may not care about horses, but you still anticipate a certain visual experience when someone tells you they bought you a horse and it’s right outside. And for a group you’d anticipate that it turns out to satisfy associativity and so on.