Imagine you say that believing X is irrational, and I reply, “I don’t believe in ‘rational realism’; I think ‘rational’ is a vague term. Can you give me a definition of ‘rational’, please?” That reply would be absurd. Of course I know what rational means; I just can’t define it. But then, we humans can hardly define any natural-language terms at all.
I don’t think I could disagree any more strongly about this. In fact, I am kind of confused about your choice of example, because ‘rationality’ seems to me like such a clear counter to your argument. It is precisely the type of slippery concept that is portrayed inaccurately (relative to LW terminology) in mainstream culture and thus inherently requires a more rigorous definition and explanation. This was so important that “the best intro-to-rationality for the general public” (according to @lukeprog) specifically addressed the common misconception that being rational means being a Spock-like Straw Vulcan. It was so important that one of the crucial posts in the first Sequence by Eliezer spends almost 2000 words defining rationality. So important that, 14 years later, @Raemon had to write yet another post (with 150 upvotes) explaining what rationality is not, as a result of common and lasting confusions by users on this very site (presumably coming about as a result of the original posts not clarifying matters sufficiently).
What about the excellent and important post “Realism about Rationality” by Richard Ngo, which expresses “skepticism” about the mindset he calls “realism about rationality,” thus disagreeing with others who do think “this broad mindset is mostly correct, and the objections outlined in this essay are mostly wrong” and argue that “we should expect a clean mathematical theory of rationality and intelligence to exist”? Do you “of course know what rationality means” if you cannot settle as important a question as this? What about Bryan Caplan’s arguments that a drug addict who claims they want to stop buying drugs but can’t prevent themselves from doing so is actually acting perfectly rationally, because, in reality, their revealed preferences show that they really do want to consume drugs, and are thus rationally pursuing those goals by buying them? Caplan is a smart person expressing serious disagreement with the mainstream, intuitive perceptions of rationality and human desires; this strongly suggests that rationality is indeed, as you put it, “ambiguous or somehow defective and in need of an explicit definition.”
It wouldn’t be wrong to say that LessWrong was built to advance the study of rationality, both as it relates to humans and to AI. The very basis of this site and of the many Sequences and posts expanding upon these ideas is the notion that our understanding of rationality is currently inadequate and needs to be straightened out.
That I believe something is good doesn’t mean that I feel positively toward myself, that I like it, that I’m cheering for myself, or that I’m booing my past self if I’ve changed my mind. Sometimes I may also simply wonder whether something is good or bad (e.g. eating meat), which arguably makes no sense under those interpretations.
What anticipated experiences come about from the belief that something is “good” or “bad”? This is the critical question, to which I have not seen a satisfactory answer from moral realists (Eliezer himself does have an answer on the basis of CEV, but that is a longer discussion for another time). And if there is no answer, then the concept of “moral facts” becomes essentially useless, like any other belief that pays no rent.
There is a large difference between knowing the meaning of a word, and knowing its definition. You know perfectly well how to use ordinary words like “knowledge” or “game”, in that sense you understand what they mean, yet you almost certainly don’t know an adequate (necessary and sufficient) definition for them, i.e. one that doesn’t suffer from counterexamples. In philosophy those are somewhat famous cases of words that are hard to define, but most words from natural language could be chosen instead.
That’s not to say that definition is useless, but it’s not something we need when evaluating most object level questions. Answering “Do you know where I left my keys?” doesn’t require a definition for “knowledge”. Answering “Is believing in ghosts irrational?” doesn’t require a definition of “rationality”. And answering “Is eating Bob’s lunch bad?” doesn’t require a definition of “bad”.
Attempts to find such definitions are called philosophy, or conceptual analysis specifically. This helps with abstract reasoning by uncovering relations between concepts. For example, when asked explicitly, most people can’t say how knowledge and belief relate to each other (I tried). Philosophers would reply that knowledge implies belief but not the other way round, or that belief is internal while knowledge is (partly) external. In some cases knowing this is rather important, but usually it isn’t.
What anticipated experiences come about from the belief that something is “good” or “bad”? This is the critical question, to which I have not seen a satisfactory answer from moral realists (Eliezer himself does have an answer on the basis of CEV, but that is a longer discussion for another time).
Well, why not try to answer it yourself? I’d say evidence for something being “good” is approximately when we can expect that it increases general welfare, like people being happy or able to do what they want. I directionally agree with EY’s extrapolated volition explication of goodness (I linked to it in a neighboring comment). As he mentions, there are several philosophers who have provided similar analyses.
You know perfectly well how to use ordinary words like “knowledge” or “game”, in that sense you understand what they mean, yet you almost certainly don’t know an adequate (necessary and sufficient) definition for them, i.e. one that doesn’t suffer from counterexamples.
It is interesting that you chose the example of “knowledge”, because I think that is yet another illustration of the complete opposite of the position you are arguing for. I was not born with an intuitive understanding of Bayesianism, for example. However, I now consider anyone who hasn’t grasped Bayesian thinking (such as previous versions of me), yet is trying to reason seriously about what it means to know something, to be terribly confused, and unlikely to achieve anything meaningful in any non-intuitive context where a precise, formal notion of knowledge is required. I would thus say that the vast majority of people who use ordinary words like “knowledge” don’t understand what they mean (or, more precisely, they don’t understand the concepts that result from carving reality at its joints in a coherent manner).
That’s not to say that definition is useless, but it’s not something we need when evaluating most object level questions.
I don’t care about definitions per se. The vast majority of human concepts and mental categories don’t work on the basis of necessary and sufficient conditions anyway, so an inability to supply a fully generalizable definition for something reflects the fundamental failings of our inadequate language far more than any defect in our concept formation. Nevertheless, informal and non-rigorous thinking about concepts can easily lead to confusion and to the reification of ultimately nonsensical ideas if those concepts are not subjected to enough critical analysis along the way.
or conceptual analysis specifically
Given my previous paragraph, I don’t think you would be surprised to hear that I find conceptual analysis to be virtually useless and a waste of resources, for basically the reasons laid out in detail by @lukeprog in “Concepts Don’t Work That Way” and “Intuitions Aren’t Shared That Way” almost 12 years ago. His (in my view incomplete) sequence on Rationality and Philosophy is as much a part of LW’s core as Eliezer’s own Sequences are, so while reasonable disagreement with it is certainly possible, I start with a very strong prior that it is correct, for purposes of our discussion.
Well, why not try to answer it yourself?
Well, I have tried to answer it myself, and after thinking about it very seriously and reading what people on all sides of the issue have thought about it, I have come to the conclusion that concepts of “moral truth” are inherently confused, pay no rent in anticipated experiences, and are based upon flaws in thinking that reveal how common-sensical intuitions are totally unmoored from reality when you get down to the nitty-gritty of it. Nevertheless, given the importance of this topic, I am certainly willing to change my mind if presented with evidence.
I’d say evidence for something being “good” is approximately when we can expect that it increases general welfare, like people being happy or able to do what they want.
That might well be evidence (in the Bayesian sense) that a given act, value, or person belongs to a certain category onto which we slap the label “good”. But it has little to do with my initial question. We have no reason to care about the property of “goodness” at all unless we believe that knowing something is “good” gives us powerful evidence that allows us to anticipate experiences and to constrain the territory around us. Otherwise, “goodness” is just an arbitrary bag of things, no more useful than a category of “bleggs” generated for no coherent reason whatsoever, or the random category “r398t” that I just made up, which contains only apples, weasels, and Ron Weasley. Indeed, we would not even have reason to raise the question of what “goodness” is in the first place.
To take a simple illustration of the difference between the conditions for membership in a category and the anticipated experiences that result from “knowing” something is a member of that category, consider groups in mathematics. The definition of a group is “a set together with a binary operation that satisfies the axioms of associativity, identity, and inverses.” But we don’t care about groups for reasons that deal only with these axioms; on the contrary, groups matter because they help model important situations in reality (such as symmetry groups in physics) and because we can tell a lot about the nature and structure of groups through mathematical reasoning. The fact that the finite simple groups admit a complete classification is a consequence of their definition (not a formal precondition for their membership) and allows us to anticipate with extremely high (though not full) certainty that any finite simple group G we encounter will be isomorphic to one of the groups on the classification’s list: a cyclic group of prime order, an alternating group, a group of Lie type, or one of the 26 sporadic groups.
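To make the definition above concrete, here is a minimal illustrative sketch (in Python; the helper name `is_group` is my own invention, not anything from the discussion) that brute-force checks the group axioms for small finite examples:

```python
from itertools import product

def is_group(elements, op):
    """Check the group axioms for a finite set with a binary operation."""
    elements = list(elements)
    # Closure: op must map every pair of elements back into the set.
    if any(op(a, b) not in elements for a, b in product(elements, repeat=2)):
        return False
    # Associativity: (a*b)*c == a*(b*c) for all triples.
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a, b, c in product(elements, repeat=3)):
        return False
    # Identity: some e with e*a == a*e == a for every a.
    identities = [e for e in elements
                  if all(op(e, a) == a and op(a, e) == a for a in elements)]
    if not identities:
        return False
    e = identities[0]
    # Inverses: every a has some b with a*b == b*a == e.
    return all(any(op(a, b) == e and op(b, a) == e for b in elements)
               for a in elements)

print(is_group(range(5), lambda a, b: (a + b) % 5))       # True: Z/5Z under addition
print(is_group(range(1, 5), lambda a, b: (a * b) % 5))    # True: nonzero residues mod 5 under multiplication
print(is_group(range(5), lambda a, b: (a * b) % 5))       # False: 0 has no multiplicative inverse
```

The point of the analogy survives the code: the checks above are the *membership conditions*, while the interesting anticipations (e.g. which classified group a structure is isomorphic to) follow from the definition rather than being part of it.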
I don’t understand your point about anticipated experience. If I believe some action is good, I anticipate that doing that action will produce evidence (experience) indicative of increased welfare. That is exactly unlike believing something to be a “blegg”. Regarding mathematical groups, whether or not we care about them for their usefulness in physics seems irrelevant to whether “group” has a specific meaning. Likewise, you may not care about horses, but you still anticipate a certain visual experience when someone tells you they bought you a horse and it’s right outside. And for a group, you’d anticipate that it turns out to satisfy associativity and the other axioms.
A long time ago, @Roko laid out a possible thesis of “strong moral realism”: that “All (or perhaps just almost all) beings, human, alien or AI, when given sufficient computing power and the ability to learn science and get an accurate map-territory morphism, will agree on what physical state the universe ought to be transformed into, and therefore they will assist you in transforming it into this state.” He also correctly noted that “most modern philosophers who call themselves “realists” don’t mean anything nearly this strong. They mean that there are moral “facts”, for varying definitions of “fact” that typically fade away into meaninglessness on closer examination, and actually make the same empirical predictions as antirealism.” Roko’s post lays out clear anticipated experiences that follow from this version of moral realism; it is falsifiable, and most importantly, it is about reality because it constrains reality, if true (though, as it strongly conflicts with the Orthogonality Thesis, the vast majority of users here would strongly expect it to be false). Something like what Roko illustrated is necessary to answer the critiques of moral anti-realists like @Steven Byrnes, who are implicitly saying that reality is not at all constrained by any system of (human-intelligible) morality.