You need to define what you mean by good for this to make sense. If good means “what you should do” then it’s exactly the big claim Steve is arguing against. If it means something else, what is it?
I do have ideas about what people mean by “good” other than “what you should do”, but they’re complex. That’s why I think you need to define the term more for this claim to make sense.
If good means “what you should do” then it’s exactly the big claim Steve is arguing against.
If Steve is saying that the moral facts need to be intrinsically motivating, that is a stronger claim than “the good is what you should do”, i.e., it is the claim that “the good is what you would do”. But, as cubefox points out, being intrinsically motivating isn’t part of moral realism as defined in the mainstream. (It is apparently part of moral realism as defined on LW, because of something EY said years ago.) Also, since moral realism is a metaethical claim, there is no need to specify the good at the object level.
I’d be happy to come back later and give my guesses at what people tend to mean by “good”; it’s something like “stuff people do whom I want on my team” or “actions that make me feel positively toward someone”.
Once again, theories aren’t definitions.
People don’t all have to have the same moral theory. At the same time, there has to be a common semantic basis for disagreement, rather than talking past each other, to take place. “The good is what you should do” is pretty reasonable as a shared definition, since it is hard to dispute, but also neutral between “the good” being defined personally, tribally, or universally.
Good points. I think the term moral realism is probably used in a variety of ways in the public sphere. I think the relevant sense is “will alignment solve itself because a smart machine will decide to behave in a way we like”. If there’s some vague sense of stuff everyone “should” do, but it doesn’t make them actually do it, then it doesn’t matter for this purpose.
I was (and have been) making a theory about definitions.
I think “the good is what you should do” is remarkably devoid of useful meaning. People often mean very little by “should”, are unclear both to others and themselves, and use it in different ways in different situations.
My theory is that “good” is usually defined as an emotion, not another set of words, and that emotion roughly means “I want that person on my team” (when applied to behavior), because evolution engineered us to find useful teammates, and that feeling is its mechanism for doing so.
Good points. I think the term moral realism is probably used in a variety of ways in the public sphere. I think the relevant sense is “will alignment solve itself because a smart machine will decide to behave in a way we like”. If there’s some vague sense of stuff everyone “should” do, but it doesn’t make them actually do it, then it doesn’t matter for this purpose.
I think “the good is what you should do” is remarkably devoid of useful meaning. People often mean very little by “should”, are unclear both to others and themselves, and use it in different ways in different situations.
For understanding human ethics, the important thing is that it grounds out in punishments and rewards: the good is what you should do, and if you don’t do it, you face punishment. That also means a theory of ethics must be sufficient to justify putting people in jail. But a definition is not a theory.
My theory is that “good” is usually defined as an emotion, not another set of words, and that emotion roughly means “I want that person on my team” (when applied to behavior),
If your whole theory of ethics is to rubber-stamp emotions or opinions, you end up with a very superficial theory that is open to objections like the Open Question argument. Just because somebody feels it is good to do X does not mean it necessarily is: it is an open question. If the good is your emotions, then it is a closed question... your emotions are your emotions, likewise your values are your values, and your opinions are your opinions. The openness of the question “you feel that X is good, but is it really?” is a *theoretical* reason for believing that “goodness” works more like “truth” and less like “belief”.
(And the OQA is quite likely what this passage by Nostalgebraist hints at:
Who shoots down the enemy soldiers while thinking, “if I had been born there, it would have been all-important for their side to win, and so I would have shot at the men on this side. However, I was born in my country, not theirs, and so it is all-important that my country should win, and that theirs should lose.
There is no reason for this. It could have been the other way around, and everything would be left exactly the same, except for the ‘values.’
I cannot argue with the enemy, for there is no argument in my favor. I can only shoot them down.”)
because evolution engineered us to find useful teammates, and that feeling is its mechanism for
And having gathered our team to fight the other team, we can ask ourselves whether we might actually be the baddies.
The *practical* objection kicks in when there are conflicts between subjective views.
A theory of ethics needs to justify real-world actions, especially actions that impact other people, and especially actions that impact other people negatively. (It’s not just about passively understanding the world, about ‘what anticipated experiences come about from the belief that something is “good” or “bad”?’) Why should someone really go to jail if they haven’t really done anything wrong? Well, if the good is what you should do, jailing people is justifiable, because the kind of thing you shouldn’t do is the kind of thing you deserve punishment for.
Of course, the open question argument doesn’t take you all the way to full strength moral realism. Less obviously, there are many alternatives to MR. Nihilism is one: you can’t argue that emotivism is true because MR is false—emotivism might be wrong because ethics is nothing. Emotivism might also be wrong because some position weaker than MR is right.
I don’t think anyone needs to define what words used in ordinary language mean because the validity of any attempt of such a definition would itself have to be checked against the intuitive meaning of the word in common usage.
If good means “what you should do” then it’s exactly the big claim Steve is arguing against.
I do think the meaning is indeed similar (except for supererogatory statements), but the argument isn’t affected. For example, I can believe that I shouldn’t eat meat, or that eating meat is bad, without being motivated to stop eating meat.
I have no idea what you mean by your claim if you won’t define the central term. Or I do, but I’m just guessing. I think people are typically very vague in what they mean by “good”, so it’s not adequate for analytical discussion. In this case, a vague sense of good produces only a vague sense in which “moral realism” isn’t a strong claim. I just don’t know what you mean by that.
I’d be happy to come back later and give my guesses at what people tend to mean by “good”; it’s something like “stuff people do whom I want on my team” or “actions that make me feel positively toward someone”. But it would require a lot more words to even start nailing down. And while that’s a claim about reality, it’s quite a complex, dependent, and therefore vague claim, so I’d be reluctant to call it moral realism. Although it is in one sense. So maybe that’s what you mean?
Almost all terms in natural language are vague, but that doesn’t mean they are all ambiguous or somehow defective and in need of an explicit definition. We know what words mean, we can give examples, but we don’t have definitions in our mind. Imagine you say that believing X is irrational, and I reply “I don’t believe in ‘rational realism’, I think ‘rational’ is a vague term, can you give me a definition of ‘rational’ please?” That would be absurd. Of course I know what rational means, I just can’t define it, but we humans can hardly define any natural language terms at all.
it’s something like “stuff people do whom I want on my team” or “actions that make me feel positively toward someone”. But it would require a lot more words to even start nailing down. And while that’s a claim about reality, it’s quite a complex, dependent, and therefore vague claim, so I’d be reluctant to call it moral realism.
That would indeed not count as moral realism; the form of anti-realism would probably be something similar to subjectivism (“X is good” ≈ “I like X”) or expressivism (“X is good” ≈ “Yay X!”).
But I don’t think this can make reasonable sense of beliefs. That I believe something is good doesn’t mean that I feel positive toward myself, or that I like it, or that I’m cheering for myself, or that I’m booing my past self if I changed my mind. Sometimes I may also just wonder whether something is good or bad (e.g. eating meat) which arguably makes no sense under those interpretations.
Imagine you say that believing X is irrational, and I reply “I don’t believe in ‘rational realism’, I think ‘rational’ is a vague term, can you give me a definition of ‘rational’ please?” That would be absurd. Of course I know what rational means, I just can’t define it, but we humans can hardly define any natural language terms at all.
I don’t think I could disagree any more strongly about this. In fact, I am kind of confused about your choice of example, because ‘rationality’ seems to me like such a clear counter to your argument. It is precisely the type of slippery concept that is portrayed inaccurately (relative to LW terminology) in mainstream culture and thus inherently requires a more rigorous definition and explanation. This was so important that “the best intro-to-rationality for the general public” (according to @lukeprog) specifically addressed the common misconception that being rational means being a Spock-like Straw Vulcan. It was so important that one of the crucial posts in the first Sequence by Eliezer spends almost 2000 words defining rationality. So important that, 14 years later, @Raemon had to write yet another post (with 150 upvotes) explaining what rationality is not, as a result of common and lasting confusions by users on this very site (presumably coming about as a result of the original posts not clarifying matters sufficiently).
What about the excellent and important post “Realism about Rationality” by Richard Ngo, which expresses “skepticism” about the mindset he calls “realism about rationality,” thus disagreeing with others who do think “this broad mindset is mostly correct, and the objections outlined in this essay are mostly wrong” and argue that “we should expect a clean mathematical theory of rationality and intelligence to exist”? Do you “of course know what rationality means” if you cannot settle as important a question as this? What about Bryan Caplan’s arguments that a drug addict who claims they want to stop buying drugs but can’t prevent themselves from doing so is actually acting perfectly rationally, because, in reality, their revealed preferences show that they really do want to consume drugs, and are thus rationally pursuing those goals by buying them? Caplan is a smart person expressing serious disagreement with the mainstream, intuitive perceptions of rationality and human desires; this strongly suggests that rationality is indeed, as you put it, “ambiguous or somehow defective and in need of an explicit definition.”
It wouldn’t be wrong to say that LessWrong was built to advance the study of rationality, both as it relates to humans and to AI. The very basis of this site and of the many Sequences and posts expanding upon these ideas is the notion that our understanding of rationality is currently inadequate and needs to be straightened out.
That I believe something is good doesn’t mean that I feel positive toward myself, or that I like it, or that I’m cheering for myself, or that I’m booing my past self if I changed my mind. Sometimes I may also just wonder whether something is good or bad (e.g. eating meat) which arguably makes no sense under those interpretations.
What anticipated experiences come about from the belief that something is “good” or “bad”? This is the critical question, which I have not seen a satisfactory answer to by moral realists (Eliezer himself does have an answer to this on the basis of CEV, but that is a longer discussion for another time). And if there is no answer, then the concept of “moral facts” becomes essentially useless, like any other belief that pays no rent.
A long time ago, @Roko laid out a possible thesis of “strong moral realism” that “All (or perhaps just almost all) beings, human, alien or AI, when given sufficient computing power and the ability to learn science and get an accurate map-territory morphism, will agree on what physical state the universe ought to be transformed into, and therefore they will assist you in transforming it into this state.” He also correctly noted that “most modern philosophers who call themselves “realists” don’t mean anything nearly this strong. They mean that there are moral “facts”, for varying definitions of “fact” that typically fade away into meaninglessness on closer examination, and actually make the same empirical predictions as antirealism.” Roko’s post lays out clear anticipated experiences coming about from this version of moral realism; it is falsifiable, and most importantly, it is about reality because it constrains reality, if true (but, as it strongly conflicts with the Orthogonality Thesis, the vast majority of users here would strongly disbelieve it is true). Something like what Roko illustrated is necessary to answer the critiques of moral anti-realists like @Steven Byrnes, who are implicitly saying that reality is not at all constrained to any system of (human-intelligible) morality.
There is a large difference between knowing the meaning of a word, and knowing its definition. You know perfectly well how to use ordinary words like “knowledge” or “game”, in that sense you understand what they mean, yet you almost certainly don’t know an adequate (necessary and sufficient) definition for them, i.e. one that doesn’t suffer from counterexamples. In philosophy those are somewhat famous cases of words that are hard to define, but most words from natural language could be chosen instead.
That’s not to say that definition is useless, but it’s not something we need when evaluating most object level questions. Answering “Do you know where I left my keys?” doesn’t require a definition for “knowledge”. Answering “Is believing in ghosts irrational?” doesn’t require a definition of “rationality”. And answering “Is eating Bob’s lunch bad?” doesn’t require a definition of “bad”.
Attempts at finding such definitions are called philosophy, or conceptual analysis specifically. It helps with abstract reasoning by finding relations between concepts. For example, when asked explicitly, most people can’t say how knowledge and belief relate to each other (I tried). Philosophers would reply that knowledge implies belief but not the other way round, or that belief is internal while knowledge is (partly) external. In some cases knowing this is kind of important, but usually it isn’t.
What anticipated experiences come about from the belief that something is “good” or “bad”? This is the critical question, which I have not seen a satisfactory answer to by moral realists (Eliezer himself does have an answer to this on the basis of CEV, but that is a longer discussion for another time).
Well, why not try to answer it yourself? I’d say evidence for something being “good” is approximately when we can expect that it increases general welfare, like people being happy or able to do what they want. I directionally agree with EY’s extrapolated volition explication of goodness (I linked to it in a neighboring comment). As he mentions, there are several philosophers who have provided similar analyses.
You know perfectly well how to use ordinary words like “knowledge” or “game”, in that sense you understand what they mean, yet you almost certainly don’t know an adequate (necessary and sufficient) definition for them, i.e. one that doesn’t suffer from counterexamples.
It is interesting that you chose the example of “knowledge” because I think that is yet another illustration of the complete opposite of the position you are arguing for. I was not born with an intuitive understanding of Bayesianism, for example. However, I now consider anyone who hasn’t grasped Bayesian thinking (such as previous versions of me) but is nonetheless trying to seriously reason about what it means to know something to be terribly confused and to have a low likelihood of achieving anything meaningful in any non-intuitive context where formalizing/using precise meanings of knowledge is necessary. I would thus say that the vast majority of people who use ordinary words like “knowledge” don’t understand what they mean (or, to be more precise, they don’t understand the concepts that result from carving reality at its joints in a coherent manner).
That’s not to say that definition is useless, but it’s not something we need when evaluating most object level questions.
I don’t care about definitions per se. The vast majority of human concepts and mental categories don’t work on the basis of necessary and sufficient conditions anyway, so an inability to supply a fully generalizable definition for something is caused much more by the fundamental failings of our inadequate language than by issues with our conceptual formation. Nevertheless, informal and non-rigorous thinking about concepts can easily lead into confusion and the reification of ultimately nonsensical ideas if they are not subject to enough critical analysis in the process.
or conceptual analysis specifically
Given my previous paragraph, I don’t think you would be surprised to hear that I find conceptual analysis to be virtually useless and a waste of resources, for basically the reasons laid out in detail by @lukeprog in “Concepts Don’t Work That Way” and “Intuitions Aren’t Shared That Way” almost 12 years ago. His (in my view incomplete) sequence on Rationality and Philosophy is as much a part of LW’s core as Eliezer’s own Sequences are, so while reasonable disagreement with it is certainly possible, I start with a very strong prior that it is correct, for purposes of our discussion.
Well, why not try to answer it yourself?
Well, I have tried to answer it myself, and after thinking about it very seriously and reading what people on all sides of the issue have thought about it, I have come to the conclusion that concepts of “moral truth” are inherently confused, pay no rent in anticipated experiences, and are based upon flaws in thinking that reveal how common-sensical intuitions are totally unmoored from reality when you get down to the nitty-gritty of it. Nevertheless, given the importance of this topic, I am certainly willing to change my mind if presented with evidence.
I’d say evidence for something being “good” is approximately when we can expect that it increases general welfare, like people being happy or able to do what they want.
That might well be evidence (in the Bayesian sense) that a given act, value, or person belongs to a certain category which we slap the label “good” onto. But it has little to do with my initial question. We have no reason to care about the property of “goodness” at all if we do not believe that knowing something is “good” gives us powerful evidence that allows us to anticipate experiences and to constrain the territory around us. Otherwise, “goodness” is just an arbitrary bag of things that is no more useful than the category of “bleggs” that is generated for no coherent reason whatsoever, or the random category “r398t” that I just made up, which contains only apples, weasels, and Ron Weasley. Indeed, we would not even have enough reason to raise the question of what “goodness” is in the first place.
To take a simple illustration of the difference between the conditions for membership in a category and the anticipated experiences resulting from “knowing” that something is a member of that category, consider groups in mathematics. The definition of a group is “a set together with a binary operation that satisfies the axioms of associativity, identity, and inverses.” But we don’t care about groups for reasons that deal only with these axioms; on the contrary, groups matter because they help model important situations in reality (such as symmetry groups in physics) and because we can tell a lot about the nature and structure of groups through mathematical reasoning. The fact that finite simple groups can be classified in a clear and concise manner is a consequence of their definition (not a formal precondition for their membership) and allows us to anticipate with extremely high (although not full) certainty that if we consider a finite simple group G, it will be isomorphic to one of the groups in that classification.
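As a minimal sketch of that distinction (the code and helper names here are my own illustration, not anything from the discussion): `is_group` below checks only the membership conditions, i.e. the axioms, while the final assertion checks a consequence that appears nowhere in the definition, namely Lagrange’s theorem that every element’s order divides the group’s order.

```python
from itertools import product

def is_group(elements, op):
    """Membership conditions only: closure, associativity, identity, inverses."""
    elements = list(elements)
    if any(op(a, b) not in elements for a, b in product(elements, repeat=2)):
        return False  # not closed
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a, b, c in product(elements, repeat=3)):
        return False  # not associative
    identities = [e for e in elements
                  if all(op(e, a) == a and op(a, e) == a for a in elements)]
    if not identities:
        return False  # no identity element
    e = identities[0]
    return all(any(op(a, b) == e and op(b, a) == e for b in elements)
               for a in elements)  # every element has an inverse

def element_order(a, op, identity):
    """Smallest n >= 1 such that combining a with itself n times gives the identity."""
    x, n = a, 1
    while x != identity:
        x, n = op(x, a), n + 1
    return n

# Z/6Z under addition mod 6 satisfies the definition...
Z6 = list(range(6))
add_mod6 = lambda a, b: (a + b) % 6
assert is_group(Z6, add_mod6)

# ...and therefore we can anticipate a fact not listed among the axioms:
# every element's order divides 6 (Lagrange's theorem).
assert all(6 % element_order(a, add_mod6, 0) == 0 for a in Z6)
```

The definition is the entry ticket; the anticipations (Lagrange, the classification theorem, and so on) are what make the category worth having.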
I don’t understand your point about anticipated experience. If I believe some action is good, I anticipate that doing that action will produce evidence (experience) that is indicative of increased welfare. That is exactly not like believing something to be “blegg”. Regarding mathematical groups, whether or not we care about them for their usefulness in physics seems irrelevant to whether “group” has a specific meaning. Like, you may not care about horses, but you still anticipate a certain visual experience when someone tells you they bought you a horse, it’s right outside. And for a group you’d anticipate that it turns out to satisfy associativity etc.
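Read as a claim about anticipation, this can be put in toy Bayesian terms (the numbers below are invented purely for illustration): if welfare-indicative evidence is more likely when an action is good than when it is not, then the belief “this action is good” changes what one expects to observe, and observing the evidence updates the belief, whereas a “blegg”-style label with identical likelihoods would change nothing.

```python
# Toy sketch with invented numbers: a belief "pays rent" when it changes
# what evidence we anticipate.
p_good = 0.7                     # prior credence that the action is good
p_ev_given_good = 0.8            # P(welfare-indicative evidence | good)
p_ev_given_not_good = 0.3        # P(welfare-indicative evidence | not good)

# Anticipated probability of observing welfare-indicative evidence:
p_ev = p_good * p_ev_given_good + (1 - p_good) * p_ev_given_not_good
print(f"P(evidence) = {p_ev:.2f}")                    # 0.65

# If the evidence is then observed, Bayes' rule updates the credence:
p_good_given_ev = p_good * p_ev_given_good / p_ev
print(f"P(good | evidence) = {p_good_given_ev:.2f}")  # 0.86

# A "blegg"-style label with p_ev_given_good == p_ev_given_not_good
# would leave both numbers at their priors: no anticipated difference.
```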