Speaking of executive summaries, will you offer one for your metaethics?
“There is no intangible stuff of goodness that you can divorce from life and love and happiness in order to ask why things like that are good. They are simply what you are talking about in the first place when you talk about goodness.”
And then the long arguments are about why your brain makes you think anything different.
This is less startling than your more scientific pronouncements. Are there any atheists reading this who find this (or at first found this) very counterintuitive or objectionable?
I would go further, and had the impression from somewhere that you did not go that far. Is that accurate?
I’m a cognitivist. Sentences about goodness have truth values after you translate them into being about life and happiness etc. As a general strategy, I make the queerness go away, rather than taking the queerness as a property of a thing and using it to deduce that thing does not exist; it’s a confusion to resolve, not an existence to argue over.
To be clear, if sentence X about goodness is translated into sentence Y about life and happiness etc., does sentence Y contain the word “good”?
Edit: What’s left of religion after you make the queerness go away? Why does there seem to be more left of morality?
No, nothing, and because while religion does contain some confusion, after you eliminate the confusion you are left with claims that are coherent but false.
I can do that:
Morality is a specific set of values (or, more precisely, a specific algorithm/dynamic for judging values). Humans happen to be (for various reasons) the sort of beings that value morality, as opposed to valuing, say, maximizing paperclip production. It is indeed objectively better (by which we really mean “more moral”/“the sort of thing we should do”) to be moral than to be paperclipish. And indeed we should be moral, where by “should” we mean “the more moral thing to do”.
(And “moral”, when we cash out what we actually mean by it, seems to translate to a complicated blob of values like happiness, love, creativity, novelty, self-determination, fairness, life (as in the protection thereof), etc.)
It may appear that paperclip beings and moral beings disagree about something, but not really. The paperclippers, once they’d analyzed what humans actually mean by “moral”, would agree: “Yep, humans are more moral than us. But who cares about this morality stuff? It doesn’t maximize paperclips!”
Of course, screw the desires of the paperclippers; after all, they’re not actually moral. We really are objectively better (once we think carefully about what we mean by “better”) than them.
(Note: “does something actually do a good job of fulfilling a certain value?” is an objective question. That is, “does a particular action tend to increase the expected number of paperclips?” (on the paperclipper side) or, on our side, “does a particular action tend to save more lives, increase happiness, increase fairness, add novelty...?” are objective questions, in the sense that we can extract specific meaning from them and judge them objectively, in a way the paperclippers would agree with. It simply happens that we’re the sorts of beings that actually care about the answer (as we should be), while the screwy hypothetical paperclippers are immoral and only care about paperclips.)
How’s that, does that make sense? Or, to summarize the summary: “Morality is objective, and we humans happen to be the sorts of beings that value morality, as opposed to valuing something else instead.”
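To make the “objective question, different values” point concrete, here is a minimal sketch (the actions, numbers, and function names are invented for illustration, not anything claimed in the thread): both agents compute the same objective quantities and agree on every number; they just plug different value functions into the same decision procedure.

```python
# Toy sketch: two agents, same objective facts, different value functions.
# All actions and numbers below are made up for illustration.

ACTIONS = {
    # action: (expected_paperclips_produced, expected_human_wellbeing)
    "run_paperclip_factory": (1_000_000, -5.0),
    "fund_hospitals":        (0,          40.0),
}

def expected_paperclips(action):
    """Objective question: how many paperclips does this action produce?"""
    return ACTIONS[action][0]

def expected_wellbeing(action):
    """Objective question: how well does this action serve the human 'blob' of values?"""
    return ACTIONS[action][1]

def best_action(value_function):
    """Both agents run the same decision procedure; only the value function differs."""
    return max(ACTIONS, key=value_function)

print(best_action(expected_paperclips))  # "run_paperclip_factory"
print(best_action(expected_wellbeing))   # "fund_hospitals"

# Both agents agree on every number above -- there is no factual disagreement.
# They simply care about different quantities.
```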
Is morality actually:
1. a specific algorithm/dynamic for judging values, or
2. a complicated blob of values like happiness, love, creativity, novelty, self-determination, fairness, life (as in the protection thereof), etc.?
If it’s 1, can we say something interesting and non-trivial about the algorithm, besides the fact that it’s an algorithm? In other words, everything can be viewed as an algorithm, but what’s the point of viewing morality as an algorithm?
If it’s 2, why do we think that two people on opposite sides of the Earth are referring to the same complicated blob of values when they say “morality”? I know the argument about the psychological unity of humankind (not enough time for significant genetic divergence), but what about cultural/memetic evolution?
I’m guessing the answer to my first question is something like, morality is an algorithm whose current “state” is a complicated blob of values like happiness, love, … so both of my other questions ought to apply.
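One way to picture that guess concretely (a toy sketch; the class, the value names, and the weights are my own assumptions, not anything from the sequences): the “blob” is the algorithm’s current state, and the “algorithm” part is both the judging rule and the rule that moves that state in response to moral arguments.

```python
# Toy sketch: "an algorithm whose current state is a blob of values".
# Value names and weights are invented for illustration.

class MoralJudge:
    def __init__(self):
        # The "complicated blob of values" is the algorithm's current state.
        self.weights = {"happiness": 0.4, "fairness": 0.3, "life": 0.3}

    def judge(self, outcome):
        """Score an outcome against the current blob of values."""
        return sum(w * outcome.get(name, 0.0) for name, w in self.weights.items())

    def respond_to_argument(self, shift):
        """The dynamic part: the state can move in response to moral arguments."""
        for name, delta in shift.items():
            self.weights[name] = self.weights.get(name, 0.0) + delta

judge = MoralJudge()
print(round(judge.judge({"happiness": 1.0, "fairness": 0.5}), 2))  # 0.55
judge.respond_to_argument({"fairness": 0.1})  # a persuasive fairness argument
print(round(judge.judge({"happiness": 1.0, "fairness": 0.5}), 2))  # 0.6
```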
Wei_Dai:
If it’s 2, why do we think that two people on opposite sides of the Earth are referring to the same complicated blob of values when they say “morality”? I know the argument about the psychological unity of humankind (not enough time for significant genetic divergence), but what about cultural/memetic evolution?
You don’t even have to do any cross-cultural comparisons to make such an argument. Considering the insights from modern behavioral genetics, individual differences within any single culture will suffice.
There is no reason to be at all tentative about this. There’s tons of cog sci data about what people mean when they talk about morality. It varies hugely (but predictably) across cultures.
Why are you using algorithm/dynamic here instead of function or partial function? (On what space? I will ignore that issue, just as you have...) Is it supposed to be stateful? I’m not even clear what that would mean. Or is function what you mean by #2? I’m not even really clear on how these differ.
You might have gotten confused because I quoted Psy-Kosh’s phrase “specific algorithm/dynamic for judging values”, whereas Eliezer’s original idea, I think, was more like an algorithm for changing one’s values in response to moral arguments. Here are Eliezer’s own words:
I would say, by the way, that the huge blob of a computation is not just my present terminal values (which I don’t really have - I am not a consistent expected utility maximizer); the huge blob of a computation includes the specification of those moral arguments, those justifications, that would sway me if I heard them.
Others have pointed out that this definition is actually quite unlikely to be coherent: people would likely be ultimately persuaded by different moral arguments and justifications if they had different experiences and heard the arguments in different orders, etc.
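A toy illustration of that order-dependence worry (everything here is an invented simplification: values collapsed to a single number, “arguments” modeled as simple update rules): if hearing the same arguments in a different order leaves you with different values, then “the values you would ultimately be persuaded of” is not a single well-defined target.

```python
# Toy sketch: value updates that do not commute make "ultimate" values order-dependent.
# The value state is a single number and the "arguments" are made-up update rules.

def extrapolate(initial_value, arguments):
    """'Ultimately persuaded' = the state after hearing every argument, in order."""
    value = initial_value
    for argument in arguments:
        value = argument(value)
    return value

double_it = lambda v: v * 2    # e.g. "care twice as much about strangers"
add_ten   = lambda v: v + 10   # e.g. "also care about animals"

print(extrapolate(1, [double_it, add_ten]))  # (1 * 2) + 10 = 12
print(extrapolate(1, [add_ten, double_it]))  # (1 + 10) * 2 = 22

# Unless the updates commute, the endpoint depends on the order the arguments
# arrive in, so the extrapolated values are not uniquely defined.
```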
Yes, see here for an argument to that effect by Marcello and subsequent discussion about it between Eliezer and myself.
I think the metaethics sequence is probably the weakest of Eliezer’s sequences on LW. I wonder if he agrees with that, and if so, what he plans to do about this subject for his rationality book.
This is somewhat of a concern given Eliezer’s interest in Friendliness!
As far as I can understand, Eliezer has promoted two separate ideas about ethics: defining personal morality as a computation in the person’s brain rather than something mysterious and external, and extrapolating that computation into smarter creatures. The former idea is self-evident, but the latter (and, by extension, CEV) has received a number of very serious blows recently. IMO it’s time to go back to the drawing board. We must find some attack on the problem of preference, latch onto some small corner that will allow us to make precise statements. Then build from there.
defining personal morality as a computation in the person’s brain rather than something mysterious and external
But I don’t see how that, by itself, is a significant advance. Suppose I tell you “mathematics is a computation in a person’s brain rather than something mysterious and external”, or “philosophy is a computation in a person’s brain rather than something mysterious and external”, or “decision making is a computation in a person’s brain rather than something mysterious and external”. How much have I actually told you about the nature of math, or philosophy, or decision making?
The linked discussion is very nice.
This is currently at +1. Is that from Yudkowsky?
(Edit: +2 after I vote it up.)
This makes sense in that it is coherent, but it is not obvious to me what arguments would be marshaled in its favor. (Yudkowsky’s short formulations do point in the direction of their justifications.) Moreover, the very first line, “morality is a specific set of values,” and even its parenthetical expansion (an algorithm for judging values), seem utterly preposterous to me. The controversies between human beings about which specific sets of values are moral, at every scale large and small, are legendary beyond cliché.
It is a common thesis here that most humans would ultimately have the same moral judgments if they were in full agreement about all factual questions and were better at reasoning. In other words, human brains have a common moral architecture, and disagreements are at the level of instrumental, rather than terminal, values and result from mistaken factual beliefs and reasoning errors.
You may or may not find that convincing (you’ll get to the arguments regarding that if you’re reading the sequences), but assuming that is true, then “morality is a specific set of values” is correct, though vague: more precisely, it is a very complicated set of terminal values, which, in this world, happens to be embedded solely in a species of minds who are not naturally very good at rationality, leading to massive disagreement about instrumental values (though most people do not notice that it’s about instrumental values).
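To spell out the instrumental-versus-terminal distinction as a toy sketch (the agents, policies, and numbers are invented for illustration): two agents share the same terminal value but disagree about which policy to support only because they hold different factual beliefs; on the thesis above, resolving the factual question dissolves the apparent moral disagreement.

```python
# Toy sketch: same terminal value, different factual beliefs, so the disagreement
# is instrumental. All names and numbers are made up for illustration.

def expected_lives_saved(policy, beliefs):
    """Terminal value (lives saved), evaluated under an agent's factual beliefs."""
    return beliefs[policy]

alice_beliefs = {"policy_A": 100, "policy_B": 40}   # Alice believes A saves more lives
bob_beliefs   = {"policy_A": 30,  "policy_B": 90}   # Bob believes B saves more lives

def preferred_policy(beliefs):
    return max(beliefs, key=lambda policy: expected_lives_saved(policy, beliefs))

print(preferred_policy(alice_beliefs))  # "policy_A"
print(preferred_policy(bob_beliefs))    # "policy_B"

# The terminal value is identical; only the beliefs differ. Settle the factual
# question of which policy actually saves more lives and the dispute disappears.
```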
It is? That’s a worry. Consider this a +1 for “That thesis is totally false and only serves signalling purposes!”
I… think it is. Maybe I’ve gotten something terribly wrong, but I got the impression that this is one of the points of the complexity of value and metaethics sequences, and I seem to recall that it’s the basis for expecting humanity’s extrapolated volition to actually cohere.
This whole area isn’t covered all that well (as Wei noted). I assumed that CEV would rely on solving an implicit cooperation problem between conflicting moral systems. It doesn’t appear at all unlikely to me that some people are intrinsically selfish to some degree and their extrapolated volitions would be quite different.
Note that I’m not denying that some people present (or usually just assume) the thesis you present. I’m just glad that there are usually others who argue against it!
That’s exactly what I took CEV to entail.
Now this is a startling claim.
Be more specific!
It is a common thesis here that most humans would ultimately have the same moral judgments if they were in full agreement about all factual questions and were better at reasoning.
Maybe it’s true if you also specify “if they were fully capable of modifying their own moral intuitions.” I have an intuition (an unexamined belief? a hope? a sci-fi trope?) that humanity as a whole will continue to evolve morally and roughly converge on a morality that resembles current first-world liberal values more than, say, Old Testament values. That is, it would converge, in the limit of global prosperity and peace and dialogue, and assuming no singularity occurs and the average lifespan stays constant. You can call this naive if you want to; I don’t know whether it’s true. It’s what I imagine Eliezer means when he talks about “humanity growing up together”.
This growing-up process currently involves raising children, which can be viewed as a crude way of rewriting your personality from scratch, and excising vestiges of values you no longer endorse. It’s been an integral part of every culture’s moral evolution, and something like it needs to be part of CEV if it’s going to actually converge.
It is a common thesis here that most humans would ultimately have the same moral judgments if they were in full agreement about all factual questions and were better at reasoning.
That’s not plausible. That would be some sort of objective morality, and there is no such thing. Humans have brains, and brains are complicated. You can’t have them all imply exactly the same preferences.
Now, the non-crazy version of what you suggest is that the preferences of most people are roughly similar, that they won’t differ substantially in major aspects. But when you focus on the details, everyone is bound to want their own thing.
Psy-Kosh:
It makes sense in its own terms, but it leaves the unpleasant implication that morality differs greatly between humans, at both individual and group level—and if this leads to a conflict, asking who is right is meaningless (except insofar as everyone can reach an answer that’s valid only for himself, in terms of his own morality).
So if I live in the same society with people whose morality differs from mine, and the good-fences-make-good-neighbors solution is not an option, as it often isn’t, then who gets to decide whose morality gets imposed on the other side? As far as I see, the position espoused in the above comment leaves no other answer than “might is right.” (Where “might” also includes more subtle ways of exercising power than sheer physical coercion, of course.)
That two people mean different things by the same word doesn’t make all questions asked using that word meaningless, or even hard to answer.
If by “castle” you mean “a fortified structure”, while I mean “a fortified structure surrounded by a moat”, who will be right if we’re asked whether the Chateau de Gisors is a castle? Any confusion here is purely semantic in nature. If you answer yes and I answer no, we won’t have given two answers to the same question; we’ll have given two answers to two different questions. If Psy-Kosh says that the Chateau de Gisors is a fortified structure but is not surrounded by a moat, he’ll have answered both our questions.
Now, once this has been clarified, what would it mean to ask who gets to decide whose definition of ‘castle’ gets imposed on the other side? Do we need a kind of meta-definition of castle to somehow figure out what the one true definition is? If I could settle this issue by exercising power over you, would it change the fact that the Chateau de Gisors is not surrounded by a moat? If I killed everyone who doesn’t mean the same thing by the word ‘castle’ as I do, would the phrase “a fortified structure” become logically equivalent to the phrase “a fortified structure surrounded by a moat”?
In short, substituting the meaning of a word for the word tends to make lots of seemingly difficult problems laughably easy to solve. Try it.
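The castle example, spelled out as a tiny sketch (the predicates and the data entry are assumptions for illustration, using only the facts the comment itself states): once each definition is substituted in, there are two distinct, well-defined questions and nothing left to fight over.

```python
# Toy sketch: substitute each person's definition for the word "castle".
# The building data reflects only the facts asserted in the comment above.

chateau_de_gisors = {"fortified": True, "has_moat": False}

def is_castle_by_your_definition(building):
    """Your definition: a fortified structure."""
    return building["fortified"]

def is_castle_by_my_definition(building):
    """My definition: a fortified structure surrounded by a moat."""
    return building["fortified"] and building["has_moat"]

print(is_castle_by_your_definition(chateau_de_gisors))  # True
print(is_castle_by_my_definition(chateau_de_gisors))    # False

# Two different questions, two different answers, and no factual disagreement left.
```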
*blinks* How did I imply that morality varies? I thought (was trying to imply) that morality is an absolute standard and that humans simply happen to be the sort of beings that care about the particular standard we call “morality”. (Well, with various caveats, like not being sufficiently reflective to be able to fully and explicitly state our “morality algorithm”, nor fully knowing all its consequences.)
However, when humans and paperclippers interact, well, there will probably be some sort of fight if one doesn’t end up with some sort of PD cooperation or whatever. It’s not that paperclippers and humans disagree on anything; it’s simply that, well, they value paperclips a whole lot more than lives. We’re sort of stuck with having to act in a way that prevents the hypothetical them from acting on that.
(of course, the notion that most humans seem to have the same underlying core “morality algorithm”, just disagreeing on the implications or such, is something to discuss, but that gets us out of executive summary territory, no?)
Psy-Kosh:
(of course, the notion that most humans seem to have the same underlying core “morality algorithm”, just disagreeing on the implications or such, is something to discuss, but that gets us out of executive summary territory, no?)
I would say that it’s a crucial assumption, which should be emphasized clearly even in the briefest summary of this viewpoint. It is certainly not obvious, to say the least. (And, for full disclosure, I don’t believe that it’s a sufficiently close approximation of reality to avoid the problem I emphasized above.)
Hrm, fair enough. I thought I’d effectively implied it, but apparently not sufficiently.
(Incidentally… you don’t think it’s a close approximation to reality? Most humans seem to value (to various extents) happiness, love, (at least some) lives, etc… right?)
Different people (and cultures) seem to put very different weights on these things.
Here’s an example:
You’re a government minister who has to decide who to hire to do a specific task. There are two applicants. One is your brother, who is marginally competent at the task. The other is a stranger with better qualifications who will probably be much better at the task.
The answer is “obvious.”
In some places, “obviously” you hire your brother. What kind of heartless bastard won’t help out his own brother by giving him a job?
In others, “obviously” you should hire the stranger. What kind of corrupt scoundrel abuses his position by hiring his good-for-nothing brother instead of the obviously superior candidate?