I don’t necessarily disagree with all Dark Arts practitioners. By a Dark Arts practitioner, I just mean someone who uses rhetorical techniques to win debate points, without particular regard for the truth. What they’re defending may or may not be true.
In the case of Scott Adams, in my view, most of what he is defending is false. But that’s a different debate. In this post, I just wanted to highlight the techniques he uses. I try not to take a particular position with respect to his claims; I probably don’t succeed.
I’m neither a moral relativist nor an epistemological relativist. I suspect you probably reject epistemological relativism—i.e., you probably believe there are statements (e.g., 2+2=4) that are unambiguously true or false.
Not being a moral relativist does not mean I believe there is a giant stone block with the One True Morality on it. Indeed, the existence of such a stone block is a poor account of the nature of morality for the reasons highlighted by Socrates in the Euthyphro. The nature of morality is a contentious issue, and I won’t pretend to be an expert. But having heard several arguments, I think moral relativism is untenable, mainly because it’s an unlivable thesis. Sometimes you just need to say something is straightforwardly wrong: e.g., if you torture an innocent person for hours to alleviate your boredom. Here is an argument I’m convinced by.
So whether a rhetorical technique should be classified as Dark Arts is determined by the intent of the speaker?
Sometimes you just need to say something is straightforwardly wrong
That’s not a problem; you can say whatever you want. The issue is whether you should attempt to impose your morality, by force if necessary, on another human who doesn’t agree with it. In the case of conflicting moralities, which one wins? Historically, the answer to that is “the one with the bigger guns”, which is… an interesting observation.
So whether a rhetorical technique should be classified as Dark Arts is determined by the intent of the speaker?
I’d agree with that for the right definition of “intent”. It’s hard to have much “art” by accident, and without the optimization for persuasion over/against truth it starts to look a lot more like “bad reasoning that isn’t necessarily obviously bad to people who also reason badly”.
Besides, I really wouldn’t want to fall into the trap of saying “well yeah, everyone finds your arguments more persuasive, but you’re using Dark Arts, so it doesn’t count!”. I’d rather keep the onus on me to actually be more persuasive by calling out the flaws in the “Dark” arguments.
So whether a rhetorical technique should be classified as Dark Arts is determined by the intent of the speaker?
I disagree. Treating the degree of darkness as a function of intent leads to bad places.
I’d be curious to hear why, and how you distinguish “dark arts” from merely “fallacious arguments”.
As “deliberate manipulation” vs “honest confusion”.
But you don’t see “honest confusion” and “dark arts” as mutually exclusive??
Oh, I see. You’re saying Dark Arts is a subtype of fallacious arguments. I am not so sure—you can engage in Dark Arts without using fallacious arguments at all.
Huh
Dark Arts = deliberate manipulation
To me, that looks like you’re starting out saying that whether or not they intend to manipulate people is irrelevant, but then ending with saying that if it’s not deliberate manipulation it doesn’t count as dark arts. Not sure what you’re getting at, I guess.
I should have been clearer: in the quotes above replace “intent” with “intent with respect to the mentioned ‘particular regard for the truth’”.
Something like that. I agree that you can engage in Dark Arts without ever doing something that can get you called out for using fallacious arguments, but I think the underlying structure is usually if not always the same. You do things that can be somewhat correlated with the truth in ways that will predictably lead to your audience being moved by them when in reality they shouldn’t be. For example, wearing a fake lab coat while speaking may not be an explicit “logical fallacy” but the effect is still “appeal to authority”.
I don’t know—a typical Dark Arts technique would be to carefully select the facts/evidence to present (plus the facts NOT to mention) and this is not a fallacy, this is just a straight-up attempt to mislead.
Simply seeing that there exist facts/evidence that support one side isn’t enough to support the conclusion that it’s actually right. To get there you have to make the jump “there exists this set of evidence for, therefore it is true”, which is fallacious, in general, since sometimes there are more compelling facts/evidence in the other direction and you have to check for those too.
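To make that concrete, here is a minimal sketch of how evidence selection distorts a conclusion, assuming independent pieces of evidence and made-up likelihood ratios (every number below is an illustrative assumption, not anything from the discussion):

```python
import math

def posterior_from_llrs(prior, log_likelihood_ratios):
    """Combine independent pieces of evidence in log-odds space (Bayes' rule)."""
    log_odds = math.log(prior / (1 - prior)) + sum(log_likelihood_ratios)
    return 1 / (1 + math.exp(-log_odds))

prior = 0.5
evidence_for = [math.log(3), math.log(2), math.log(2)]    # assumed ratios: 3:1, 2:1, 2:1 in favour
evidence_against = [math.log(1 / 4), math.log(1 / 5)]     # assumed ratios: 4:1, 5:1 against

print(posterior_from_llrs(prior, evidence_for))                     # ~0.92 from the selected subset alone
print(posterior_from_llrs(prior, evidence_for + evidence_against))  # ~0.38 once the omitted evidence is counted
```

Note that the selection never states a false fact; the distortion comes entirely from which likelihood ratios get left out of the sum.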
I don’t really like the “fallacy” framework itself too much given the whole “logical fallacies as weak Bayesian evidence” thing, but I think there’s an important point to be made in that you influence people through their (perhaps implicit) reasoning processes. The idea that there is this “separate” backchannel where you can influence people with things that have nothing to do with truth (in the target’s frame) isn’t a real thing, so any framing of “Dark Arts” as “using the Dark Channel [which is dark regardless of how it’s used or what the underlying intent is]” is mistaken. Dark Arts has to be recognized as the optimization of a message to pass through and be validated by the target’s reasoning process even when it “shouldn’t”.
you have to make the jump “there exists this set of evidence for, therefore it is true”, which is fallacious
It’s not fallacious. You can check for counter-arguments, but proving a negative is pretty hard, so you can never be sure that there are no facts anywhere which overturn your theory. That is not a good reason to never come to any conclusions.
you influence people through their (perhaps implicit) reasoning processes
No, I don’t think so. There is the entire highly successful field of marketing/advertising which disagrees.
It’s not fallacious. You can check for counter-arguments, but proving a negative is pretty hard, so you can never be sure that there are no facts anywhere which overturn your theory.
And someone wearing a lab coat is generally more credible than someone dressed like a homeless person, so if you lack the ability to evaluate the arguments yourself then you go with what you’ve got. Fallacies don’t come from nowhere, they’re Bayesian evidence, even if they’re sometimes used in ways that don’t make for great reasoning.
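As for what “weak Bayesian evidence” means here, a rough sketch with made-up numbers (the prior and both likelihoods are assumptions chosen purely for illustration):

```python
def posterior(prior, p_coat_given_expert, p_coat_given_not_expert):
    """Bayes' rule: P(expert | wearing a lab coat)."""
    joint = prior * p_coat_given_expert
    evidence = joint + (1 - prior) * p_coat_given_not_expert
    return joint / evidence

prior = 0.20                    # assumed: chance a random speaker is a genuine expert
p_coat_given_expert = 0.60      # assumed: experts often wear lab coats while working
p_coat_given_not_expert = 0.10  # assumed: non-experts rarely bother with one

print(posterior(prior, p_coat_given_expert, p_coat_given_not_expert))  # ~0.60
```

The coat legitimately shifts the estimate, but nowhere near certainty, which is exactly why deliberately faking the coat is an attempt to exploit that update rather than a harmless costume choice.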
No, I don’t think so. There is the entire highly successful field of marketing/advertising which disagrees.
I’m saying this from the perspective of someone who has used hypnosis to get people to download and run a (benign) program called “virus.exe”, among many other things. The success of marketing/advertising is the weak version of this challenge.
The claim isn’t that marketing/advertising “doesn’t work”, it’s that Ads Don’t Work That Way. Any framing that ignores/denies that they’re dealing with a reasoning process and looks only at the individual blocks is missing the forest for the trees. Yes, you can do stuff at the tree level sometimes, but that does not negate the forest level view, and including the bigger picture is not just more complete but also more powerful.
How do you define “reasoning process”, then?
The processes which take in information from the world and output the [usually implicit] models of the world that they act on.
For example, if someone cites statistics on how safe airplanes are but feels fear and then won’t get on the airplane, then I’m not just looking at whatever made them say “airplanes are safe” but also what made them feel fear and what made them choose to side with the fear instead of the verbal reasoning.
Ah, so a “reasoning process” is basically everything that the mind does?
If you suddenly notice a spider near you and jump away, that was the result of a “reasoning process” because the output (the jump) implied that in your model of the world spiders are scary?
If so, your original quote
you influence people through their (perhaps implicit) reasoning processes
looks quite trivial: you influence people through changing what’s happening inside their mind—well, of course.
It is nearly tautological, yes, but it’s also true, and people seem to forget that. “The Dark Arts” do not belong in the same category as involuntary drug/hormone injections. If you treat tautologies as if they’re false then you’re gonna end up making mistakes.
For example, with the spider thing, this “trivial” insight implies that if you make up your mind about whether spiders are dangerous and conclude that they are not, then you stop jumping away from spiders. This is indeed what I find, and I also find that most people don’t find this to be “obvious” and are rather surprised instead when they see it happen.
Tautologies are trivially true, I don’t know why you think that people forget that.
So, the easy way to treat any phobia is merely to “make up your mind”? Methinks it’s considerably more complicated than that.
People don’t forget “tautologies are trivially true”, they forget that “persuasion works through your reasoning process, not around it” because the logically necessary implications sometimes seem “absurd”. (“So, the easy way to treat any phobia is merely to ‘make up your mind’? Methinks it’s considerably more complicated than that.”)
While tautological statements can’t rule out any logical possibilities, they’re important because they do rule out logical impossibilities. If you try to say “that’s tautological, therefore it’s trivial and meaningless” then you’re gonna end up saying silly stuff like “So, you’re a bachelor. Are you also married?”, for example.
“Solving phobias is about making up your mind” is tautologically true. Nowhere did I say anything about the complexity of doing so, but it’s tautologically true. A phobia is “an extreme or irrational fear of or aversion to something”. This requires both a fear and a conflicting judgement that the fear isn’t rational. If you fall into the lion cage, for example, fear is just a rational response to danger. If you don’t have a conflicted mind (i.e. you’ve “made up your mind”), you don’t have a phobia—just a rational fear or no fear.
If you let yourself believe false and logically impossible things like “solving phobias is not about making up one’s mind” then you’re going to end up fooling yourself into silly things like failing to notice that “systematic desensitization” is merely an attempt to help someone make up their mind by providing evidence a bit at a time in a safe context and hoping that it shows what you want it to show and convinces the part of the mind that perceives danger. If you fool yourself into thinking the logically impossible “not about making up your mind”, then you don’t notice the possibility of other ways of bringing about coherence, which are often better and quicker ways of doing things.
For example, a few months ago I was having this same conversation with a friend who also felt like “getting over irrational fears” was more than just “making up her mind”. Fortunately she also had what she considered to be an irrational fear of heights and I had a rock wall tall enough to scare her, so I could show her. I pointed out the holes in her “my mind is made up” logic until she could no longer hold that view, then helped her make up her mind by asking her what, exactly, she was afraid might happen and whether she could know she was safe from that outcome or whether she needed the fear. We spent a few minutes going through a few possibilities and then covered the rest with a catch-all. After bouldering on her own later, she told me that she hadn’t realized that she was afraid even while bouldering, but that the difference jumped out at her now that the fear was gone. That was exactly “just making up her mind”, both from my perspective and hers. It was actually pretty damn simple too, even though “making up her mind” necessarily took more than an instant because there were several questions she had to think about and answer first.
In other cases it becomes even quicker and does look like a snap decision. I had another friend, for example, whose needle phobia spontaneously disappeared after having an unrelated experience with me that convinced her to accept that I’m right when I say things can be that easy. If we broaden the scope to other things you might find “more complicated than merely making up one’s mind”, then I can give a few other examples of things that felt from the inside like “just making a snap decision” and had results that match.
“Making up your mind” can be complicated sometimes, but it can also be simple. And remembering what it’s about helps keep it simple so you can end up the kind of person who doesn’t jump at [theoretically potentially poisonous creature who is nevertheless unlikely to hurt you] without ever working on that problem in particular.
The issue is whether you should attempt to impose your morality, by force if necessary, on another human who doesn’t agree with it.
The implication being that moral absolutists think morality should be imposed by force? That seems far from being universally true, not least in rationalist circles.
Anyway, the point of contention isn’t which moral ideas win or lose, but which, if any, are true.
Actually, the point of contention is whether calling moral ideas “true” or “false” is a category error.
Agreed. Hence “if any”. So why start talking about imposing morals?
Basically, for objectivists (with respect to morals), having some other morality is wrong. For relativists it’s merely different. The former is a much stronger cause for intervention than the latter.
Also, willingness to insist on your morality is generally a sign of taking it seriously.
The correlation between moral objectivism and interventionism is probably true, but I think it’s historically contingent, and not a logical consequence of objectivism. Whether or not I think of my morality as objective (universal) or subjective (a property of myself), that’s orthogonal to what I actually think is moral.
I’m a moral relativist. My morality is that torture and murder are wrong and I am justified and, indeed, sometimes enjoined to use force to stop them. I don’t think this is an uncommon stand.
Other people are moral objectivists, but their actual morals may tell them to leave others alone except in self-defense.
I don’t disagree in any regard. I still fail to see how this is relevant to the admitted point of contention.
As an aside, I infer that you think imposing one’s morals on another would be wrong. Is that not a moral absolute itself?
It’s relevant because it determines whether the question matters. If some dude somewhere finds my behaviour immoral, I couldn’t care less. If the same dude decides he needs to do something about it, we’ll have to solve this disagreement somehow.
imposing one’s morals on another would be wrong
No, not wrong. But having a different set of consequences.
It’s relevant because it determines whether the question matters.
Then it seems clear to me that the question shouldn’t matter to you. Objectivists may be interventionists at a higher rate than relativists, but that bears no relation to which position is true.
No, not wrong. But having a different set of consequences.
That set of consequences being unpreferred, presumably. What is that if not an expression of (relative) wrongness?
If you prefer red wine over white, that is not an expression of white wine’s wrongness.
Not wrongness as a property of the wine no. But given knowledge of my preference and all else being equal, would it not be wrong to give me white over red?
You are mixing up two meanings of wrong:
- morally wrong (approximately = evil)
- not suited to / not appropriate
Serving white wine with steak might well be wrong in the “not appropriate” sense, but it is not wrong in the moral sense.
No. I assert that it would be (mildly) evil of you to give me white wine, given knowledge of my preference for red and equal availability.
It might be under certain moral systems. It’s probably not under other moral systems. It almost certainly depends a lot on the context.
Empirically that is not so. There are major world religions based on the idea that everyone should hold the one true belief and accord with its god-given morality. Followers of such religions profess this, and those of the evangelist variety follow through, imposing their morals on others and believing it is the right thing to do.
Somewhat more secular is, say, the belief in equal rights for women or minorities. Lots of people on both sides have strong views about the forced wearing of the hijab in some Muslim countries. Advocating for women in Saudi Arabia to have the right to drive, when you don’t live in or have any connection to that region of the world, is trying to enforce one’s morals on another, right?
Like, a big part of our disagreement is that the ‘just need to say something is straightforwardly wrong’ bit is something you are making up. It is just in your mind. The Paperclip Maximizer would trade the human race for a paperclip, and that isn’t ‘wrong’ in any absolute sense.
‘Wrong’ for you means something different than it does for Clippy. You can deny that that is relativism if you like; I’m not huge on labels. The key thing is that you get that there is no difference between you picking what is ‘right’ and Clippy picking what is ‘clippiest’. They are both value judgements created by moral systems.