Empirical claims, preference claims, and attitude claims
What do the following statements have in common?
“Atlas Shrugged is the best book ever written.”
“You break it, you buy it.”
“Earth is the most interesting planet in the solar system.”
My answer: None of them are falsifiable claims about the nature of reality. They’re all closer to what one might call “opinions”. But what is an “opinion”, exactly?
There’s already been some discussion on Less Wrong about what exactly it means for a claim to be meaningful. This post focuses on the negative definition of meaning: what sort of statements do people make where the primary content of the statement is non-empirical? The idea here is similar to the idea behind anti-virus software: Even if you can’t rigorously describe what programs are safe to run on your computer, there still may be utility in keeping a database of programs that are known to be unsafe.
Why is it useful to be able to flag non-empirical claims? Well, for one thing, you can believe whatever you want about them! And it seems likely that this pattern-matching approach works better for flagging them than a more constructive definition.
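The anti-virus analogy above can be sketched as a toy deny-list classifier. Everything here, the pattern list, the function name, the specific phrasings, is my own illustration of the idea, not anything proposed in the post:

```python
# Toy sketch of the "anti-virus" approach to non-empirical claims:
# instead of defining "non-empirical" constructively, keep a small
# database of known-suspect patterns and flag anything that matches.
import re

# Hypothetical "signature database" of phrasings that often mark
# attitude claims rather than empirical ones.
KNOWN_ATTITUDE_PATTERNS = [
    r"\bsucks\b",
    r"\bthe best\b",
    r"\boverrated\b",
    r"\buncalled for\b",
    r"\bdeserve\b",
]

def flag_non_empirical(claim: str) -> bool:
    """Return True if the claim matches any known attitude-claim pattern."""
    return any(re.search(p, claim, re.IGNORECASE) for p in KNOWN_ATTITUDE_PATTERNS)

print(flag_non_empirical("Justin Bieber sucks"))                # True
print(flag_non_empirical("Water boils at 100 C at sea level"))  # False
```

Like an anti-virus database, this gives false negatives on anything novel, which is exactly why the taxonomy below matters: it is training data for the pattern matcher, not a definition.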
But first, a bit on the philosophy of non-empirical claims.
Let’s take a typical opinion statement: “Justin Bieber sucks”. There are a few ways we could interpret this as shorthand for a different claim. For example, maybe what the speaker really means is “I prefer not to listen to Justin Bieber’s music.” (Preference claim.) Or maybe what the speaker really means is “Of the people who have heard songs by Justin Bieber, the majority prefer not to listen to his music.” (Empirical claim.)
I don’t think shorthand interpretations like these are accurate for most people who claim that JB sucks. Instead, I suspect most people who argue this are communicating some combination of (a) negative affect towards JB and (b) tribal affiliation with fellow JB haters. I’ve taken to referring to statements like these, that are neither preference claims nor empirical claims, as “attitude claims”.
This example doesn’t mean that all “X sucks” style claims are attitude claims. Take the claim “Windows sucks”. It does seem plausible that someone who said this could be persuaded that their claim was false through empirical evidence—e.g. by a meta-analysis that compared Windows worker productivity favorably to worker productivity using other operating systems.
So if someone says Windows sucks, then whether their claim is empirical, attitudinal, or (most likely) some mixture depends on what’s going on in their head. You may be able to classify the claim with further conversation, however. If they say “even if users are happiest and most productive using Windows, it still sucks!”, that suggests the claim is almost entirely attitudinal.
Attitude claims taxonomy
I’ve been writing down attitude claims I think of or come across in my notebook. Here’s some that I’ve seen so far. Hopefully they’ll serve as good training data for your internal classifier.
Status claims.
“I’m not that smart”, if the speaker knows they’ve scored in the 95th percentile or above on a g-loaded standardized test. (This is similar to the “Windows sucks” person who still thinks Windows sucks after reading and agreeing with the meta-analysis—at this point, we can probably rule out the idea that their claim is an empirical one.)
“A betting loser has far more honor than the mass of men who live by loose and idle talk.” (Side note: I like Caplan’s writing, but the oath’s postscript is a bit ironic since it doesn’t present any empirical claims, so it’s not clear in what sense it could be “wrong”.)
Claims about social rules and the details of their implementation.
“That was uncalled for.”
“I have an obligation to help others when I help myself.”
“That’s your responsibility.”
“You don’t deserve that.”
“This is our land.”
“That’s not legitimate.”
“Even little lies are wrong.”
“If you didn’t vote, you’re not allowed to complain.”
“Saying you ‘fucking love’ something should carry some weight.” (This example is particularly interesting, because Maddox is claiming that people who are making a certain affect claim are breaking a social rule.)
“Marriage is between a man and a woman.”
Attribution claims.
“The crash was your fault.”
“That was my idea.”
“You’re innocent.”
Affect claims. (Bit of a catchall here...)
“You’re weird.”
“Chess is overrated.”
“That’s really cool.”
“Professionalism is the most important thing in a business.”
“Marketing is evil.”
Not all of the examples I’ve found fit neatly into one of these categories (e.g. “I can do anything I want”), and it’s pretty common to find claims that seem like mixtures of attitude and fact/preference statements. For example, if someone says “Being outrageous is the best way to be”, are they saying “I prefer to be outrageous” or “Yay outrageousness”? Probably a bit of both.
What attitudes should I have?
That’s a “should” question, i.e. a question about social rules. Unless you meant it as a shorthand for a question about how best to achieve some goal, e.g. “What attitudes should I have in order to best achieve my preferences?” Then it becomes an empirical question.
I suspect that most people can better achieve their preferences by consciously choosing and adopting attitudes rather than going with whatever defaults they grew up with or are prevalent within their social group. Attitude hacking is not trivial, so you might want to find a friend to adopt your preferred attitude with. (This isn’t anti-epistemic groupthink as long as you’re doing this for attitudes only and not for facts.)
Are attitudes bad?
That’s an attitude question.
I think to best achieve your preferences, it’s likely optimal to take some attitudes seriously, e.g. Jon Kabat-Zinn: “as long as you are breathing, there is more right with you than there is wrong, no matter how ill or how hopeless you may feel”, or Eliezer Yudkowsky: “probability theory is also a kind of Authority and I try to be ruled by it as much as I can manage.”
Unfortunately, I haven’t managed to take any attitude claims as seriously ever since I realized that they’re basically just made up. (Which is itself an attitude statement of the affect type, about the importance of attitudes.) But I’ve also felt more free to “cheat” and modify my attitudes directly in order to optimize for my preferences.
Will pointing out that social rules are social rules make people less likely to take them seriously? Probably. The ideas in this post are dangerous knowledge that shouldn’t be spread beyond rationalist circles.
If you’re like me, you may get kind of squeamish consuming attitude-heavy media (which is also produced by rationalists, by the way; see Paul Graham or Julia Galef). That’s an attitude.
Connection with Nonviolent Communication
Empirical claim: If you restrict yourself to empirical claims and preference claims when you have an argument, you and the people you argue with will be more pleased with the outcome of your arguments.
Nonviolent Communication is a philosophy that recommends replacing attitude claims like “You’re an awful neighbor” or “It’s your fault I can’t get to sleep” with empirical claims, preference claims, and requests: “Your music is playing very loudly (fact). I’m having a hard time sleeping (fact). I’d really like to be able to get to sleep (preference). Could you turn down the volume?” Presumably this works because (a) arguments over empirical claims are sometimes actually resolved and (b) if you share preferences instead of bludgeoning people with social rules, they’re more likely to empathize with you and do things to make you happy.
More thoughts
After crystallizing the fact/attitude distinction, I started trying to apply self-skepticism to empirical claims only, and just ignoring attitude claims I didn’t like. (“That’s just, like, your opinion, man.”) Carefully considering uncomfortable empirical claims is a habit that will improve my model of the world, thereby helping me achieve my preferences. (That’s what it’s all about, right?) Carefully considering uncomfortable attitude claims, not so much, except maybe if they’re from people with whom I have valued relationships that I want to debug.
Does this post describe an attitude? I actually put it and other affect-free classification schemes into a fourth category: that of a “cognitive tool”, like a description of an algorithm, that you can take or leave as you wish.
You’ve just reinvented logical positivism.
Welcome to the last 3 years of Less Wrong.
I don’t know how much to trust the Wikipedia article, but logical positivism, in its strong forms, is meaningless. That is, it is based on a proposition that by its own criteria, is not verifiable. However, what is truly valuable—because I say so!—is developing a recognition of what is verifiable and what is not. To go further and claim that unverifiable statements are therefore meaningless is to go too far.
A writer here wrote about the statement “[JB] sucks.” And another commented: what if “JB’s music is objectively crappy music”? After this was tagged as not a rational statement, he changed the text to read “That JB’s music is crappy music according to some standard.”
It gets preposterous. Yes, the writer was correct. If there is a standard, which can be objectively applied, for “crappy music,” then one could make a claim that the music is “objectively crappy by the standard.”
But that standard itself, is it objective? How was it determined? Suppose we take a survey of his target audience, choosing 100 children in a certain age range. If the survey has a scale of 1-10, with names for each choice, with, say, 1-2 being labelled “crappy,” and we play them a song, and ask for their response, and a majority of them rate it as “crappy,” that would allow us to claim a certain kind of objective measurement (of a subjective response).
But this is not what we ordinarily mean when we say something is “crappy.” I would mean
(1) I don’t like it.
(2) We don’t like it. (I.e., me and some undefined group, maybe my friends).
(3) It doesn’t work, it’s buggy, ugly, etc.
But the expression is not objective, it doesn’t point to objective measures or standards. If we had something objective to report, we wouldn’t say it that way, except perhaps as a summary or lead-in.
Language is fluid, ordinary human speech is not mathematics. I’ll put it this way: it’s always wrong and it’s always right. That is, it is always possible to interpret it to find flaws, and always possible to find something that works.
I don’t want to say “is true,” because that would enter a completely different territory of discussion. Right now, we are talking about types of statements, and it’s a valuable inquiry.
Coming up with a definition for “meaning” was not the focus of my post… In fact, there’s no definition there, and that was on purpose.
This seems so different from all your other examples that I’m surprised you don’t comment on it. It seems to be trying to impose a rule. You might say that most of the claims are trying to promote a preference, too, but I don’t think you did.
Why did that example seem to stand out from the claims in the social rules section?
One reason: because I’d assess such an utterance very differently based on who the speaker was and what the context was, unlike the other two statements you cited.
Depending on the context, this could be a threat, or an informative statement regarding a policy, and so on. If you’re a guest in somebody’s home and hear this from your host as they see you pick up a pricey vase, you might interpret it differently than if you hear it at a shop from the shop’s owner, or from your friend who’s shopping with you (in this latter case, it might be an empirical claim, that you’d counter with “no, in this state shops must carry insurance and customers are not liable for unintended breakage”).
I’d also disagree with you on the characterization of utterances of the “that was uncalled for” family, and might suggest that the linguistics you deploy in your post is too impoverished to account for them properly. I have only a passing familiarity with speech act theory, Gricean linguistics or relevance theory, but they strike me as better equipped to dissolve the puzzlement you seem to experience on encountering speech of that sort.
I’m not sure what I was thinking, but now I would divide most of the examples from the social rule examples and put this yet farther off. The social rule examples seem much more clearly promoting the rules than the preference examples are promoting the preferences.
But “You break it, you buy it” is not a social rule. The speaker is not saying that of course everyone knows this is the rule, but saying that he has the right to set rules in this shop and that this is his rule. Of course, his right to legislate, and the reasonableness of this particular rule is itself a social rule, promoted by the statement, but that is not the main point.
The difference I see is that it contains a description of the rule that it is promoting. It’s similar to the “if you didn’t vote, you can’t complain.” sentence in that; it doesn’t stand out so much to me though.
Let’s tease apart what you mean in trying to distinguish “empirical” claims from “unempirical” ones. You think that “Windows sucks” is an empirical claim, while, say, “Madonna sucks” is not. What does this mean?
(1) It can’t mean that “Madonna sucks” is meaningless. We all understand the sentence perfectly well.
(2) It can’t mean that “Madonna sucks” fails to convey information about the world. Certainly it largely or entirely conveys information about the speaker’s preferences; but those preferences are themselves a part of the world. “I prefer not to listen to Justin Bieber’s music.” is an empirical claim, a worldly claim, one you can be right or wrong about, one with perfectly ordinary truth conditions; so certainly, if that is the meaning of “Justin Bieber sucks,” the latter sentence must be empirical too.
(3) Perhaps the idea is that “Windows sucks” conveys information ‘straightforwardly,’ while “Madonna sucks” only conveys information by implicature — we learn things aplenty when you assert it, but we don’t learn about what you literally asserted. But all assertions have implicatures, even paradigmatically empirical ones. And all assertions convey at least as much information about the beliefs and values of the asserter as they do about the thing asserted.
(4) It can’t mean that “Madonna sucks” isn’t making a claim. Something really is being asserted… grammatically, at least.
Perhaps it means that “Madonna sucks” does not correspond to a proposition? Intuitively, “Bob is in pain” and “Is Bob in pain?” and “Be in pain, Bob!” share a certain propositional content, ⟨Bob is in pain⟩. The interjection “ouch!” and the word “linoleum” and my hairstyle, on the other hand, seem to lack propositional content.
But it’s hard to see here how we could demonstrate that “Madonna sucks” is nonpropositional — it certainly seems to be asserting some fact, and if we claim to be radically mistaken in this case, it seems to put us in danger of falling into a radical skepticism about the propositional content of all our assertions.
What is being asserted? Well, at a minimum, “sucks” is being predicated of an object, “Madonna.” There is some entity such that it is the individual Madonna, and this individual sucks. Perhaps “sucks” is like “is sinful” or “is a witch,” and there is no real-world property that corresponds to it; but in that case it doesn’t follow that ⟨Madonna sucks⟩ is not a proposition. It only follows that all propositions of the form ⟨x sucks⟩, where “sucks” is used in the Madonna way and not the Windows way, are false propositions. The lack of a metaphysical basis for some term does not in itself force us to adopt a revisionary stance toward the term’s semantics.
(5) It can’t mean that the judgment “Madonna sucks” wasn’t arrived at as a result of weighing empirical data. The Madonna hater is performing the syllogism ‘All musicians who create music that I find routinely agonizing are bad; Madonna creates such music; therefore Madonna is bad.’ This badness is predicated because of the individual’s experiences.
(6) Similarly, it can’t mean that “Madonna sucks” is an incorrigible belief. New data could convince me that Madonna doesn’t suck after all — that she no longer sucks (because her new CD is excellent), or that she never sucked in the first place (because I mistook someone else’s music for hers, or because my music-evaluating faculties were impaired when I first listened to her).
So much for psychological incorrigibility. But perhaps the belief is ‘unfalsifiable,’ in some deeper sense? It’s not clear to me how. And this deprives us of the main criterion for distinguishing “Windows sucks” from “Madonna sucks;” for in both cases the sophisticated ethicist could argue that his/her truth-conditions for “x sucks” are straightforwardly empirical.
Well said. It seems to me that John_Maxwell_IV is trying to put more weight on the rough-and-ready “fact vs opinion” distinction than it can bear. The empirical/non-empirical divide is an (the?) important dimension of that.
I disagree. The claim “Justin Bieber sucks” conveys information about the preferences of the speaker to a greater degree than “Windows sucks”.
Sure, you can call them false, but they’re an interesting subset of false propositions that are false not because they make incorrect statements about the world but because they don’t correspond to real-world properties. And it may be useful to hack your brain to think of such a proposition as “true” for self-efficacy purposes.
You would be being kinda silly, though, because, as you say, “Madonna sucks” corresponds to no real-world property. From a purely pragmatic perspective, you experience no loss regardless of the truth value you assign to statements that have dangling pointers to things that aren’t real-world properties. So you might as well choose whatever truth value you want for the purpose of helping your brain get things done.
...backed by what evidence?
A related topic would be keeping things “close to you” in conflict resolution, which is well-worn standard practice. But that’s a bit different from only using external empirical claims.
Seems to work for me, but I don’t know of any high-quality evidence on the claim.
I used to think that. Not so sure anymore.
Consider the advantages of going with the social group. The ingroup signaling of going with the herd. The institutional and social support for the defaults. Social support not just for achieving the ends, but for maintaining your attitudes and persevering along the common course.
The flip side of all those are the handicaps of consciously choosing and adopting. I also suspect that akrasia is much more common going into a societal headwind.
I suspect that if you actually run the numbers, people who go with the societal flow do better. Maybe they’re not Steve Jobs, but neither are most of those who choose to buck society.
Those can all be factors in your conscious computation of what attitude to choose. And, you can choose your herd based on their attitudes. Attitudes are by no means uniform throughout humanity or even within a single society.
True, but I’d say that this still stands:
I think you’re unfair to suggest that these are only social rules. They seem to be mostly about morality to me. One can argue that morality is a social construct, but do so in a different place if that’s what you think. Otherwise, your classification gets muddled.
What’s the difference between something being a “social rule” or being “about morality”? Is there an empirical test for determining this?
Sure.
This is one of those cases where mainstream philosophy is way ahead of LW. See the work of Walter Sinnott-Armstrong, for example.
In a nutshell, what are the main ideas of his work?
In a nutshell? Most of our Moral judgments and intuitions are much better explained by natural selection, evolutionary psychology and other natural processes than by an appeal to some ontological beings/facts/laws as many Moral Realists do. In short, Biology explains morality better than Metaphysics does.
Not to be parochial, but that sounds an awful lot like the LW consensus.
(Granted, that consensus owes as much to a filtered set of mainstream moral philosophy as it does to endogenous content. Probably more; basic ethics isn’t a major focus here.)
Yep. My standard go-to on nearest mainstream metaethical philosophy to LW is Frank Jackson’s analytic descriptivism / moral functionalism.
For the curious...
BerryPick6′s summary of Walter Sinnott-Armstrong’s moral views — “Most of our Moral judgments and intuitions are much better explained by natural selection, evolutionary psychology and other natural processes than by an appeal to some ontological beings/facts/laws” — concerns what mainstream philosophy calls “evolutionary debunking arguments” (the classic paper here is Street 2006, but the point had been made less thoroughly many times before by both scientists and philosophers).
I should clarify, though, that evolutionary debunking arguments aren’t the focus of Jackson’s metaethical work, though as a naturalist Jackson assumes the basics of evolutionary biology and psychology and their implications for the origins of our moral attitudes.
The purpose of Jackson’s analytic descriptivism is, rather, to explain why something that feels kinda like moral realism can still be true despite the universe’s lack of spooky intrinsic normativity. Analytic descriptivism is one of several approaches for grounding (moral) normative properties in natural, descriptive properties. (Other well-known approaches to this include Railton’s and, less well-developed, Foot’s.)
For more, see my April 2011 blog post on Jackson’s theory. The best explanation of Jackson’s theory is, still, Miller (2003). Luckily, the next edition of Miller’s excellent book should be available early next year.
It is without a doubt one of the most helpful and informative books I’ve ever read and I strongly recommend it to anyone with any interest at all in Metaethics.
I had no idea it was being updated, any specific word on what new content will be in it?
You mean thousands of people as smart as EY, working for thousands of years, have got ahead of EY?
Shocking, right?
Seriously though, the only reason I phrased it that way was because of the discussions going on a few weeks ago where Luke talked how LW often acted like mainstream philosophy hasn’t done anything like what this site is doing. It wasn’t meant to be accusatory or anything like that.
There are multiple different semantic values for “morality,” so it’s an ambiguous term, and the intended sense will need to be stipulated. But in most modern discussions, “social rule” is not one of those values. For instance, the rules of English grammar and of dinner etiquette are social, but not moral. And English speakers recognize that violating a social rule can be morally permissible, or even morally obligatory.
Sounds to me like your use of “morality” corresponds pretty well with the definition “social rules that are super important” or the definition “an empirical cluster of social rules that share certain characteristics”.
(One of these characteristics is that people take them super seriously, even to the point of believing that they exist outside their heads, and don’t believe that they’re “just” social rules.)
BTW, none of this is meant to be an argument for egoism—I consider myself pretty altruistic. I strongly prefer to see the preferences of others satisfied, and I frequently aim to satisfy this preference for others’ preferences to be satisfied at the expense of my other preferences.
So we agree, at a minimum, that moral rules aren’t just ‘social rules.’ They may be a special kind of social rule. To figure that out, first explain to me: What makes a rule ‘social’? Is any rule made up by anyone at all, that pertains to interactions between people, a ‘social rule’? Or is a social rule a rule that’s employed by a whole social group? Or is it a rule that’s accepted as legitimate and binding upon a social group, by some relevant authority or consensus?
Most people don’t think that even frivolous, non-super-serious rules live inside their skulls. Baseball players don’t think baseball is magic, but they also don’t think the rules of baseball are neuronal states. (Whose skulls would the rules get to reside in? Is there a single ruleset spread across lots of brains, or does each brain have its own unique set of baseball rules?)
As for altruism, I share your preferences. So we can isolate the meta-ethical question from the normative one.
This seems like a definitional consideration. Maybe we could skip that stage. What does it matter what counts as a moral rule? My guess: moral rules are “more important” than non-moral rules. What does more important mean in this context? Maybe typical punishments/ostracism for breaking them are higher, or maybe your brain just feels like they’re more important.
Picture two people arguing over whether gays “should” be allowed to marry. Both are perfectly aware of statistics related to preferences for/against gay marriage and all other relevant information. Their model of the world is the same, so what are they arguing about?
Now let’s say there are two grown people collaborating on a fictional universe. One thinks one thing about the universe, and the other thinks another. Can you imagine them having a serious debate about what the fictional universe is “actually” like? I think it’s much more likely they would argue over what things should be like in order to make an interesting/cool universe than have an object-level argument over universe properties.
The rules of marriage are fictional like a fictional universe. In some cases, people advance very serious arguments about the “truth” of things that are fictional. This is very common for social rules/morality. I label these “attitude claims” in my post.
Suppose you’re living in WW2-era Germany, and you learn of a law against helping gypsies. You see a gypsy in need, and come to the conclusion that you’re morally obliged to help that gypsy; but you shirk your felt obligation, and decide to stay out of trouble, even though it doesn’t ‘feel right.’ You consider the obligation to help gypsies a moral rule, and don’t consider the law against helping gypsies a moral rule. Moreover, you don’t think it would be a moral rule even if you agreed with or endorsed it; you’d just be morally depraved as a result.
Is there anything counter-intuitive about the situation I’ve described? If not, then it seriously problematizes the idea that morality is just ‘social + important,’ or ‘social + praised if good, punished if bad.’ The law is more important to me, or I’d not have prioritized it over my apparent moral obligation. And it’s certainly more important to the Powers That Be. And the relation of praise/punishment to good/bad seems to be reversed here. Your heuristic gets the wrong results, if it’s meant to in any way resemble our ordinary concept of morality.
Is it wise to add this assumption in? It doesn’t seem required by the rest of your scenario, and it risks committing you to absurdity; surely if their models were 100% identical, they’d have totally identical beliefs and preferences and life-experiences, hence couldn’t disagree about the rules. It will at least take some doing to make their models identical.
Yes, very easily. Fans of works of fiction do this all the time. (They also don’t generally conceptualize orcs and elves as brain processes inside their skulls, incidentally.)
Maybe, but you’re assuming that the act of creation always feels like creation. In many cases, it doesn’t. The word ‘inspiration’ attests to the feeling of something outside yourself supplying you with the new ideas. Ancient mythologists probably felt this way about their creative act of inventing new stories about the gods; they weren’t all just bullshitting, some of them genuinely thought that the gods were communing with them via the process of invention. That’s an extreme case, but I think it’s on one end of a continuum of imaginative acts. Invention very frequently feels like discovery. (See, for instance, mathematics.)
I actually like your fictionalist model. I think it’s much more explanatory and general than trying to collapse a lot of disparate behaviors under ‘attitude claims;’ and it has the advantage that claims about fiction clearly aren’t empirical in some sense, whereas claims about attitude seem no less empirical than claims about muons or accordions.
Sure, I’ll accept that.
Identical models don’t imply identical preferences or emotions. Our brains can differ a lot even if we predict the same stuff.
Thanks.
Hm, they sure do to me, but based on this thread, maybe not to most people. I guess the anti-virus type approach was a bad one and people really wanted a crisp definition of “empirical claim” all along, eh? Or maybe it’s just a case of differing philosophical intuitions? Sounds like my fiction-based argument might have shifted your intuition some by pointing out that moral rules shared a lot of important characteristics with things you felt clearly weren’t empirical. (Which seems like associative thinking. Maybe this is how most philosophical discourse works?)
What do you think of my post as purely practical advice about which statement endorsements to hack in order to better achieve your preferences? Brushing aside consideration of what exactly constitutes an “empirical claim” and whatnot. (If rationalists should win, maybe our philosophy should be optimized for winning?)
Yes, but the two will have identical maps of their own preferences, if I’m understanding your scenario. They might not in fact have the same preferences, but they’ll believe that they do. Brains and minds are parts of the world.
Based on what you’re going for, I suspect the right heuristic is not ‘does it convey information about an attitude?’, but rather one of these:
Is its connotation more important and relevant than its denotation?
Does it purely convey factual content by implicature rather than by explicit assertion?
Does it have reasonably well-defined truth-conditions?
Is it saturated, i.e., has its meaning been fully specified or considered, with no ‘gaps’?
If I say “I’m very angry with you,” that’s an empirical claim, just as much as any claim about planetary orbits or cichlid ecology. I can be mistaken about being angry; I can be mistaken about the cause for my anger; I can be mistaken about the nature of anger itself. And although I’m presumably trying to change someone’s behavior if I’ve told him I’m angry with him, that’s not an adequate criterion for ‘empiricalness,’ since we try to change people’s behavior with purely factual statements all the time.
I agree with your suggestion that in disagreements over matters of fact, relatively ‘impersonal’ claims are useful. Don’t restrict your language too much, though; rationalists win, and winning requires that you use rhetoric and honest emotional appeals. I think the idea that normative or attitudinal claims are bad is certainly unreasonable, at least as unreasonable as being squicked out by interrogatives, imperatives, or interjections because they aren’t truth-apt. Most human communication is not, and never has been, and never will be, truth-functional.
Thing sometimes communicated by “X is crappy”: “There are two tribes, those who hate X and those who love X. These are at war. I am in the tribe that hates X. If you like X, you are Evil. If you love X, you are my comrade.”
You lost me toward the end there.
Aesthetic judgement is a two-place function: “X likes Y.” But for human Xi’s, “X1 likes Y”, “X2 likes Y”, “X3 likes Y” etc. tend to correlate with each other. So one could in principle draw a network like Network 1 in “How An Algorithm Feels From Inside”, with nodes labelled “X1 likes Y”, “X2 likes Y”, etc.; but it would be computationally infeasible to use such a network for anything, so one uses a network like Network 2 instead, with the central node labelled “Y is beautiful”. (But in reality, if you knew whether X1 likes Y, whether X2 likes Y, whether X3 likes Y, etc., there would be no question whether Y is beautiful left to ask.) This is a useful approximation, but breaks down with things lots of people like and lots of people dislike, e.g. Justin Bieber’s music. (Even then, it may be useful to use a network like Network 2 but only including certain subgroups of humans, e.g. musicians, or people like lukeprog who’ve heard lots and lots of different music, or people with IQ above 130, or people in my social circle, or people who wear leather jackets and long hair, etc.)
Of course, the reason why how much X1 likes Y is correlated with how much X2 likes Y is not telepathy—it’s that certain causal influences act on both. So, even if you know how much Xi likes Y for all i, there are questions left to ask.
Your meaning of “attitude” seems to amount to sloppy reasoning, where one endorses entertaining an unclear thought while refusing to unpack and sharpen its meaning (or alternatively to discard it as meaningless cognitive noise). Disapproving of “attitude” in this sense can then be a moral or aesthetic or instrumental judgement (as in, “it is wrong for a human being to reduce their clarity of thought”; or “it is disgusting when one engages in avoidable sloppy cognition”; or “it is disadvantageous to compromise one’s thinking skills by not exercising them in some situations”). Such judgments are not examples of “attitude”, as they can be unpacked and clarified as needed.
Brains don’t just reason. They also make perceptual judgements, feel emotions, and feel very strongly about statements that aren’t falsifiable empirical claims.
You say “unpack and sharpen”, I hear “rationalize”. If you’re inventing the explanation after the fact of experiencing the mental activity, is the mental activity really best understood in terms of the explanation?
The point is not to invent an explanation, but to only consider explained and meaningful those things for which you understand the explanation and meaning. If no explanation is available, don’t act as if you have one, don’t trust your brain to be thinking sense when you don’t know what it’s thinking and why.
This doesn’t sound like a sanitary or reliable heuristic. I don’t have any explanation that I deeply understand for certain things that my brain does, but when I ignore my brain on those things I invariably end up in a horrible situation that is much worse than if I’d just listened to it. I didn’t even have any clear explanation for why my brain would go “AAAAH TIGER RUN!” until I read about ev-psych, but I’m quite confident that not trusting it in such situations would be a very bad move.
In the context of this discussion, there is enough time to think things over. I primarily object to letting your brain systematically and repeatedly engage in activities of unclear purpose and meaning, without stopping to reflect on what it’s doing and why, and stopping to do that if the activity appears to be pointless.
What I was attempting to say is that even under those circumstances, there are specific contexts in which I’m consciously unclear as to what I’m doing, or why my brain wants to do this, and it seems pointless after a cursory analysis, but that in those specific contexts for specific types of activities this exact pattern has repeatedly shown itself to produce reliably better results than whatever I would decide to do consciously about those things.
These are not restricted to time-constrained scenarios of pressing urgency.
However, it might not be widely applicable to just anyone in general, since it obviously depends on some subconscious knowledge of these particular activities and a ton of background requirements and given assumptions.
The gist is: There are specific cases where I noticed a pattern that my brain does things which are unclear to me, but where if I act on them I obtain reliably better results than if I do not for certain contrived edge cases. For cases that do not pattern-match to known reliable results, I prefer to think things through as recommended (or sometimes experiment if the VoI is probably larger than the higher expected cost).
This kind of experimental evaluation seems like an all right method of judging your brain, if performed correctly. What I’m not comfortable with is endorsement of the absence of judgement over one’s cognition or of not changing anything based on such judgment, no matter what situations that endorsement is restricted to.
Hmm. Well, true for me too. I wouldn’t endorse it per-se either, especially not in an ideal world with an ideal mind.
However, considering limited mental resources, limited willpower and constant internal competition for the conscious mind’s attentions, I believe that this kind of behavior is instrumentally rational considering that it works when you have a good idea of when automatic behavior produces better results and, more importantly, all the much more likely times where it doesn’t.
tldr, but:
Did it ever occur to you that maybe they simply mean what they said? That JB’s music is crappy music according to some standard? I know, far be it from a rationality community to focus on the rational communication presumably being made, instead focusing on “signalling” is what’s in style for some stupid reason.
Same goes for:
Which is a direct quote from a comment I made here long ago :)
EDIT: removed “objectively”. I keep forgetting this word causes people’s brains to explode.
I happen to like Justin Bieber’s music okay. It’s easy to sing along to–most of his songs are in my singing range–and he has a pretty boy-church-choir sort of voice (I used to be in a choir.) I’m not sure how you can define his music, or anything that is the subject of aesthetic preferences, as “objectively crappy” given that, obviously, some people find it enjoyable.
Did I seriously just get downvoted on Less Wrong for pointing out what music I like? And making the point that you can’t define something as ‘objectively crappy’, only as ‘subjectively crappy on average’ based on how many people like/dislike it–in fact, JB likely fails this test based on the sheer number of pre-teens and tweens who like his music. I think it’s just that a lot of people who aren’t tweens don’t want to signal affiliation with them. I would expect this of the commenters on a site like mlia, but not here.
To the extent that anything in aesthetics is objective, I think we can agree that most of these movies probably are, in fact, objectively crappy.
“Subjectively crappy on average” based on the sample population who has evaluated them.
Anyone who would propose “objectively crappy” isn’t expressing rationality. There is no “objectively crappy,” unless you have objective standards for “crappy,” and apply them objectively.
I think Justin Bieber sucks.
I’m not going to tell my daughter that, because it’s just my own reaction, and my daughter would kill me.
Okay, okay, she wouldn’t kill me. She’d just tell me I’m an idiot. She’d be right.
I’m training her to distinguish between judgment and fact. It’s a task, she’s eleven. She does understand, when she’s sane. But the programming is strong that opinion is Real, man. And you actually are an Idiot, Dad.
Except when I just did something she likes (which is most of the time) and she is saying You are Awesome, Dad. Hey, I think she’s Awesome, too. That’s an objective fact.
Heh!
“Justin Bieber sucks” is a subjective comment. It would be so even if every human being agreed, and, rather obviously, that’s not the case.
I upvoted you, partially because I agree with you, but also because I liked that you gave an actual real-world scenario and it helped me understand the issue more clearly.
This is my provisional position about aesthetics: aesthetics is a two-place word (“X likes Y”), but for human Xi’s, “X1 likes Y”, “X2 likes Y”, “X3 likes Y” etc. are correlated with one another. Therefore, one could draw a network like Network 1 in “Neural Categories” with the nodes labelled “X1 likes Y”, “X2 likes Y”, “X3 likes Y” etc.; but such a network would be infeasible to compute, so one can approximate it with a network like Network 2 with the central node labelled “Y is beautiful”. This is usually useful, but breaks down outside the domain of applicability of the approximation, i.e. when considering stuff that lots of people like and lots of people hate such as Justin Bieber’s music; but even then, a smaller Network 2-type network with only aesthetic judgements of a certain group of people (e.g. musicians, or people like lukeprog who’ve heard lots of different music, or people with IQ above 130, or whatever) may (or may not) be useful.
So it’s not objective, unless it is. How do you know there aren’t objective standards?
“How do you know there aren’t objective standards.”
Because “Sucks” and “Crappy” are words which relate to subjective valuation concepts. You can redefine the words to have some objective criteria, then measure his music. However, redefining words doesn’t change the original definition, it just clouds language. And p(0.98) that 98% of all people claiming he “sucks” have NOT come up with a clear objective standard using a new definition (excluding that new definition being along the lines of “me and/or my social circle do not like his music”.)
You can have a set of Objective Criteria For Evaluating Music, but that’s not what most people mean when they say his music sucks.
What do you mean by ‘subjective valuation concept’? Rationality is a ‘subjective valuation concept,’ in several senses; its metric is relativized to, established by, and finds much or all of its content in individual mental states, and it is an evaluative term whose applicability standards are likewise stipulated by a mixture of common language usage and personal preferences. What makes ‘X is rational’ more objective than ‘X sucks’?
Well, the answer is either: a) Rationality is better defined, similar to how 2+2=4 is more objective b) Rationality is not more objective than suckiness.
My gut says A, but I suspect that a random population survey would be evidence more towards B.
Now, if you’ve redefined Rationality into a technical term, like it’s generally used here on LessWrong, AND you’re speaking in a context where your audience understands that you mean the technical term, no issue. Same as how “Bieber is crappy” communicates plenty to people who already know YOUR definition of crappy.
I would agree that the main problem is a lack of clear truth conditions for “x sucks;” the fact that it’s a claim about subjective states, and that it relies on implicature, is immaterial. But this is a problem to some extent for nearly all natural-language terms, including “x is rational” in the colloquial sense. And the problem can be resolved by stipulating truth-conditions for “x sucks” just as easily as for “x is rational.” So I think we’d agree that we should focus on getting people to taboo and clarify all their words, not just on feigning ‘objectivity’ by avoiding making any appeals to preferences or other mental states. Preferences are real.
“a lack of clear truth conditions”
That is a very useful definition, thank you :)
Or even just undefine the words and inherit their literal meanings regarding lower relative air pressure and faeces.
Says you. But if I say Trabants are crappy compared to Ferraris, aren’t I expressing something reasonably objective?
Most everyone will get what you MEAN, but that doesn’t mean that it’s ACTUALLY become objective. It’s just a colloquial usage that most people recognize, and it’s probably hazardous to your memetic health to let yourself believe that just because people understand it, that it’s literally true :)
Going a bit more extreme than a mere Trabant: If you had a car which exploded after any impact of more than 5 MPH, wiping out half a city, it would be crappy to everyone EXCEPT terrorist bombers who are going “Wow, I’ll take three!”
I am pretty sure that you two use different definitions of the term “objective”. Tabooing (LW jargon for “defining”) “objective” might be helpful.
Stealing from RobbBB: subjective shall be those things without a clear truth condition. You can taboo the word in question (“sucks”) and replace it with a clear truth condition (“I want a fuel efficient car”), at which point it becomes objective: it has a clear truth condition :)
Subjective things have clear truth conditions: “I like vanilla” is true because I like vanilla. The thing is that they have truth conditions that are indexed to individuals.
You might consider that a clear truth condition, but it would be fairly complex for me to determine whether or not you’re lying, or just mistaken. Thus, while it has a truth condition, it’s not really a clear one. “Peterdjones professed to like vanilla on 17/11/2012” is much clearer, and I’d say about the limit of what we can objectively say.
You might consider it a clear truth condition, since we strongly tend not to question such reports by default.
http://wiki.lesswrong.com/wiki/Highly_Advanced_Epistemology_101_for_Beginners
You seem deeply confused by what is meant by “truth”. Suffice to say, “not questioned by most people” has nothing to do with what I mean by the word.
“if you are not confused by it, you don’t understand it”.
You may mean something that floats free of common intuitions. I can only wish you the best of luck in arguing a theory of truth from ground zero, an intuition-free basis.
Empirical truth? I have the intuition that if I can see and touch it, it’s there. How can I prove that?
Mathematical truth? I have the intuition that if you can prove something from intuitively obvious axioms by truth-value-preserving rules of inference, then it is true. But why would the axioms be true absent intuition? And what’s so special about truth-preservation?
Etc.
Etc.
Whole History of Human Thought 101.
I’ve been assuming troll for a bit, but it seems silly to wager on it since you could just lie to me. Although I suppose to YOU it wouldn’t be a lie, since your intuitions on truth make everything you say automatically true. Neat trick, but it doesn’t really work when someone can link you to an actual working, usable definition of truth. Maybe you are just very bad at reading? If so, you might want to try a different site. We use a lot of big words here.
I suppose I shouldn’t feed you, but I’m finding you a sort of adorable troll. Not that I’ll actually be responding further :)
That’s not what I am arguing at all. I am only appealing to the widespread idea that a subject’s testimony about their own subjective tastes, thoughts, beliefs and preferences is correct by default. I don’t think people can subjectively make 2+2=5, if that needs pointing out. I chose liking vanilla as an example for a reason.
That is a rather ironic comment, given that you have badly misunderstood me.
In case you need help making up your mind: I added Peterdjones to my ignore list a month or two ago, after realizing the futility of the discussions I had had with him/her before then. Having scanned through what they wrote since, I realize that this was indeed a good choice.
Oooh, I didn’t realize there was an ignore list. Thank you indeed :)
Umm, it’s in my head :) After years on IRC and online forums I found this to be a useful way to prevent people from getting under my skin. Once someone is classified as incapable of an intelligent discussion I find the stuff they write not nearly as annoying. YMMV.
Oh, you’re the person who doesn’t believe in reality. I don’t mind you ignoring me, but you should really have a chat with handoflixue.
Maybe someone could tell me what would be better evidence of what someone thinks or feels than their own reports.
The issue is that even strong evidence may not amount to a clear truth condition (although I suspect what one means by “clear truth condition” may need more detail). But one obvious issue is that observed human behavior can matter a lot. Someone might claim that they really care a lot about the poor, but if they never give to charity or do anything else to assist the poor, their behavior is pretty strong evidence that their report isn’t very useful.
Someone’s individual behaviour may well be a clear truth condition, in addition to their reports, and it is still subjective because different people behave differently. “Clear truth condition” still does not equate to “objective truth condition”.
This is quite an onerous requirement, given that people disagree on that “clear truth” thing a lot.
In your example, people may disagree on what “a fuel efficient car” is. Does it include the energy required to manufacture and later dispose of the batteries? If so, what total mileage does one use to properly amortize it?
Something along the lines of “measurable with an agreed upon procedure” might be better for the group of people who can agree on the measurement procedure. Under this definition, if no such group includes both Abd and his teen daughter, then “Justin Bieber sucks” is “objectively” a subjective comment. Specifically, everyone who agrees with the above definition of objectiveness and applies it (“look for a group of people who agree on ways to measure musical suckiness and include both Abd and his daughter, and come up empty”) will conclude that there is no measurement procedure which can resolve their dispute, and therefore that the statement under consideration is objectively subjective. Not to be confused with subjectively objective.
Well, not sure how much of the above made sense.
I like the idea that if there is no method-of-measure such that both parties can agree to that definition, then it is subjective. It nicely encapsulates my intuitive feelings on subjective vs objective, while being much more technically precise :)
EDIT: I’d go on to say that “a clear truth condition” and “an agreed upon method of measuring”, to me, work out as having the same meaning. People disagree on “truth” quite a lot, but such people are also unlikely to agree to a specific method of measuring. If they have agreed, then there is a clear truth condition. But having it spelled out was still Very Useful to me, and probably is a better way of communicating it :)
There’s a clear something condition. Elsewhere, you object to the idea that the presence of agreement, or lack of disagreement, (“not questioned by most people”) is sufficient for truth:
http://lesswrong.com/lw/fgz/empirical_claims_preference_claims_and_attitude/7vcg
I’m glad that you found my comment helpful. It was certainly worthwhile for me trying to articulate my qualifications of the term “clear truth”.
I dislike using the word “truth” outside of its precise meaning in mathematical logic, because it is not very useful instrumentally—there is often no way to check whose interpretation is closer to “what really happened” or what would happen in every single one of many counterfactual scenarios.
For example, one of the standard things a therapist says during a couple’s counseling in response to the contradictory versions of what happened at some point in the rocky relationship is that “you have to accept that there is one partner’s truth and there is the other partner’s truth”. Both are completely sure in their version of what had transpired, and that the other partner has it wrong. Unfortunately, there is almost never a way to tell what “actually” happened, and even if there were, it would not be nearly as helpful going forward as working out real issues instead of dwelling on who said/did what and when and how this grudge can never be resolved without some major restitution.
Thank you! I shall also steal this, though in my case for more nefarious purposes. It is a useful tactic.
I don’t see why. Aren’t things like 0-60 timings objective?
That makes it a good bomb, not a good car.
http://lesswrong.com/lw/fgz/empirical_claims_preference_claims_and_attitude/7udr seems to cover it at this point :)
Not very well, though. I think Mainstream Philosophy is way ahead on this.
If there were actual Objective Standards for things, and we could know them, it would be very surprising to me that the world functions the way it does. It is far less surprising that they do not exist or that they exist outside our ability to know them.
How do you think the world would look differently if there were actual objective standards for things?
Assuming they were knowable, I think arguments over JB being bad or good could be solved in a much simpler way. Namely, by appealing to this Universal Objective Standard. Arguments about personal taste (like the JB example) would look much more like arguments over whether or not chromosomes are located in cells than they do now, which is something of an “I’m right!” “No, I’m right!” deal...
So in a question for which there is an objective standard, we should expect to see widespread consensus among those familiar with it (so not among children, or the ignorant, but among those educated enough to understand the standard).
If it turned out that, among those we could expect to be familiar with an objective standard (if there is one), there is widespread agreement over whether or not JB was good or bad, would you concede that in this case it appears there is an objective standard?
With widespread disagreement over murder being wrong?
I’m sorry, I don’t understand what you mean. Are you agreeing with me? Because there is disagreement over whether murder is wrong, and even if there wasn’t, I’m not sure that would be a very powerful factor in my judgement of whether or not Objective Standards exist and are knowable.
Give examples
Mackie’s Error-Theory is the first that springs to mind. One could make the case that no Non-Cognitivist theory allows us to say that ‘murder is wrong’. Various versions of Divine Command Theory would not necessarily believe that murder is wrong. I would link to the corresponding pages on SEP, but I’m terrible at the code on this site, so I’ll trust that you can find them...
That’s disagreement about whether anything is wrong, and it isn’t widespread.
You asked for examples of theories where ‘murder’ is not necessarily considered ‘wrong’. I provided you with three, of which two have been at one time or another, or are currently, very widely held. I’ve already understood, thanks to my conversation with thomblake which I linked to earlier, that we aren’t having a substantive disagreement here, so I don’t know what more you want from me.
Your comment:
...seemed to me to be a standard objection to moral objectivism on the basis of disagreement about first order ethics. I responded that there are aspects of first order ethics that are in fact agreed on by most people, i.e. murder is wrong, and charity is not-wrong. You then replied in terms of metaethics. Was that a change of subject, or were you talking about metaethics all along? If the latter, why would metaethics, an academic specialism understood only by a few, affect “the way the world functions”?
I don’t actually recall referring to ethics or metaethics at all, just making an (epistemological? metaphysical? I’m not quite sure what to call it...) claim about my perceptions and beliefs of the difference being at odds with what I would expect to find in a world with Objective Standards. Do you think making that kind of statement thrusts us into a conversation about Ethics, or were you just changing the subject when you brought up murder and value judgments? If the former, please let me know what I’m missing here...
A world with objective standards for some things, or nothing, or everything? You were confidently claiming that “crappy” is always subjective. But there are at least some objective standards. Several examples have been given.
Hmm. That’s a good point. I’ll have to think on it for a bit and get back to you. :)
There is literally no disagreement over whether ‘unjustified killing’ is ‘justified’.
There is widespread disagreement over which acts constitute murder.
In so far as that statement is true it is a tautology, akin to saying “Everyone agrees that not A is not A”. If one tries to make it non-tautologous by, say, referring to specific subclasses of killing, then one is going to run into problems like sociopaths.
Yet that’s the entire crux of the issue.
“Everyone agrees that ~A is ~A” is not a tautology, any more than “Everyone agrees that second-order logic is sound.” is a tautology.
“Unjustified killing” (Murder) is already the intersection of acts which are killing and acts which are not justified. The problem is that different people have different sets of “Acts which are justified” and “Acts which are morally wrong”.
I don’t think you and JoshuaZ are having a substantive disagreement.
If you want to be pedantic, note that murder generally means unlawful or extralegal killing, not unjustified killing.
In the legal sense, murder is killing which is not legally justified. In the moral sense, murder is killing which is not morally justified.
There are certainly disagreements as to whether violations of any law are inherently immoral.
Can you provide a citation? I was under the impression that legal killing is not considered murder, even if it is not legally justified. For example, a judge might sentence a criminal to death for unjust reasons, but that would not be considered murder, even though it could be a sort of wrongful death. Or is there a more technical sense of “legally justified” at play?
The criminal code defines things very explicitly, even though sometimes circularly. For example, the USC defines murder: “Murder is the unlawful killing of a human being with malice aforethought.”
The major distinction is from manslaughter: “Manslaughter is the unlawful killing of a human being without malice.”
Manslaughter is defined in such a way as that it is not an act, but rather either the unintended result of negligence or a reaction which does not constitute a decision.
The moral sense of murder includes many things not included in the legal sense, such as the execution of an innocent person.
There is disagreement over whether it even makes sense to call things ‘justified’ or ‘unjustified’, in addition to disagreement over whether actions in general can ever be ‘justified’ or ‘unjustified’.
I agree that if one were to concede that something is P, it would be very difficult for him to also assert that ~P, but I don’t really see how that’s relevant, since, as I said, there is in fact disagreement over whether killing can ever be unjustified, is ever unjustified, or whether that word even means what most people think it means.
‘Murder’ is defined as ‘unjustified killing’.
Killing is not always murder.
If one believes that acts cannot be ‘unjustified’, one does not believe in murder. (In the same sense as ‘I don’t believe in telepathy.’)
Full Disclosure: I’m still not sure I really understand how definitions and differing opinions on definitions are treated and handled here at LW, so if you could enlighten me in this area in general, I’d really appreciate it.
That being said, I’m positive I’ve seen people use the word murder even when they believed the act was justified. Obviously, had they used the words ‘unjustified killing’, there would be very little room for argument, but be that as it may, I’m still not positive that ‘murder’ has to be / is usually defined as ‘unjustified killing’.
Further, I think it is a fairly consistent position to not believe that things can be ‘unjustified’, define ‘murder’ as something like ‘killing without explicit consent of victim’ and believe in murder at the same time; I’m not seeing anything wrong with holding that kind of position.
Ideally, the sides of a debate figure out whether there is a substantive or definitional dispute. Personally, I think there is value in figuring out the most useful definition for a particular conversation, but I’m not sure if that is the local consensus.
There is pretty widespread consensus that arguing by definition is not productive in figuring out what is true.
The standard approach is to:
1. notice you’re having a definitional dispute
2. find / make up new words to refer to the two definitions under dispute
3. go back to the substantive discussion, without threat of equivocation
In your opinion, am I having a definitional dispute with people in this thread, or are we disagreeing about something else?
Yes, starting here. Decius is just noting that murder means “unjustified killing” and so claims about the wrongness of murder are tautological.
Ah, so I guess my dispute is not with him, rather with Peterdjones. I just don’t believe that ‘murder’, as Decius is defining it, ever happens. Also, how this is at all relevant to the matter we were discussing earlier is still somewhat unclear to me.
Anyway, thank you, this has been most helpful.
That’s a pretty clear case of using the word wrong, unless you’re getting into really fine distinctions. If you spot someone doing that, it’s probably worth pointing out that most people would be confused by using the word that way.
In a particular context, you might want to make use of the distinction between unjustified killing and unlawful killing, in which case murder would be the latter.
I’m certainly not confused when someone uses ‘murder’ without meaning ‘unjustified killing’. Is this just me?
EDIT: See the discussion me and thomblake just had
Very interesting. I’m wondering if such a classification has been discussed before or is it mostly original research?
Do you think that the discussion quality (how does one measure it?) on this forum would be improved if the participants consciously considered their statement’s taxonomy and clarified them to make their class as unambiguous as possible, or would it just make the discussion more cumbersome? For example, how do you classify your very first (meta-)claim: “None of them are falsifiable claims about the nature of reality.” Is it an opinion?
The snarky answer: It’s not a falsifiable claim.
Any claim might be falsifiable if it is adequately specified, so that it becomes testable. If a claim, as stated, isn’t falsifiable, it might become so through specification. The author hints at this with:
And some of the “different claims” may be falsifiable.
Ultimately, we could also take unfalsifiable claims as being expressions of some attitude. It’s only when we try to determine if they are “true” as applied to some reality “out there” that we run into trouble.
The value of the post is in practicing and developing the skill of ready identification of the whole class of claims that are not factual, i.e., not about reality aside from our judgments, opinions, estimations, theories, preferences, conclusions.
Original research.
I think it’s pretty rare to read what I’d consider to be unambiguous attitude statements here on LW. I guess it might be worthwhile to call them out when they happen.
Hm. Well, in the post, I argued that the attitude/fact distinction is in the head of the speaker. Which means that my attitude statement examples are examples of the kind of things people say when they make attitude statements, not attitude statements themselves. So I guess maybe you’d have to find someone who endorsed the claims and ask them if there’s any empirical evidence that would change their mind, or something like that. (But what if the Atlas Shrugged person says yes, if I read a better book, I would change my mind?)
BTW, I’m not sure how to deal with mathematical truth, so I didn’t mention it.
Classifications tend to be definitions, which are yet another category of statements.
Here we go again...
If you are worried about non-empirical statements... hey! Why not call them non-empirical statements? “Invisible undetectable gorilla” is meaningful by a number of robust and widely used definitions of “meaning”.