Conceptual Analysis and Moral Theory
Part of the sequence: No-Nonsense Metaethics. Also see: A Human’s Guide to Words.
If a tree falls in the forest, and no one hears it, does it make a sound?
Albert: “Of course it does. What kind of silly question is that? Every time I’ve listened to a tree fall, it made a sound, so I’ll guess that other trees falling also make sounds. I don’t believe the world changes around when I’m not looking.”
Barry: “Wait a minute. If no one hears it, how can it be a sound?”
Albert and Barry are not arguing about facts, but about definitions:
...the first person is speaking as if ‘sound’ means acoustic vibrations in the air; the second person is speaking as if ‘sound’ means an auditory experience in a brain. If you ask “Are there acoustic vibrations?” or “Are there auditory experiences?”, the answer is at once obvious. And so the argument is really about the definition of the word ‘sound’.
Of course, Albert and Barry could argue back and forth about which definition best fits their intuitions about the meaning of the word. Albert could offer this argument in favor of using his definition of sound:
My computer’s microphone can record a sound without anyone being around to hear it, store it as a file, and it’s called a ‘sound file’. And what’s stored in the file is the pattern of vibrations in air, not the pattern of neural firings in anyone’s brain. ‘Sound’ means a pattern of vibrations.
Barry might retort:
Imagine some aliens on a distant planet. They haven’t evolved any organ that translates vibrations into neural signals, but they still hear sounds inside their own heads (as an evolutionary byproduct of some other evolved cognitive mechanism). If these creatures seem metaphysically possible to you, then this shows that our concept of ‘sound’ is not dependent on patterns of vibrations.
If their debate seems silly to you, I have sad news. A large chunk of moral philosophy looks like this. What Albert and Barry are doing is what philosophers call conceptual analysis.1
The trouble with conceptual analysis
I won’t argue that everything that has ever been called ‘conceptual analysis’ is misguided.2 Instead, I’ll give examples of common kinds of conceptual analysis that corrupt discussions of morality and other subjects.
The following paragraph explains succinctly what is wrong with much conceptual analysis:
Analysis [had] one of two reputations. On the one hand, there was sterile cataloging of pointless folk wisdom—such as articles analyzing the concept VEHICLE, wondering whether something could be a vehicle without wheels. This seemed like trivial lexicography. On the other hand, there was metaphysically loaded analysis, in which ontological conclusions were established by holding fixed pieces of folk wisdom—such as attempts to refute general relativity by holding fixed allegedly conceptual truths, such as the idea that motion is intrinsic to moving things, or that there is an objective present.3
Consider even the ‘naturalistic’ kind of conceptual analysis practiced by Timothy Schroeder in Three Faces of Desire. In private correspondence, I tried to clarify Schroeder’s project:
As I see it, [your book] seeks the cleanest reduction of the folk psychological term ‘desire’ to a natural kind, à la the reduction of the folk chemical term ‘water’ to H2O. To do this, you employ a naturalism-flavored method of conceptual analysis according to which the best theory of desire is one that is logically consistent, fits the empirical facts, and captures how we use the term and our intuitions about its meaning.
Schroeder confirmed this, and it’s not hard to see the motivation for his project. We have this concept ‘desire’, and we might like to know: “Is there anything in the world similar to what we mean by ‘desire’?” Science can answer the “is there anything” part, and intuition (supposedly) can answer the “what we mean by” part.
The trouble is that philosophers often take this “what we mean by” question so seriously that thousands of pages of debate concern which definition to use rather than which facts are true and what to anticipate.
In one chapter, Schroeder offers 8 objections4 to a popular conceptual analysis of ‘desire’ called the ‘action-based theory of desire’. Seven of these objections concern our intuitions about the meaning of the word ‘desire’, including one which asks us to imagine the existence of alien life forms that have desires about the weather but have no dispositions to act to affect the weather. If our intuitions tell us that such creatures are metaphysically possible, goes the argument, then our concept of ‘desire’ need not be linked to dispositions to act.
Contrast this with a conversation you might have with someone from the Singularity Institute. Within 20 seconds of arguing about the definition of ‘desire’, someone will say, “Screw it. Taboo ‘desire’ so we can argue about facts and anticipations, not definitions.”5
Disputing definitions
Arguing about definitions is not always misguided. Words can be wrong:
When the philosophers of Plato’s Academy claimed that the best definition of a human was a “featherless biped”, Diogenes the Cynic is said to have exhibited a plucked chicken and declared “Here is Plato’s Man.” The Platonists promptly changed their definition to “a featherless biped with broad nails.”
Likewise, if I give a lecture on correlations between income and subjective well-being and I conclude by saying, “And that, ladies and gentlemen, is my theory of the atom,” then you have some reason to object. Nobody else uses the term ‘atom’ to mean anything remotely like what I’ve just discussed. And if I ever pull the same trick with moral terms, I hope you will argue that my definition of ‘morality’ is ‘wrong’ (or unhelpful, or confusing, or something).
Some unfortunate words are used in a wide variety of vague and ambiguous ways.6 Moral terms are among these. As one example, consider some commonly used definitions for ‘morally good’:
that which produces the most pleasure for the most people
that which is in accord with the divine will
that which adheres to a certain list of rules
that which the speaker’s intuitions approve of in a state of reflective equilibrium
that which the speaker generally approves of
that which our culture generally approves of
that which our species generally approves of
that which we would approve of if we were fully informed and perfectly rational
that which adheres to the policies we would vote to enact from behind a veil of ignorance
that which does not violate the concept of our personhood
that which resists entropy for as long as possible
Often, people can’t tell you what they mean by moral terms when you question them. There is little hope of taking a survey to decide what moral terms ‘typically mean’ or ‘really mean’. The problem may be worse for moral terms than for (say) art terms. Moral terms have more powerful connotations than art terms, and are thus a greater attractor for sneaking in connotations. Moral terms are used to persuade. “It’s just wrong!” the moralist cries, “I don’t care what definition you’re using right now. It’s just wrong: don’t do it.”
Moral discourse is rife with motivated cognition. This is part of why, I suspect, people resist dissolving moral debates even while they have no trouble dissolving the ‘tree falling in a forest’ debate.
Disputing the definitions of moral terms
So much moral philosophy is consumed by debates over definitions that I will skip to an example from someone you might hope would know better: reductionist Frank Jackson7:
...if Tom tells us that what he means by a right action is one in accord with God’s will, rightness according to Tom is being in accord with God’s will. If Jack tells us that what he means by a right action is maximizing expected value as measured in hedons, then, for Jack, rightness is maximizing expected value...
But if we wish to address the concerns of our fellows when we discuss the matter—and if we don’t, we will not have much of an audience—we had better mean what they mean. We had better, that is, identify our subject via the folk theory of rightness, wrongness, goodness, badness, and so on. We need to identify rightness as the property that satisfies, or near enough satisfies, the folk theory of rightness—and likewise for the other moral properties. It is, thus, folk theory that will be our guide in identifying rightness, goodness, and so on.8
The meanings of moral terms, says Jackson, are given by their place in a network of platitudes (‘clauses’) from folk moral discourse:
The input clauses of folk morality tell us what kinds of situations described in descriptive, non-moral terms warrant what kinds of description in ethical terms: if an act is an intentional killing, then normally it is wrong; pain is bad; ‘I cut, you choose’ is a fair procedure; and so on.
The internal role clauses of folk morality articulate the interconnections between matters described in ethical, normative language: courageous people are more likely to do what is right than cowardly people; the best option is the right option; rights impose duties of respect; and so on.
The output clauses of folk morality take us from ethical judgements to facts about motivation and thus behaviour: the judgement that an act is right is normally accompanied by at least some desire to perform the act in question; the realization that an act would be dishonest typically dissuades an agent from performing it; properties that make something good are the properties we typically have some kind of pro-attitude towards, and so on.
Moral functionalism, then, is the view that the meanings of the moral terms are given by their place in this network of input, output, and internal clauses that makes up folk morality.9
And thus, Jackson tosses his lot into the definitions debate. Jackson supposes that we can pick out which platitudes of moral discourse matter, and how much they matter, for determining the meaning of moral terms—despite the fact that individual humans, and especially groups of humans, are themselves confused about the meanings of moral terms, and which platitudes of moral discourse should ‘matter’ in fixing their meaning.
This is a debate about definitions that will never end.
Austere Metaethics vs. Empathic Metaethics
In the next post, we’ll dissolve standard moral debates the same way Albert and Barry should have dissolved their debate about sound.
But that is only the first step. It is important to not stop after sweeping away the confusions of mainstream moral philosophy to arrive at mere correct answers. We must stare directly into the heart of the problem and do the impossible.
Consider Alex, who wants to do the ‘right’ thing. But she doesn’t know what ‘right’ means. Her question is: “How do I do what is right if I don’t know exactly what ‘right’ means?”
The Austere Metaethicist might cross his arms and say:
Tell me what you mean by ‘right’, and I will tell you what is the right thing to do. If by ‘right’ you mean X, then Y is the right thing to do. If by ‘right’ you mean P, then Z is the right thing to do. But if you can’t tell me what you mean by ‘right’, then you have failed to ask a coherent question, and no one can answer an incoherent question.
The Empathic Metaethicist takes up a greater burden. The Empathic Metaethicist says to Alex:
You may not know what you mean by ‘right.’ You haven’t asked a coherent question. But let’s not stop there. Here, let me come alongside you and help decode the cognitive algorithms that generated your question in the first place, and then we’ll be able to answer your question. Then not only can we tell you what the right thing to do is, but also we can help bring your emotions into alignment with that truth… as you go on to (say) help save the world rather than being filled with pointless existential angst about the universe being made of math.
Austere metaethics is easy. Empathic metaethics is hard. But empathic metaethics is what needs to be done to answer Alex’s question, and it’s what needs to be done to build a Friendly AI. We’ll get there in the next few posts.
Next post: Pluralistic Moral Reductionism
Previous post: What is Metaethics?
Notes
1 Eliezer advises against reading mainstream philosophy because he thinks it will “teach very bad habits of thought that will lead people to be unable to do real work.” Conceptual analysis is, I think, exactly that: a very bad habit of thought that renders many people unable to do real work. Also: My thanks to Eliezer for his helpful comments on an early draft of this post.
2 For example: Jackson (1998), p. 28, has a different view of conceptual analysis: “conceptual analysis is the very business of addressing when and whether a story told in one vocabulary is made true by one told in some allegedly more fundamental vocabulary.” For an overview of Jackson’s kind of conceptual analysis, see here. Also, Alonzo Fyfe reminded me that those who interpret the law must do a kind of conceptual analysis. If a law has been passed declaring that vehicles are not allowed on playgrounds, a judge must figure out whether ‘vehicle’ includes or excludes rollerskates. More recent papers on conceptual analysis are available at Philpapers. Finally, read Chalmers on verbal disputes.
3 Braddon-Mitchell (2008). A famous example of the first kind lies at the heart of 20th century epistemology: the definition of ‘knowledge.’ Knowledge had long been defined as ‘justified true belief’, but then Gettier (1963) presented some hypothetical examples of justified true belief that many of us would intuitively not label as ‘knowledge.’ Philosophers launched a cottage industry around new definitions of ‘knowledge’ and new counterexamples to those definitions. Brian Weatherson called this the “analysis of knowledge merry-go-round.” Tyrrell McAllister called it the ‘Gettier rabbit-hole.’
4 Schroeder (2004), pp. 15-27. Schroeder lists them as 7 objections, but I count his ‘trying without desiring’ and ‘intending without desiring’ objections separately.
5 Tabooing one’s words is similar to what Chalmers (2009) calls the ‘method of elimination’. In an earlier post, Yudkowsky used what Chalmers (2009) calls the ‘subscript gambit’, except Yudkowsky used underscores instead of subscripts.
6 See also Gallie (1956).
7 Eliezer said that the closest thing to his metaethics from mainstream philosophy is Jackson’s ‘moral functionalism’, but of course moral functionalism is not quite right.
8 Jackson (1998), p. 118.
9 Jackson (1998), pp. 130-131.
References
Braddon-Mitchell (2008). Naturalistic analysis and the a priori. In Braddon-Mitchell & Nola (eds.), Conceptual Analysis and Philosophical Naturalism (pp. 23-43). MIT Press.
Chalmers (2009). Verbal disputes. Unpublished.
Gallie (1956). Essentially contested concepts. Proceedings of the Aristotelian Society, 56: 167-198.
Gettier (1963). Is justified true belief knowledge? Analysis, 23: 121-123.
Jackson (1998). From Metaphysics to Ethics: A Defense of Conceptual Analysis. Oxford University Press.
Schroeder (2004). Three Faces of Desire. Oxford University Press.
It almost annoys me, but I feel compelled to vote this up. (I know groundbreaking philosophy is not yet your intended purpose but) I didn’t learn anything, I remain worried that the sequence is going to get way too ambitious, and I remain confused about where it’s ultimately headed. But the presentation is so good—clear language, straightforward application of LW wisdom, excellent use of hyperlinks, high skimmability, linked references, flattery of my peer group—that I feel I have to support the algorithm that generated it.
Most of your comment looks as though it could apply just as well to the most upvoted post on LW ever (edit: second-most-upvoted), and that’s good enough for me. :)
There are indeed many LW regulars, and especially SI folk, who won’t learn anything from several posts in this series. On the other hand, I think that these points haven’t been made clear (about morality) anywhere else. I hope that when people (including LWers) start talking about morality with the usual conceptual-analysis assumptions, you can just link them here and dissolve the problem.
Also, it sounds like you agree with everything in this fairly long post. If so, yours is faint criticism indeed. :)
*Second most upvoted post. I was a bit sad that Generalizing From One Example apparently wasn’t the top post anymore because I really liked it, and while I also liked Diseased Thinking I just didn’t like it quite as much. Nope, not the case, Generalizing From One Example is still at the top. Though I do hope it will eventually be replaced by a post that fully deserves to.
Oops, thanks for the correction. I had to pull from memory because the ‘Top’ link doesn’t work in my browser (Chrome on Mac). It just lists an apparently random selection of posts.
Look for the date range (“Links from”) in the sidebar—you want “All Time”.
Yes, we’re fixing the placement of this control in the redesign.
Hey, lookie there!
This comment is for anyone who is confused about where the ‘no-nonsense metaethics’ sequence is going.
First, I had to write a bunch of prerequisites. More prerequisites are upcoming:
Intuitions and Philosophy
The Neuroscience of Desire
The Neuroscience of Pleasure
Inferring Our Desires
Heading Toward: No-Nonsense Metaethics
What is Metaethics?
Stage One of the sequence intends to solve or dissolve many of the central problems of mainstream metaethics. Stage One includes this post and a few others to come later. This is the solution to “much of metaethics” that I promised earlier. The “much of” refers to mainstream metaethics, not to Yudkowskian metaethics.
Stage Two of the sequence intends to catch everybody up with the progress on Yudkowskian metaethics that has been made by a few particular brains (mostly at SI) in the last few years but hasn’t been written down anywhere yet.
Stage Three of the sequence intends to state the open problems of Yudkowskian metaethics as clearly as possible so that rationalists can make incremental progress on them, à la Gowers’ Polymath Project or Hilbert’s problems. (Unfortunately, problems in metaethics are not as clearly defined as problems in math.)
Same here.
Looking back at your posts in this sequence so far, it seems like it’s taken you four posts to say “Philosophers are confused about meta-ethics, often because they spend a lot of time disputing definitions.” I guess they’ve been well-sourced, which is worth something. But it seems like we’re still waiting on substantial new insights about metaethics, sadly.
I admit it’s not very fun for LW regulars, but a few relatively short and simple posts is probably the bare minimum you can get away with while still potentially appealing to bright philosopher or academic types, who will be way more hesitant than your typical contrarian to dismiss an entire field of philosophy as not even wrong. I think Luke’s doing a decent job of making his posts just barely accessible/interesting to a very wide audience.
No, he said quite a lot more. E.g. why philosophers do that, why it is a bad thing, and what to do about it if we don’t want to fall into the same trap. This is all necessary groundwork for his final argument.
If the state of metaethics were such that most people would already agree on these fundamentals then you would have a point, but lukeprog’s premise is that it’s not.
Seeing as lots of people seemed to benefit even from the ‘What is Metaethics’ post, I’m not too worried that LW regulars won’t learn much from a few of the posts in this series. If you already grok ‘Austere Metaethics’, then you’ll have to wait a few posts for things to get interesting. :)
An interesting phenomenon I’ve noticed recently is that sometimes words do have short exact definitions that exactly coincide with common usage and intuition. For example, after Gettier scenarios ruined the definition of knowledge as “justified true belief”, philosophers found a new definition:
One knows P if one believes P, one would always believe P if it were true, and one would never believe P if it were false
(where “always” and “never” are defined to be some appropriate significance level)
Now it seems to me that this definition completely nails it. There’s not one scenario I can find where this definition doesn’t return the correct answer. (EDIT: Wrong! See great-grandchild by Tyrrell McAllister) I now feel very silly for saying things like “‘Knowledge’ is a fuzzy concept, hard to carve out of thingspace, there’s always going to be some scenario that breaks your definition.” It turns out that it had a nice definition all along.
It seems like there is a reason why words tend to have short definitions: the brain can only run short algorithms to determine whether an instance falls into the category or not. All you’ve got to do to write the definition is to find this algorithm.
Yep. Another case in point of the danger of replying, “Tell me how you define X, and I’ll tell you the answer” is Parfit in Reasons and Persons concluding that whether or not an atom-by-atom duplicate constructed from you is “you” depends on how you define “you”. Actually it turns out that there is a definite answer and the answer is knowably yes, because everything Parfit reasoned about “indexical identity” is sheer physical nonsense in a world built on configurations and amplitudes instead of Newtonian billiard balls.
PS: Very Tarskian and Bayesian of them, but are you sure they didn’t say, “A belief in X is knowledge if one would never have it whenever not-X”?
I’m thinking of Robert Nozick’s definition. He states his definition thus:
P is true
S believes that P
If it were the case that (not-P), S would not believe that P
If it were the case that P, S would believe that P
(I failed to remember condition 1, since 2 & 3 ⇒ 1 anyway)
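For concreteness, here is a minimal sketch (my own illustration, not anything from Nozick or the comments here) of the four conditions as a predicate over a toy model of possible worlds. The counterfactuals are cashed out by a hypothetical closest function supplied by the caller, which returns the nearest worlds satisfying a given condition:

```python
# Toy sketch of Nozick's four "tracking" conditions.  All of the inputs
# (p, believes, actual, closest) are hypothetical stand-ins the caller builds.
def nozick_knows(p, believes, actual, closest):
    """True iff the agent counts as knowing p at `actual` under the four criteria."""
    # 1. P is true.   2. S believes that P.
    if not (p(actual) and believes(actual)):
        return False
    # 3. If it were the case that not-P, S would not believe that P:
    #    in the closest not-P worlds, the agent does not believe P.
    if any(believes(w) for w in closest(lambda w: not p(w))):
        return False
    # 4. If it were the case that P, S would believe that P:
    #    in the closest P worlds, the agent does believe P.
    return all(believes(w) for w in closest(p))
```

Much of the back-and-forth below turns on how the counterfactuals in conditions 3 and 4 (here, the closest function) ought to be evaluated.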
There is a reason why the Gettier rabbit-hole is so dangerous. You can always cook up an improbable counterexample to any definition.
For example, here is a counterexample to Nozick’s definition as you present it. Suppose that I have irrationally decided to believe everything written in a certain book B and to believe nothing not written in B. Unfortunately for me, the book’s author, a Mr. X, is a congenital liar. He invented almost every claim in the book out of whole cloth, with no regard for the truth of the matter. There was only one exception. There is one matter on which Mr. X is constitutionally compelled to write and to write truthfully: the color of his mother’s socks on the day of his birth. At one point in B, Mr. X writes that his mother was wearing blue socks when she gave birth to him. This claim was scrupulously researched and is true. However, there is nothing in the text of B to indicate that Mr. X treated this claim any differently from all the invented claims in the book.
In this story, I am S, and P is “Mr. X’s mother was wearing blue socks when she gave birth to him.” Then:
P is true. (Mr. X’s mother really was wearing blue socks.)
S believes that P. (Mr. X claimed P in B, and I believe everything in B.)
If it were the case that (not-P), S would not believe that P. (Mr. X only claimed P in B because that was what his scrupulous research revealed. Had P not been true, Mr. X’s research would not have led him to believe it. And, since he is incapable of lying about this matter, he would not have put P in B. Therefore, since I don’t believe anything not in B, I would not have come to believe P.)
If it were the case that P, S would believe that P. (Mr. X was constitutionally compelled to write truthfully about what the color of his mother’s socks was when he was born. In all possible worlds in which his mother wore blue socks, Mr. X’s scrupulous research would have discovered it, and Mr. X would have reported it in B, where I would have read it, and so believed it.)
And yet, the intuitions on which Gettier problems play would say that I don’t know P. I just believe P because it was in a certain book, but I have no rational reason to trust anything in that book.
ETA: And here’s a counterexample from the other direction — that is, an example of knowledge that fails to meet Nozick’s criteria.
Suppose that you sit before an upside-down cup, under which there is a ping-pong ball that has been painted some color. Your job is to learn the color of the ping-pong ball.
You employ the following strategy: You flip a coin. If the coin comes up heads, you lift up the cup and look at the ping-pong ball, noting its color. If the coin comes up tails, you just give up and go with the ignorance prior.
Suppose that, when you flip the coin, it comes up heads. Accordingly, you look at the ping-pong ball and see that it is red. Intuitively, we would say that you know that the ping-pong ball is red.
Nonetheless, we fail to meet Nozick’s criterion 4. Had the coin come up tails, you would not have lifted the cup, so you would not have come to believe that the ball is red, even if this were still true.
Wham! Okay, I’m reverted to my old position. “Knowledge” is a fuzzy word.
ETA: Or at least a position of uncertainty. I need to research how counterfactuals work.
Yes. An excellent illustration of ‘the Gettier rabbit-hole.’
There is an entire chapter in Pearl’s Causality book devoted to the rabbit-hole of defining what ‘actual cause’ means. (Note: the definition given there doesn’t work, and there is a substantial literature discussing why and proposing fixes).
The counterargument to your post is that some seemingly fuzzy concepts actually have perfect intuitive consensus (e.g. almost everyone will classify any example as either concept X or not concept X the same way). This seems to be the case with ‘actual cause.’ As long as intuitive consensus continues to hold, the argument goes, there is hope of a concise logical description of it.
Maybe the concept of “infinity” is a sort of success story. People said all sorts of confused and incompatible things about infinity for millennia. Then finally Cantor found a way to work with it sensibly. His approach proved to be robust enough to survive essentially unchanged even after the abandonment of naive set theory.
But even that isn’t an example of philosophers solving a problem with conceptual analysis in the sense of the OP.
Thanks for the Causality heads-up.
Can you name an example or two?
Well, as I said, ‘actual cause’ appears to be one example. The literature is full of little causal stories where most people agree that something is an actual cause of something else in the story—or not. Concepts which have already been formalized include concepts which are both used colloquially in “everyday conversation” and precisely in physics (e.g. weight/mass).
One could argue that ‘actual cause’ is in some sense not a natural concept, but it’s still useful in the sense that formalizing the algorithm humans use to decide ‘actual cause’ problems can be useful for automating certain kinds of legal reasoning.
The Cyc project is a (probably doomed) example of a rabbit-hole project to construct an ontology of common sense. Lenat has been in that rabbit-hole for 27 years now.
Now, if only someone would give me a hand out of this rabbit-hole before I spend all morning in here ;).
Well, of course Bayesianism is your friend here. Probability theory elegantly supersedes the qualitative concepts of “knowledge”, “belief” and “justification” and, together with an understanding of heuristics and biases, nicely dissolves Gettier problems, so that we can safely call “knowledge” any assignment of high probability to a proposition that turns out to be true.
For example, take the original Gettier scenario. Since Jones has 10 coins in his pocket, P(man with 10 coins gets job) is bounded from below by P(Jones gets job). Hence any information that raises P(Jones gets job) necessarily raises P(man with 10 coins gets job) to something even higher, regardless of whether (Jones gets job) turns out to be true.
The psychological difficulty here is the counterintuitiveness of the rule P(A or B) >= P(A), and is in a sense “dual” to the conjunction fallacy. Just as one has to remember to subtract probability as burdensome details are introduced, one also has to remember to add probability as the reference class is broadened. When Smith learns the information suggesting Jones is the favored candidate, it may not feel like he is learning information about the set of all people with 10 coins in their pocket, but he is.
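Stated as a formula (my own restatement of the point above, with A = “Jones gets the job”, B = “some other man with ten coins in his pocket gets the job”, and E = Smith’s evidence about Jones):

$$P(A \lor B) \ge P(A), \qquad\text{and likewise}\qquad P(A \lor B \mid E) \ge P(A \mid E).$$

So whatever evidence makes A highly probable makes the disjunction at least as probable, whether or not A itself turns out to be true.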
In your example of the book by Mr. X, we can observe that, because Mr. X was constitutionally compelled to write truthfully about his mother’s socks, your belief about that is legitimately entangled with reality, even if your other beliefs aren’t.
I agree that, with regard to my own knowledge, I should just determine the probability that I assign to a proposition P. Once I conclude that P has a high probability of being true, why should I care whether, in addition, I “know” P in some sense?
Nonetheless, if I had to develop a coherent concept of “knowledge”, I don’t think that I’d go with “‘knowledge’ [is] any assignment of high probability to a proposition that turns out to be true.” The crucial question is, who is assigning the probability? If it’s my assignment, then, as I said, I agree that, for me, the question about knowledge dissolves. (More generally, the question dissolves if the assignment was made according to my prior and my cognitive strategies.)
But Getteir problems are usually about some third person’s knowledge. When do you say that they know something? Suppose that, by your lights, they have a hopelessly screwed-up prior — say, an anti-Laplacian prior. So, they assign high probability to all sorts of stupid things for no good reason. Nonetheless, they have enough beliefs so that there are some things to which they assign high probability that turn out to be true. Would you really want to say that they “know” those things that just happen to be true?
That is essentially what was going on in my example with Mr. X’s book. There, I’m the third person. I have the stupid prior that says that everything in B is true and everything not in B is false. Now, you know that Mr. X is constitutionally compelled to write truthfully about his mother’s socks. So you know that reading B will legitimately entangle my beliefs with reality on that one solitary subject. But I don’t know that fact about Mr. X. I just believe everything in B. You know that my cognitive strategy will give me reliable knowledge on this one subject. But, intuitively, my epistemic state seems so screwed up that you shouldn’t say that I know anything, even though I got this one thing right.
ETA: Gah. This is what I meant by “down the rabbit-hole”. These kinds of conversations are just too fun :). I look forward to your reply, but it will be at least a day before I reply in turn.
ETA: Okay, just one more thing. I just wanted to say that I agree with your approach to the original Gettier problem with the coins.
If you want to set your standard for knowledge this high, I would argue that you’re claiming nothing counts as knowledge since no one has any way to tell how good their priors are independently of their priors.
I’m not sure what you mean by a “standard for knowledge”. What standard for knowledge do you think that I have proposed?
You’re talking about someone trying to determine whether their own beliefs count as knowledge. I already said that the question of “knowledge” dissolves in that case. All that they should care about are the probabilities that they assign to propositions. (I’m not sure whether you agree with me there or not.)
But you certainly can evaluate someone else’s prior. I was trying to explain why “knowledge” becomes problematic in that situation. Do you disagree?
I think that while what you define carves out a nice lump of thingspace, it fails to capture the intuitive meaning of the word ‘knowledge’. If I guess randomly that it will rain tomorrow and turn out to be right, then it doesn’t fit intuition at all to say I knew that it would rain. This is why the traditional definition is “justified true belief” and that is what Gettier subverts.
You presumably already know all this. The point is that Tyrrell McAllister is trying (to avoid trying) to give a concise summary of the common usage of the word knowledge, rather than to give a definition that is actually useful for doing probability or solving problems.
Here, let me introduce you to my friend Taboo...
;)
That’s a very interesting thought. I wonder what leads you to it.
With the caveat that I have not read all of this thread:
Are you basing this on the fact that so far, all attempts at analysis have proven futile? (If so, maybe we need to come up with more robust conditions.)
Do you think that the concept of ‘knowledge’ is inherently vague, similar (but not identical) to the way terms like ‘tall’ and ‘bald’ are?
Do you suspect that there may be no fact of the matter about what ‘knowledge’ is, just like there is no fact of the matter about the baldness of the present King of France? (If so, then how do competent speakers apply the verb ‘to know’ so well?)
If we could say with confidence that conceptual analysis of knowledge is a futile effort, I think that would be progress. And of course the interesting question would be why.
It may simply be that non-technical, common terms like ‘vehicle’ and ‘knowledge’ (and of course others like ‘table’) can’t be conceptually analyzed.
Also, experimental philosophy could be relevant to this discussion.
Let me expand on my comment a little: Thinking about the Gettier problem is dangerous in the same sense in which looking for a direct proof of the Goldbach conjecture is dangerous. These two activities share the following features:
When the problem was first posed, it was definitely worth looking for solutions. One could reasonably hope for success. (It would have been pretty nice if someone had found a solution to the Gettier problem within a year of its being posed.)
Now that the problem has been worked on for a long time by very smart people, you should assign very low probability to your own efforts succeeding.
Working on the problem can be addictive to certain kinds of people, in the sense that they will feel a strong urge to sink far more work into the problem than their probability of success can justify.
Despite the low probability of success for any given seeker, it’s still good that there are a few people out there pursuing a solution.
But the rest of us should spend on our time on other things, aside from the occasional recreational jab at the problem, perhaps.
Besides, any resolution of the problem will probably result from powerful techniques arising in some unforeseen quarter. A direct frontal assault will probably not solve the problem.
So, when I called the Gettier problem “dangerous”, I just meant that, for most people, it doesn’t make sense to spend much time on it, because they will almost certainly fail, but some of us (including me) might find it too strong a temptation to resist.
Contemporary English-speakers must be implementing some finite algorithm when they decide whether their intuitions are happy with a claim of the form “Agent X knows Y”. If someone wrote down that algorithm, I suppose that you could call it a solution to the Gettier problem. But I expect that the algorithm, as written, would look to us like a description of some inscrutably complex neurological process. It would not look like a piece of 20th century analytic philosophy.
On the other hand, I’m fairly confident that some piece of philosophy text could dissolve the problem. In short, we may be persuaded to abandon the intuitions that lie at the root of the Gettier problem. We may decide to stop trying to use those intuitions to guide what we say about epistemic agents.
Both of your Gettier scenarios appear to confirm Nozick’s criteria 3 and 4 when the criteria are understood as criteria for a belief-creation strategy to be considered a knowledge-creation strategy applicable to a context outside of the contrived scenario. Taking your scenarios one by one.
You have described the strategy of believing everything written in a certain book B. This strategy fails to conform to Nozick’s criteria 3 and 4 when considered outside of the contrived scenario in which the author is compelled to tell the truth about the socks, and therefore (if we apply the criteria) is not a knowledge creation strategy.
There are actually two strategies described here, and one of them is followed conditional on events occurring in the implementation of the other. The outer strategy is to flip the coin to decide whether to look at the ball. The inner strategy is to look at the ball. The inner strategy conforms to Nozick’s criteria 3 and 4, and therefore (if we apply the criteria) is a knowledge creation strategy.
In both cases, the intuitive results you describe appear to conform to Nozick’s criteria 3 and 4 understood as described in the first paragraph. Nozick’s criteria 3 and 4 (understood as above) appear moreover to play a key role in making sense of our intuitive judgment in both the scenarios. That is, it strikes me as intuitive that the reason we don’t count the belief about the socks as knowledge is that it is the fruit of a strategy which, as a general strategy, appears to us to violate criteria 3 and 4 wildly, and only happens to satisfy them in a particular highly contrived context. And similarly, it strikes me as intuitive that we accept the belief about the color as knowledge because we are confident that the method of looking at the ball is a method which strongly satisfies criteria 3 and 4.
The problem with conversations about definitions is that we want our definitions to work perfectly even in the least convenient possible world.
So imagine that, as a third-person observer, you know enough to see that the scenario is not highly contrived — that it is in fact a logical consequence of some relatively simple assumptions about the nature of reality. Suppose that, for you, the whole scenario is in fact highly probable.
On second thought, don’t imagine that. For that is exactly the train of thought that leads to wasting time on thinking about the Getteir problem ;).
A large part of what was highly contrived was your selection of a particular true, honest, well-researched sentence in a book otherwise filled with lies, precisely because it is so unusual. In order to make it not contrived, we must suppose something like, the book has no lies, the book is all truth. Or we might even need to suppose that every sentence in every book is the truth. In such a world, then the contrivedness of the selection of a true sentence is minimized.
So let us imagine ourselves into a world in which every sentence in every book is true. And now we imagine someone who selects a book and believes everything in it. In this world, this strategy, generalized (to pick a random book and believe everything in it) becomes a reliable way to generate true belief. In such a world, I think it would be arguable to call such a strategy a genuine knowledge-creation strategy. In any case, it would depart so radically from your scenario (since in your scenario everything in the book other than that one fact is a lie) that it’s not at all clear how it would relate to your scenario.
I’m not sure that I’m seeing your point. Are you saying that
One shouldn’t waste time on trying to concoct exceptionless definitions — “exceptionless” in the sense that they fit our intuitions in every single conceivable scenario. In particular, we shouldn’t worry about “contrived” scenarios. If a definition works in the non-contrived cases, that’s good enough.
… or are you saying that
Nozick’s definition really is exceptionless. In every conceivable scenario, and for every single proposition P, every instance of someone “knowing” that P would conform to every one of Nozick’s criteria (and conversely).
… or are you saying something else?
Nozick apparently intended his definition to apply to single beliefs. I applied it to belief-creating strategies (or procedures, methods, mechanisms) rather than to individual beliefs. These strategies are to be evaluated in terms of their overall results if applied widely. Then I noticed that your two Gettier scenarios involved strategies which, respectively, violated and conformed to the definition as I applied it.
That’s all. I am not drawing conclusions (yet).
I’m reminded of the Golden Rule. Since I would like it if everyone would execute “if (I am Jiro) then rob”, I should execute that as well.
It’s actually pretty hard to define what it means for a strategy to be exceptionless, and it may be subject to a grue/bleen paradox.
I thought it sounded contrived at first, but then remembered there are tons of people who pick a book and believe everything they read in it, reaching many false conclusions and a few true ones.
I always thought the “if it were the case” thing was just a way of sweeping the knowledge problem under the rug by restricting counterexamples to “plausible” things that “would happen”. It gives the appearance of a definition of knowledge, while simply moving the problem into the “plausibility” box (which you need to use your knowledge to evaluate).
I’m not sure it’s useful to try to define a binary account of knowledge anyway though. People just don’t work like that.
A different objection, following Eliezer’s PS, is that:
Between me and a red box, there is a wall with a hole. I see the red box through the hole, and therefore know that the box is red. I reason, however, that I might have instead chosen to sit somewhere else, and I would not have been able to see the red box through the hole, and would not believe that the box is red.
Or more formally: If I know P, then I know (P or Q) for all Q, but:
P ⇒ Believes (P)
does not imply
(P v Q) ⇒ Believes (P v Q)
This is a more realistic, and hence better, version of the counterexample that I gave in my ETA to this comment.
I’m genuinely surprised. Condition 4 seems blatantly unnecessary and I had thought analytic philosophers (and Nozick in particular) more competent than that. Am I missing something?
Your hunch is right. Starting on page 179 of Nozick’s Philosophical Explanations, he addresses counterexamples like the one that Will Sawin proposed. In response, he gives a modified version of his criteria. As near as I can tell, my first counterexample still breaks it, though.
Yes. In the next post, I’ll be naming some definitions for moral terms that should be thrown out, for example those which rest on false assumptions about reality (e.g. “God exists.”)
I don’t think the brain usually makes this determination by looking at things that are much like definitions.
I think this isn’t the usual sense of ‘knowledge’. It’s too definite. Do I know there’s a website called less wrong, for example? Not for sure. It might have ceased to exist while I’m typing this—I have no present confirmation. And of course any confirmation only lasts as long as you look at it.
Knowledge is that state where one can make predictions about a subject which are better than chance. Of course this definition has its own flaws, doubtless....
Hey Luke,
Thanks again for your work. You are by far the greatest online teacher I’ve ever come across (though I’ve never seen you teach face-to-face). You are concise, clear, direct, empathetic, extremely thorough, tactful and accessible. I am in awe of your abilities. You take the fruit that is at the top of the tree and gently place it into my straining arms! Sorry for the exuberant worship but I really want to express my gratitude for your efforts. They definitely aren’t wasted on me.
Some thoughts on this and related LW discussions. They come a bit late—apols to you and commentators if they’ve already been addressed or made in the commentary:
1) Definitions (this is a biggie).
There is a fair bit of confusion on LW, it seems to me, about just what definitions are and what their relevance is to philosophical and other discussion. Here’s my understanding—please say if you think I’ve gone wrong.
If in the course of philosophical discussion, I explicitly define a familiar term, my aim in doing so is to remove the term from debate—I fix the value of a variable to restrict the problem. It’d be good to find a real example here, but I’m not convinced defining terms happens very often in philosophical or other debate. By way of a contrived example, one might want to consider, in evaluating some theory, the moral implications of actions made under duress (a gun held to the head) but not physically initiated by an external agent (a jostle to the arm). One might say, “Define ‘coerced action’ to mean any action not physically initiated but made under duress” (or more precise words to that effect). This done, it wouldn’t make sense simply to object that my conclusion regarding coerced actions doesn’t apply to someone physically pushed from behind—I have stipulated for the sake of argument that I’m not talking about such cases. (In this post, you distinguish stipulation and definition—do you have in mind a distinction I’m glossing over?)
Contrast this to the usual case for conceptual analyses, where it’s assumed there’s a shared concept (‘good’, ‘right’, ‘possible’, ‘knows’, etc.), and what is produced is meant to be a set of necessary and sufficient conditions that capture the concept. Such an analysis is not a definition. Regarding such analyses, typically one can point to a particular thing and say, e.g., “Our shared concept includes this specimen, it lacks a necessary condition, therefore your analysis is mistaken”—or, maybe, “Intuitively, this specimen falls under our concept, it lacks...”. Such a response works only if there is broad agreement that the specimen falls under the concept. Usually this works out to be the case.
I haven’t read the Jackson book, so please do correct me if you think I’ve misunderstood, but I take it something like this is his point in the paragraphs you quote. Tom and Jack can define ‘right action’ to mean whatever they want it to. In so doing, however, we cease to have any reason to think they mean by the term what we intuitively do. Rather, Jackson is observing, what Tom and Jack should be doing is saying that rightness is that thing (whatever exactly it is) which our folk concepts roughly converge on, and taking up the task of refining our understanding from there—no defining involved.
You say,
Jackson supposes that we can pick out which platitudes of moral discourse matter, and how much they matter, for determining the meaning of moral terms—despite the fact that individual humans, and especially groups of humans, are themselves confused about the meanings of moral terms, and which platitudes of moral discourse should ‘matter’ in fixing their meaning.
Well, not quite. The point I take it is rather that there simply are ‘folk’ platitudes which pick-out the meanings of moral terms—this is the starting point. ‘Killing people for fun is wrong’, ‘Helping elderly ladies across the street is right’ etc, etc. These are the data (moral intuitions, as usually understood). If this isn’t the case, there isn’t even a subject to discuss. Either way, it has nothing to do with definitions.
Confusion about definitions is evident in the quote from the post you link to. To re-quote:
...the first person is speaking as if ‘sound’ means acoustic vibrations in the air; the second person is speaking as if ‘sound’ means an auditory experience in a brain.
Possibly the problem is that ‘sound’ has two meanings, and the disputants each are failing to see that the other means something different. Definitions are not relevant here, meanings are. (Gratuitous digression: what is “an auditory experience in a brain”? If this means something entirely characterizable in terms of neural events, end of story, then plausibly one of the disputants would say this does not capture what he means by ‘sound’ - what he means is subjective and ineffable, something neural events aren’t. He might go on to wonder whether that subjective, ineffable thing, given that it is apparently created by the supposedly mind-independent event of the falling of a tree, has any existence apart from his self (not to be confused with his brain!). I’m not defending this view, just saying that what’s offered is not a response but rather a simple begging of the question against it. End of digression.)
2) In your opening section you produce an example meant to show conceptual analysis is silly. Looks to me more like a silly attempt at an example of conceptual analysis. If you really want to make your case, why not take a real example of a philosophical argument -preferably one widely held in high regard at least by philosophers? There’s lots of ’em around.
In your section The trouble with conceptual analysis, you finally explain,
The trouble is that philosophers often take this “what we mean by” question so seriously that thousands of pages of debate concern which definition to use rather than which facts are true and what to anticipate.
As explained above, philosophical discussion is not about “which definition to use”; it’s about (roughly, and among other things) clarifying our concepts. The task is difficult but worthwhile because the concepts in question are important but subtle.
If you don’t have the patience to do philosophy, or you don’t think it’s of any value, by all means do something else: argue about facts and anticipations, whatever precisely that may involve. Just don’t think that in doing this latter thing you’ll address the question philosophy is interested in, or that you’ve said anything at all so far to show philosophy isn’t worth doing. In this connection, one of the real benefits of doing philosophy is that it encourages precision and attention to detail in thinking. You say Eliezer Yudkowsky “...advises against reading mainstream philosophy because he thinks it will ‘teach very bad habits of thought that will lead people to be unable to do real work.’” The original quote continues, “...assume naturalism! Move on! NEXT!” Unfortunately Eliezer has a bad habit of making unclear and undefended or question-begging assertions, and this is one of them. What are the bad habits, and how does philosophy encourage them? And what precisely is meant by ‘naturalism’? To make the latter assertion and simultaneously to eschew the responsibility of articulating what this commits you to is to presume you can both have your cake and eat it too. This may work in blog posts; it wouldn’t pass in serious discussion.
(Unlike some on this blog, I have not slavishly pored through Eliezer’s every post. If there is somewhere a serious discussion of the meaning of ‘naturalism’ which shows how the usual problems with normative concepts like ‘rational’ can successfully be navigated, I will withdraw this remark).
You’re tacitly defining philosophy as an endeavor that “doesn’t involve facts or anticipations,” that is, as something not worth doing in the most literal sense. Such “philosophy” would be a field defined to be useless for guiding one’s actions. Anything that is useless for guiding my actions is, well, useless.
The question of what is worth doing is of course profoundly philosophical. You have just assumed an answer: that what is worth doing is achieving your aims efficiently and what is not worth doing is thinking about whether you have good aims, or which different aims you should have. (And anything that influences your goals will most certainly influence your expected experiences).
We’ve been over this: either “good aims” and “aims you should have” imply some kind of objective value judgment, which is incoherent, or they merely imply ways to achieve my final aims more efficiently, and we are back to my claim above as that is included under the umbrella of “guiding my actions.”
I think Peterdjones’s answer hits it on the head. I understand you’ve thrashed-out related issues elsewhere, but it seems to me your claim that the idea of an objective value judgment is incoherent would again require doing quite a bit of philosophy to justify.
Really I meant to be throwing the ball back to lukeprog to give us an idea of what the ‘arguing about facts and anticipations’ alternative is, if not just philosophy pretending not to be. I could have been more clear about this. Part of my complaint is the wanting to have it both ways. For example, the thinking in the post anticipations would presumably be taken not to be philosophy, but it sounds a whole lot to me like a quick and dirty advocacy of anti-realism. If LWers are serious about this idea, they really should look into its implications if they want to avoid inadvertent contradictions in their world-views. That means doing some philosophy.
As far as objective value, I simply don’t understand what anyone means by the term. And I think lukeprog’s point could be summed up as, “Trying to figure out how each discussant is defining their terms is not really ‘doing philosophy’; it’s just the groundwork necessary for people not to talk past each other.”
As far as making beliefs pay rent, a simpler way to put it is: If you say I should believe X but I can’t figure out what anticipations X entails, I will just respond, “So what?”
To unite the two themes: The ultimate definition would tell me why to care.
In the space of all possible meta-ethics, some meta-ethics are cooperative, and other meta-ethics are not. This means that if you can choose which metaethics to spread to society, you stand a better chance of achieving your own goals if you spread cooperative metaethics. And cooperative metaethics is what we call “morality”, by and large.
It’s “Do unto others...”, but abstracted a bit, so that we really mean “Use the reasoning to determine what to do unto others, that you would rather they used when deciding how to do unto you.”
Omega puts you in a room with a big red button. “Press this button and you get ten dollars but another person will be poisoned to slowly die. If you don’t press it I punch you on the nose and you get no money. They have a similar button which they can use to kill you and get 10 dollars. You can’t communicate with them. In fact they think they’re the only person being given the option of a button, so this problem isn’t exactly like Prisoner’s dilemma. They don’t even know you exist or that their own life is at stake.”
“But here’s the offer I’m making just to you, not them. I can imprint you both with the decision theory of your choice, Amanojack; of course, if you identify yourself in your decision theory, they’ll be identifying themselves.
“Careful though: This is a one time offer, and then I may put both of you to further different tests. So choose the decision theory that you want both of you to have, and make it abstract enough to help you survive, regardless of specific circumstances.”
Given the above scenario, you’ll end up wanting people to choose protecting the life of strangers more than picking 10 dollars.
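A minimal sketch of that argument in code (my own illustration; the utility numbers and names are invented, and only need to respect the ordering “my life is worth far more than ten dollars or a punch”). Because Omega installs the chosen decision theory in both agents, picking it is effectively picking one policy for both buttons:

```python
# Hypothetical utilities for the button scenario above (numbers are made up).
UTILITY = {"alive": 1000.0, "poisoned": -1000.0, "ten_dollars": 10.0, "punched": -5.0}

def my_outcome(shared_policy):
    """My total utility when both agents run the same policy: 'press' or 'refrain'."""
    if shared_policy == "press":
        # I press my button (collect $10), and the other agent presses theirs (I am poisoned).
        return UTILITY["ten_dollars"] + UTILITY["poisoned"]
    # Neither of us presses: I take the punch and get no money, but I stay alive.
    return UTILITY["punched"] + UTILITY["alive"]

assert my_outcome("refrain") > my_outcome("press")
```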
I would indeed prefer it if other people had certain moral sentiments. I don’t think I ever suggested otherwise.
Not quite my point. I’m not talking about what your preferences would be. That would be subjective, personal. I’m talking about what everyone’s meta-ethical preferences would be, if self-consistent, and abstracted enough.
My argument is essentially that objective morality can be considered the position in meta-ethical-space which if occupied by all agents would lead to the maximization of utility.
That makes it objectively different from other points in meta-ethical space (because it refers to all the agents, not some of them, or one of them), and so it can be considered to lead to an objectively better morality.
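One way to make this precise (a sketch of the claim, not a formalization the commenter commits to), writing M for the space of candidate metaethics and U_a for agent a’s utility function:

```latex
M^{*} \;=\; \arg\max_{m \in \mathcal{M}} \;\sum_{a \in \mathrm{Agents}} U_{a}\big(\text{the world in which every agent adopts } m\big)
```

The claimed objectivity is then just the fact that the maximization sums over all agents rather than privileging any one of them.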
Then why not just call it “universal morality”?
It’s called that too. Are you just objecting as to what we are calling it?
Yeah, because calling it that makes it pretty hard to understand. If you just mean Collective Greatest Happiness Utilitarianism, then that would be a good name. Objective morality can mean way too many different things. This way at least you’re saying in what sense it’s supposed to be objective.
As for this collectivism, though, I don’t go for it. There is no way to know another’s utility function, no way to compare utility functions among people, etc. other than subjectively. And who’s going to be the person or group that decides? SIAI? I personally think all this collectivism is a carryover from the idea of (collective) democracy and other silly ideas. But that’s a debate for another day.
I’m getting a bad vibe here, and no longer feel we’re having the same conversation.
“Person or group that decides”? Who said anything about anyone deciding anything? My point was that perhaps this is the meta-ethical position that every rational agent individually converges to. So nobody “decides”, or everyone does. And if they don’t reach the same decision, then there’s no single objective morality, but even so perhaps there’s a limited set of coherent metaethical positions, like two or three of them.
I think my post was inspired more by TDT solutions to Prisoner’s dilemma and Newcomb’s box, a decision theory that takes into account the copies/simulations of its own self, or other problems that involve humans getting copied and needing to make a decision in blind coordination with their copies.
I imagined systems that are not wholly copied, but rather share just the module that determines the meta-ethical constraints, and tried to figure out in which directions such systems would try to modify themselves, in the knowledge that other such systems would similarly modify themselves.
You’re right, I think I’m confused about what you were talking about, or I inferred too much. I’m not really following at this point either.
One thing, though, is that you’re using meta-ethics to mean ethics. Meta-ethics is basically the study of what people mean by moral language, like whether ought is interpreted as a command, as God’s will, as a way to get along with others, etc. That’ll tend to cause some confusion. A good heuristic is, “Ethics is about what people ought to do, whereas meta-ethics is about what ought means (or what people intend by it).”
I’m not.
An ethic may say:
I should support same-sex marriage. (SSM-YES)
or perhaps:
I should oppose same-sex marriage. (SSM-NO)
The reason for this position is the meta-ethic:
e.g.
Because I should act to increase average utility. (UTIL-AVERAGE)
Because I should act to increase total utility. (UTIL-TOTAL)
Because I should act to increase the total amount of freedom. (FREEDOM-GOOD)
Because I should act to increase average societal happiness. (SOCIETAL-HAPPYGOOD-AVERAGE)
Because I should obey the will of our voters. (DEMOCRACY-GOOD)
Because I should do what God commands. (OBEY-GOD).
But some metaethical positions are invalid because of false assumptions (e.g. God’s existence). Other positions may not be abstract enough to possibly become universal or to apply to all situations. Some combinations of ethics and metaethics may be the result of other factual or reasoning mistakes (e.g. someone thinks SSM will harm society, but it ends up helping it, even by the person’s own measure).
So, NO, I don’t necessarily speak about Collective Greatest Happiness Utilitarianism. I’m NOT talking about a specific metaethic, not even necessarily a consequentialist metaethic (let alone a “Greatest Happiness Utilitarianism”). I’m speaking about the hypothetical point in metaethical space that everyone would hypothetically prefer everyone to have: an Attractor of metaethical positions.
That’s very contestable. It has frequently been argued here that preferences can be inferred from behaviour; it’s also been argued that introspection (if that is what you mean by “subjectively”) is not a reliable guide to motivation.
This is the whole demonstrated preference thing. I don’t buy it myself, but that’s a debate for another time. What I mean by subjectively is that I will value one person’s life more than another person’s life, or I could think that I want that $1,000,000 more than a rich person wants it, but that’s just all in my head. To compare utility functions and work from demonstrated preference usually—not always—is a precursor to some kind of authoritarian scheme. I can’t say there is anything like that coming, but it does set off some alarm bells. Anyway, this is not something I can substantiate right now.
Attempts to reduce real, altruistic ethics back down to selfish/instrumental ethics tend not to work that well, because the gains from co-operation are remote, and there are many realistic instances where selfish action produces immediate rewards (cf. the Prudent Predator objection to Rand’s egoistic ethics).
OTOH, since many people are selfish, they are made to care by having legal and social sanctions against excessively selfish behaviour.
I wasn’t talking about altruistic ethics, which can lead someone to sacrifice their life to prevent someone else getting a bruise, and thus would be almost as disastrous as selfishness if widespread. I was talking about cooperative ethics, which overlaps with but doesn’t equal altruism, the same as it overlaps with but doesn’t equal selfishness.
The difference between morality and immorality is that morality can, at its most abstract possible level, be cooperative, and immorality can’t.
This by itself isn’t a reason that can force someone to care—you can’t make a rock care about anything, but that’s not a problem with your argument. But it’s something that leads to different expectations about the world, namely what Amanojack was asking for.
In a world populated by beings whose beliefs approach objective morality, I expect more cooperation and mutual well-being, all other things being equal. In a world whose beliefs don’t approach it, I expect more war and other devastation.
Although it usually doesn’t.
I think that your version of altruism is a straw man, and that what most people mean by altruism isn’t very different from co-operation.
Or, as I call it, universalisability.
That argument doesn’t have to be made at all. Morality can stand as a refutation of the claim that anticipation of experience is of ultimate importance. And it can be made differently: if you rejig your values, you can expect to anticipate different experiences; it can be a self-fulfilling prophecy and not merely passive anticipation.
There is an argument from self interest, but it is tertiary to the two arguments I mentioned above.
Wrote a reply off-line and have been lapped several times (as usual). What Peterdjones says in his responses makes a lot of sense to me. I took a slightly different tack, which is maybe moot given your admission to being a solipsist:
-though the apparent tension in being a solipsist who argues gets to the root of the issue.
For what it may be worth:
I’m assuming you subscribe to what you consider to be a rigorously scientific world-view, and you consider such a world-view makes no place for objective values—you can’t fit them in, hence no way to understand them.
From a rigorously scientific point of view, a human being is just a very complex, homeostatic electro-chemical system. It rattles about the surface of the earth governed by the laws of nature just like any other physical system. A thing considered thus (i.e. from a scientific point of view) is not ‘trying’ to do anything, has no beliefs, no preferences (just varying dispositions), no purposes, is neither rational nor irrational, and has no values. Natural science does not see right or wrong, full stop.
Some people think this is all there is, and that there is nothing useful to say about our conception of ourselves as beings with values (e.g., Paul Churchland). I disagree. A person cannot make sense of her/himself with just this scientific understanding, important though it is, because s/he has to make decisions: whether to vote left or right, be vegetarian or carnivore, spend time writing blog responses or mow the lawn, etc. Values can’t be made sense of from a scientific point of view, but we recognize and need them, so we have to make sense of them otherwise.
Thought of from this point of view, all values are in some sense objective, i.e. independent of you. There has to be a gap between value and actual behaviour for the value to be made sense of as such (if everything you do is right, there is no right).
Presently you are disagreeing with me about values. To me this says you think there’s a right and wrong of the matter, which applies to us both. This is an example of an objective value. It would take some work to spell out a parallel moral example, if this is what you have in mind, but given the right context I submit you would argue with someone about some moral principle (hope so, anyway).
Prima facie, values are objective. Maybe on closer inspection it can be shown in some sense they aren’t, but I submit the idea is not incoherent. And showing otherwise would take doing some philosophy.
Solipsism is an ontological stance: in short, “there is nothing out there but my own mind.” I am saying something slightly different: “To speak of there being something/nothing out there is meaningless to me unless I can see why to care.” Then again, I’d say this is tautological/obvious in that “meaning” just is “why it matters to me.”
My “position” (really a meta-position about philosophical positions) is just that language obscures what is going on. It may take a while to make this clear, but if we continue I’m sure it will be.
I’m not a naturalist. I’m not skeptical of “objective” because of such reasons; I am skeptical of it merely because I don’t know what the word refers to (unless it means something like “in accordance with consensus”). In the end, I engage in intellectual discourse in order to win, be happier, get what I want, get pleasure, maximize my utility, or whatever you’ll call it (I mean them all synonymously).
If after engaging in such discourse I am not able to do that, I will eventually want to ask, “So what? What difference does it make to my anticipations? How does this help me get what I want and/or avoid what I don’t want?”
Do you cross the road with your eyes shut? If not, you are assuming, like everyone else, that there are things out there which are terminally disutilitous.
Whose language? What language? If you think all language is a problem, what do you intend to replace it with?
It refers to the stuff that doesn’t go away when you stop believing in it.
Note the bold.
English, and all the rest that I know of.
Something better would be nice, but what of it? I am simply saying that language obscures what is going on. You may or may not find that insight useful.
If so, I suggest “permanent” as a clearer word choice.
I think that is rather drastic. Science may not accept beliefs and values as fundamental, but it can accept them as higher-level descriptions, cf. Dennett’s Intentional Stance.
Again, I find it incredible that natural facts have no relation to morality. Morality would be very different if women laid eggs or men had balls of steel.
To say that moral values are both objective and disconnected from physical fact implies that they exist in their own domain, which is where some people, with some justice, tend to balk.
For some value of “incoherent”. Personally, I find it useful to strike out the word and replace it with something more precise, such as “semantically meaningless”, “contradictory”, “self-undermining”, etc.
I take the position that while we may well have evolved with different values, they wouldn’t be morality. “Morality” is subjunctively objective. Nothing to do with natural facts, except insofar as they give us clues about what values we in fact did evolve with.
How do you know that the values we have evolved with are moral? (The claim that natural facts are relevant to moral reasoning is different to the claim that naturally-evolved behavioural instincts are ipso facto moral.)
I’m not sure what you want to know. I feel motivated to be moral, and the things that motivate thinking machines are what I call “values”. Hence, our values are moral.
But of course naturally-evolved values are not moral simply by virtue of being values. Morality isn’t about values, it’s about life and death and happiness and sadness and many other things beside.
I acknowledge this is a subject of lively debate. Still, I stick to the proposition that you can’t derive an ought from an is, and that this is what’s at stake here. Since you can’t make sense of a person as rational if it’s not the case there’s anything she ought or ought not to do (and I admit you may think this needs defending), natural science lacks the means to ascribe rationality. Now, if we’re talking about the social sciences, that’s another matter. There is a discontinuity between these and the purely natural sciences. I read Dennett many years ago, and thought something like this divide is what his different stances are about, but I’d be open to hear a different view.
I didn’t say this—just that from a purely scientific point of view, morality is invisible. From an engaged, subjective point of view, where morality is visible, natural facts are relevant.
Here’s another stab at it: natural science can in principle tell us everything there is to know about a person’s inner workings and dispositions, right down to what sounds she is likely to utter in what circumstances. It might tell someone she will make the sounds, e.g., ‘I ought to go to class’, in given circumstances. But no amount of knowledge of this kind will give her a reason to go to class (would you agree?). To get reasons (not to mention linguistic meaning and any intentional states) you need a subjective, i.e. non-scientific, point of view. The two views are incommensurable, but neither is dispensable: people need reasons.
I acknowledge this is a subject of lively debate. Still, I stick to the proposition that you can’t derive an ought from an is, and that this is what’s at stake here.
But much of the material on LW is concerned with rational oughts: a rational agent ought to maximise its utility function (its arbitrary set of goals) as efficiently as possible. Rational agents should win, in short. That seems to be an analytical truth arrived at by unpacking “rational”. Generally speaking, where you have rules, you have coulds and shoulds and couldn’ts and shouldn’ts. I have been trying to press that unpacking morality leads to a similar analytical truth: “a moral agent ought to adopt universalisable goals.”
“Oughts” in general appear wherever you have rules, which are often abstractly defined so that they apply to physical systems as well as anything else.
I think LWers would say there are facts about her utility function from which conclusions can be drawn about how she should maximise it (and how she would if she were rational).
I don’t see why. If a person or other system has goals and is acting to achieve those goals in an effective way, then their goals can be inferred from their actions.
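A minimal sketch of the kind of inference presumably meant here: given observed choices between pairs of options, tally which option wins and read off a preference ordering. The data and helper names are hypothetical.

```python
from collections import Counter
from itertools import chain

# Hypothetical observations: each pair is (chosen_option, rejected_option).
observed_choices = [
    ("apple", "pear"),
    ("apple", "pear"),
    ("pear", "banana"),
    ("apple", "banana"),
]

def inferred_preference_order(choices):
    """Rank options by how often they were chosen over an alternative."""
    wins = Counter(chosen for chosen, _ in choices)
    options = set(chain.from_iterable(choices))
    return sorted(options, key=lambda o: wins[o], reverse=True)

print(inferred_preference_order(observed_choices))  # ['apple', 'pear', 'banana']
```

Such a tally is only as good as its assumption that behaviour cleanly reveals preference; the same data are compatible with many combinations of beliefs and desires.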
I expressed myself badly. I agree entirely with this.
Again, I agree with this. The position I want to defend is just that if you confine yourself strictly to natural laws, as you should in doing natural science, rules and oughts will not get a grip.
And I want to persuade LWers
1) that facts about her utility functions aren’t naturalistic facts, as facts about her cholesterol level or about neural activity in different parts of her cortex, are,
and
2) that this is ok—these are still respectable facts, notwithstanding.
But having a goal is not a naturalistic property. Some people might say, eg, that an evolved, living system’s goal is to survive. If this is your thought, my challenge would be to show me what basic physical facts entail that conclusion.
And they are likely to riposte that facts about her UF are naturalistic just because they can be inferred from her behaviour. You seem to be in need of a narrow, stipulative definition of naturalistic.
You introduced the word “basic” there. It might be the case that goals disappear on a very fine-grained atomistic view of things (along with rules and structures and various other things). But that would mean that goals aren’t basic physical facts. Naturalism tends to be defined more epistemically than physicalism, so the inferability of UFs (or goals or intentions) from coarse-grained physical behaviour is a good basis for supposing them to be natural by that usage.
But this is false, surely. I take it that a fact about X’s UF might be something such as ‘X prefers apples to pears’. First, notice that X may also prefer his/her philosophy TA to his/her chemistry TA. X has different designs on the TA than on the apple. So, properly stated, preferences are orderings of desires, the objects of which are states of affairs rather than simple things (X desires that X eat an apple more than that X eat a pear). Second, to impute desires such as these requires also imputing beliefs (you observe the apple-gathering behaviour, naturalistically unproblematic, but you also need to impute to X the belief that the things gathered are apples; X might be picking the apples thinking they are pears). There’s any number of ways to attribute beliefs and desires in a manner consistent with the behaviour. No collection of merely naturalistic facts will constrain these. There have been lots of theories advanced which try, but the consensus, I think, is that there is no easy naturalistic solution.
Oh, that’s the philosopher’s definition of naturalistic. OTOH, you could just adopt the scientist’s version and scan their brain.
Well, alright, please tell me: what is a Utility Function, that it can be inferred from a brain scan? How’s this supposed to work, in broad terms?
What they generally mean is “not subjective”. You might object that non-subjective value is contradictory, but that is not the same as objecting that it is incomprehensible, since one has to understand the meanings of individual terms to see a contradiction.
As for anticipations: believing morality is objective entails that some of your beliefs may be wrong by objective standards, and believing it is subjective does not entail that. So the belief in moral objectivity could lead to a revision of your aims and goals, which will in turn lead to different experiences.
I’m not saying non-subjective value is contradictory, just that I don’t know what it could mean. To me “value” is a verb, and the noun form is just a nominalization of the verb, like the noun “taste” is a nominalization of the verb “taste.” Ayn Rand tried to say there was such a thing as objectively good taste, even of foods, music, etc. I didn’t understand what she meant either.
But before I would even want to revise my aims and goals, I’d have to anticipate something different than I do now. What does “some of your beliefs may be wrong by objective standards” make me anticipate that would motivate me to change my goals? (This is the same as the question in the other comment: What penalty do I suffer by having the “wrong” moral sentiments?)
I don’t see the force to that argument. “Believe” is a verb and “belief” is a nominalisation. But beliefs can be objectively right or wrong—if they belong to the appropriate subject area.
It is possible for aesthetics (and various other things) to be un-objectifiable whilst morality (and various other things) is objectifiable.
Why?
You should be motivated by a desire to get things right in general. The anticipation thing is just a part of that. It’s not an ultimate. But morality is an ultimate because there is no more important value than a moral value.
If there is no personal gain from morality, that doesn’t mean you shouldn’t be moral. You should be moral by the definition of “moral” and “should”. It’s an analytical truth. It is for selfishness to justify itself in the face of morality, not vice versa.
First of all, I should disclose that I don’t find ultimately any kind of objectivism coherent, including “objective reality.” It is useful to talk about objective reality and objectively right or wrong beliefs most of the time, but when you really drill down there are only beliefs that predict my experience more reliably or less reliably. In the end, nothing else matters to me (nor, I expect, anyone else—if they understand what I’m getting at here).
So you disagree with EY about making beliefs pay rent? Like, maybe some beliefs don’t pay rent but are still important? I just don’t see how that makes sense.
This seems circular.
What if I say, “So what?”
How do you know that?
If disagreeing means it is good to entertain useless beliefs, then no. If disagreeing means that instrumental utility is not the ultimate value, then yes.
You say that like that’s a bad thing. I said it was analytical and analytical truths would be expected to sound tautologous or circular.
So it’s still true. Not caring is not refutation.
Why do I think that is a useful phrasing? That would be a long post, but EY got the essential idea in Making Beliefs Pay Rent.
Well, what use is your belief in “objective value”?
Ultimately, that is to say at a deep level of analysis, I am non-cognitive to words like “true” and “refute.” I would substitute “useful” and “show people why it is not useful,” respectively.
I meant the second part: “but when you really drill down there are only beliefs that predict my experience more reliably or less reliably” How do you know that?
What objective value are your instrumental beliefs? You keep assuming useful-to-me is the ultimate value and it isn’t: Morality is, by definition.
Then I have a bridge to sell you.
And would it be true that it is non-useful? Since to assert P is to assert “P is true”, truth is a rather hard thing to eliminate. One would have to adopt the silence of Diogenes.
That’s what I was responding to.
Zorg: And what pan-galactic value are your objective values? Pan-galactic value is the ultimate value, dontcha know.
You just eliminated it: If to assert P is to assert “P is true,” then to assert “P is true” is to assert P. We could go back and forth like this for hours.
But you still haven’t defined objective value.
Dictionary says, “Not influenced by personal feelings, interpretations, or prejudice; based on facts; unbiased.”
How can a value be objective? EDIT: Especially since a value is a personal feeling. If you are defining “value” differently, how?
It is not the case that all beliefs can do is predict experience based on existing preferences. Beliefs can also set and modify preferences. I have given that counterargument several times.
I think moral values are ultimate because I can’t think of a valid argument of the form “I should do X because Y”. Please give an example of a pan-galactic value that can be substituted for Y.
Yeah, but it still comes back to truth. If I tell you it will increase your happiness to hit yourself on the head with a hammer, your response is going to have to amount to “no, that’s not true”.
By being (relatively) uninfluenced by personal feelings, interpretations, or prejudice; based on facts; unbiased.
You haven’t remotely established that as an identity. It is true that some people some of the time arrive at values through feelings. Others arrive at them (or revise them) through facts and thinking.
“Values can be defined as broad preferences concerning appropriate courses of action or outcomes”
I missed this:
I’ll just decide not to follow the advice, or I’ll try it out and then after experiencing pain I will decide not to follow the advice again. I might tell you that, too, but I don’t need to use the word “true” or any equivalent to do that. I can just say it didn’t work.
People have been known to follow really bad advice, sometimes to their detriment and suffering a lot of pain along the way.
Some people have followed excessively stringent diets to the point of malnutrition or death. (This isn’t intended as a swipe at CR—people have been known to go a lot farther than that.)
People have attempted (for years or decades) to shut down their sexual feelings because they think their God wants it.
Any word can be eliminated in favour of a definition or paraphrase. Not coming out with an equivalent—showing that you have dispensed with the concept—is harder. Why didn’t it work? You’re going to have to paraphrase “Because it wasn’t true” or refuse to answer.
The concept of truth is for utility, not utility for truth. To get them backwards is to merely be confused by the words themselves. It’s impossible to show you’ve dispensed with any concept, except to show that it isn’t useful for what you’re doing. That is what I’ve done. I’m non-cognitive to God, truth, and objective value (except as recently defined). Usually they all sound like religion, though they all are or were at one time useful approximate means of expressing things in English.
Truth is useful for whatever you want to do with it. If people can collect stamps for the sake of collecting stamps, they can collect truths for the sake of collecting truths.
Sounding like religion would not render something incomprehensible...but it could easily provoke an “I don’t like it” reaction, which is then dignified with the label “incoherent” or whatever.
I agree, if you mean things like, “If I now believe that she is really a he, I don’t want to take ‘her’ home anymore.”
Neither can I. I just don’t draw the same conclusion. There’s a difference between disagreeing with something and not knowing what it means, and I do seriously not know what you mean. I’m not sure why you would think it is veiled disagreement, seeing as lukeprog’s whole post was making this very same point about incoherence. (But incoherence also only has meaning in the sense of “incoherent to me” or someone else, so it’s not some kind of damning word. It simply means the message is not getting through to me. That could be your fault, my fault, or English’s fault, and I don’t really care which it is, but it would be preferable for something to actually make it across the inferential gap.)
EDIT: Oops, posted too soon.
So basically you are saying that preferences can change because of facts/beliefs, right? And I agree with that. To give a more mundane example, if I learn Safeway doesn’t carry egg nog and I want egg nog, I may no longer want to go to Safeway. If I learn that egg nog is bad for my health, I may no longer want egg nog. If I believe health doesn’t matter because the Singularity is near, I may want egg nog again. If I believe that egg nog is actually made of human brains, I may not want it anymore.
At bottom, I act to get enjoyment and/or avoid pain, that is, to win. What actions I believe will bring me enjoyment will indeed vary depending on my beliefs. But it is always ultimately that winning/happiness/enjoyment/fun/deliciousness/pleasure that I am after, and no change in belief can change that. I could take short-term pain for long-term gain, but that would be because I feel better doing that than not.
But it seems to me that just because what I want can be influenced by what could be called objective or factual beliefs doesn’t make my want for deliciousness “uninfluenced by personal feelings.”
In summary, value/preferences can either be defined to include (1) only personal feelings (though they may be universal or semi-universal), or to also include (2) beliefs about what would or wouldn’t lead to such personal feelings. I can see how you mean that 2 could be objective, and then would want to call them thus “objective values.” But not for 1, because personal feelings are, well, personal.
If so, then it seems I am back to my initial response to lukeprog and the ensuing brief discussion. In short, if the only thing someone can be wrong about is their beliefs about objective facts, then I wouldn’t want to call that morality, but more just self-help, or just what the whole rest of LW is. It is not that someone could be wrong about their preferences/values in sense 1, only about preferences/values in sense 2.
“Incoherence” means several things. Some of them, such as self-contradiction, are as objective as anything. You seem to find morality meaningless in some personal sense. Looking at dictionaries doesn’t seem to work for you. Dictionaries tend to define the moral as the good. It is hard to believe that anyone can grow up not hearing the word “good” used a lot, unless they were raised by wolves. So that’s why I see complaints of incoherence as being disguised disagreement.
If you say so. That doesn’t make morality false, meaningless or subjective. It makes you an amoral hedonist.
Perhaps not completely, but that still leaves some things as relatively more objective than others.
Then your categories aren’t exhaustive, because preferences can also be defined to include universalisable values alongside personal whims. You may be making the classic error of taking “subjective” to mean “believed by a subject”.
The problem isn’t that I don’t know what it means. The problem is that it means many different things and I don’t know which of those you mean by it.
I have moral sentiments (empathy, sense of justice, indignation, etc.), so I’m not amoral. And I am not particularly high time-preference, so I’m not a hedonist.
If you mean preferences that everyone else shares, sure, but there’s no stipulation in my definitions that other people can’t share the preferences. In fact, I said, “(though they may be universal or semi-universal).”
It’d be a “classic error” to assume you meant one definition of subjective rather than another, when you haven’t supplied one yourself? This is about the eighth time in this discussion that I’ve thought that I can’t imagine what you think language even is.
I doubt we have any disagreement, to be honest. I think we just view language radically differently. (You could say we have a disagreement about language.)
What “moral” means, or what “good” means?
No, that isn’t the problem. It has one basic meaning, but there are a lot of different theories about it. Elsewhere you say that utilitarianism renders objective morality meaningful. A theory of X cannot render X meaningful, but it can render X plausible.
But you theorise that you only act on them (and that nobody ever acts but) to increase your pleasure.
I don’t see the point in stipulating that preferences can’t be shared. People who believe they can be just have to find another word. Nothing is proven.
I’ve quoted the dictionary definition, and that’s what I mean.
“existing in the mind; belonging to the thinking subject rather than to the object of thought ( opposed to objective). 2. pertaining to or characteristic of an individual; personal; individual: a subjective evaluation. 3. placing excessive emphasis on one’s own moods, attitudes, opinions, etc.; unduly egocentric”
I think language is public, I think (genuine) disagreements about meaning can be resolved with dictionaries, and I think you shouldn’t assume someone is using idiosyncratic definitions unless they give you good reason.
Objective truth is what you should believe even if you don’t. Objective values are the values you should have even if you have different values.
Where the groundwork is about 90% of the job...
That has been answered several times. You are assuming that instrumental value is ultimate value, and it isn’t.
Imagine you are arguing with someone who doesn’t “get” rationality. If they believe in instrumental values, you can persuade them that they should care about rationality because it will enable them to achieve their aims. If they don’t, you can’t. Even good arguments will fail to work on some people.
You should care about morality because it is morality. Morality defines (the ultimate kind of) “should”.
“What I should do” =def “what is moral”.
Not everyone does get that, which is why “don’t care” is “made to care” by various sanctions.
“Should” for what purpose?
I certainly agree there. The question is whether it is more useful to assign the label “philosophy” to groundwork+theory or just the theory. A third possibility is that doing enough groundwork will make it clear to all discussants that there are no (or almost no) actual theories in what is now called “philosophy,” only groundwork, meaning we would all be in agreement and there is nothing to argue except definitions.
I may not be able to convince them, but at least I would be trying to convince them on the grounds of helping them achieve their aims. It seems you’re saying that, in the present argument, you are not trying to help me achieve my aims (correct me if I’m wrong). This is what makes me curious about why you think I would care. The reasons I do participate, by the way, are that I hold out the chance that you have a reason why I would care (which maybe you are not articulating in a way that makes sense to me yet), that you or others will come to see my view that it’s all semantic confusion, and because I don’t want to sound dismissive or obstinate in continuing to say, “So what?”
Believing in truth is what rational people do.
Which is good because...?
Correct.
I can argue that your personal aims are not the ultimate value, and I can suppose you might care about that just because it is true. That is how arguments work: one rational agent tries to persuade another that something is true. If one of the participants doesn’t care about truth at all, the process probably isn’t going to work.
I think that horse has bolted. Inasmuch as you don’t care about truth per se. you have advertised yourself as being irrational.
Winning is what rational people do. We can go back and forth like this.
It benefits me, because I enjoy helping people. See, I can say, “So what?” in response to “You’re wrong.” Then you say, “You’re still wrong.” And I walk away feeling none the worse. Usually when someone claims I am wrong I take it seriously, but only because I know how it could ever, possibly, potentially ever affect me negatively. In this case you are saying it is different, and I can safely walk away with no terror ever to befall me for “being wrong.”
Sure, people usually argue whether something is “true or false” because such status makes a difference (at least potentially) to their pain or pleasure, happiness, utility, etc. As this is almost always the case, it is customarily unusual for someone to say they don’t care about something being true or false. But in a situation where, ex hypothesi, the thing being discussed—very unusually—is claimed to not have any effect on such things, “true” and “false” become pointless labels. I only ever use such labels because they can help me enjoy life more. When they can’t, I will happily discard them.
So you say. I can think of two arguments against that: people acquire true beliefs that aren’t immediately useful, and untrue beliefs can be pleasing.
I never said they had to be “immediately useful” (hardly anything ever is). Untrue beliefs might be pleasing, but when people are arguing truth and falsehood it is not in order to prove that the beliefs they hold are untrue so that they can enjoy believing them, so it’s not an objection either.
You still don’t have a good argument to the effect that no one cares about truth per se.
A lot of people care about truth, even when (I suspect) they diminish their enjoyment needlessly by doing so, so no argument there. In the parent I’m just continuing to try to explain why my stance might sound weird. My point from farther above, though, is just that I don’t/wouldn’t care about “truth” in those rare and odd cases where it is already part of the premises that truth or falsehood will not affect me in any way.
I think “usually” is enough qualification, especially considering that he says “makes a difference” and not “completely determines”.
Hmm. It sounds to me like a kind of methodological twist on logical positivism: just don’t bother with things that don’t have empirical consequences.
You say that objective values are incoherent, but you offer no argument for it. Presenting philosophical claims without justification isn’t something different to philosophy, or something better. It isn’t good rationality either. Rationality is as rationality does.
By incoherent I simply mean “I don’t know how to interpret the words.” So far no one seems to want to help me do that, so I can only await a coherent definition of objective ethics and related terms. Then possibly an argument could start. (But this is all like deja vu from the recent metaethics threads.)
Can you interpret the words “morality is subjective”? How about the words “morality is not subjective”?
“Morality is subjective”: Each person has their own moral sentiments.
“Morality is not subjective”: Each person does not have their own moral sentiments. Or there is something more than each person’s moral sentiments that is worth calling “moral.” <--- But I ask, what is that “something more”?
OK. That is not what “subjective” means. What it means is that if something is subjective, an opinion is guaranteed to be correct or the last word on the matter just because it is the person’s opinion. And “objective” therefore means that it is possible for someone to be wrong in their opinion.
I don’t claim moral sentiments are correct, but simply that a person’s moral sentiment is their moral sentiment. They feel some emotions, and that’s all I know. You are seeming to say there is some way those emotions can be correct or incorrect, but in what sense? Or probably a clearer way to ask the question is, “What disadvantage can I anticipate if my emotions are incorrect?”
An emotion, such as a feeling of elation or disgust, is not correct or incorrect per se; but an emotion per se is no basis for a moral sentiment, because moral sentiment has to be about something. You could think gay marriage is wrong because homosexuality disgusts you, or you could feel serial-killing is good because it elates you, but that doesn’t mean the conclusions you are coming to are right. It may be a cast iron fact that you have those particular sentiments, but that says nothing about the correctness of their content, any more than any opinion you entertain is automatically correct.
ETA The disadvantages you can expect if your emotions are incorrect include being in the wrong whilst feeling you are in the right. Much as if you are entertaining incorrect opinions.
What if I don’t care about being wrong (if that’s really the only consequence I experience)? What if I just want to win?
Then you are, or are likely to be, morally in the wrong. That is of course possible. You can choose to do wrong. But it doesn’t constitute any kind of argument. Someone can elect to ignore the roundness of the world for some perverse reason, but that doesn’t make “The world is round” false or meaningless or subjective.
Indeed it is not an argument. Yet I can still say, “So what?” I am not going to worry about something that has no effect on my happiness. If there is some way it would have an effect, then I’d care about it.
The difference is, believing “The world is round” affects whether I win or not, whereas believing “I’m morally in the wrong” does not.
That is apparently true in your hypothetical, but it’s not true in the real world. Just as the roundness of the world has consequences, the wrongness of an action has consequences. For example, if you kill someone, then your fate is going to depend (probabilistically) on whether you were in the right (e.g. he attacked and you were defending your life) or in the wrong (e.g. you murdered him when he caught you burgling his house). The more in the right you were, then, ceteris paribus, the better your chances are.
You’re interpreting “I’m morally in the wrong” to mean something like, “Other people will react badly to my actions,” in which case I fully agree with you that it would affect my winning. Peterdjones apparently does not mean it that way, though.
Actually I am not. I am interpreting “I’m morally wrong” to mean something like, “I made an error of arithmetic in an area where other people depend on me.”
An error of arithmetic is an error of arithmetic regardless of whether any other people catch it, and regardless of whether any other people react badly to it. It is not, however, causally disconnected from their reaction, because, even though an error of arithmetic is what it is regardless of people’s reaction to it, nevertheless people will probably react badly to it if you’ve made it in an area where other people depend on you. For example, if you made an error of arithmetic in taking a test, it is probably the case that the test-grader did not make the same error of arithmetic and so it is probably the case that he will react badly to your error. Nevertheless, your error of arithmetic is an error and is not merely getting-a-different-answer-from-the-grader. Even in the improbable case where you luck out and the test grader makes exactly the same error as you and so you get full marks, nevertheless, you did still make that error.
Even if everyone except you wakes up tomorrow and believes that 3+4=6, whereas you still remember that 3+4=7, nevertheless in many contexts you had better not switch to what the majority believe. For example, if you are designing something that will stand up, like a building or a bridge, you had better get your math right, you had better correctly add 3+4=7 in the course of designing the edifice if that sum is ever called on calculating whether the structure will stand up.
If humanity divides into two factions, one faction of which believes that 3+4=6 and the other of which believes that 3+4=7, then the latter faction, the one that adds correctly, will in all likelihood over time prevail on account of being right. This is true even if the latter group starts out in the minority. Just imagine what sort of tricks you could pull on people who believe that 3+4=6. Because of the truth of 3+4=7, eventually people who are aware of this truth will succeed and those who believe that 3+4=6 will fail, and over time the vast majority of society will once again come to accept that 3+4=7.
And similarly with morality.
Nothing’s jumping out at me that would seriously impact a group’s effectiveness from day to day. I rarely find myself needing to add three and four in particular, and even more rarely in high-stakes situations. What did you have in mind?
Suppose you think that 3+4=6.
I offer you the following deal: give me $3 today and $4 tomorrow, and I will give you a 50 cent profit the day after tomorrow, by returning to you $6.50. You can take as much advantage of this as you want. In fact, if you like, you can give me $3 this second, $4 in one second, and in the following second I will give you back all your money plus 50 cents profit—that is, I will give you $6.50 in two seconds.
Since you think that 3+4=6, you will jump at this amazing deal.
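A minimal sketch of the money pump just described, assuming nothing beyond the deal itself: the dollar amounts are the ones offered above, and the two adder functions are made-up stand-ins for the two beliefs. The point is only that the same trade looks like a 50-cent gain under the faulty sum and a 50-cent loss under the correct one.

```python
# Hypothetical illustration of the deal above: $3 today, $4 tomorrow, $6.50 back.
# An agent who sums 3 + 4 as 6 evaluates the trade as profitable; correct
# addition shows a loss of 50 cents per round.

def correct_add(a, b):
    return a + b

def faulty_add(a, b):
    # Models the belief that 3 + 4 = 6 as an error for that one pair of numbers.
    if {a, b} == {3, 4}:
        return 6
    return a + b

def perceived_profit(add, payments, payout):
    # Total the payments with whichever addition rule the agent trusts,
    # then compare against the promised payout.
    total = 0
    for p in payments:
        total = add(total, p)
    return payout - total

payments = [3, 4]   # give $3, then $4
payout = 6.50       # receive $6.50 afterwards

print("Believer in 3+4=6 expects a profit of:", perceived_profit(faulty_add, payments, payout))  # 0.5
print("The actual outcome is:", perceived_profit(correct_add, payments, payout))                 # -0.5
```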
I find that most people who believe absurd things still have functioning filters for “something is fishy about this”. I talked to a person who believed that the world was going to end in 2012, and I offered to give them a dollar right then in exchange for a hundred after the world didn’t end, but of course they didn’t take it: something was fishy about that.
Also, dollars are divisible: someone who believes that 3+4=6 may not believe that 300+400=600.
If he isn’t willing to take your trade, then his alleged belief that the world will end in 2012 is weak at best. In contrast, if you offer to give me $6.50 in exchange for $3 plus $3, then I will take your offer, because I really do believe that 3+3=6.
On the matter of divisibility, you are essentially proposing that someone with faulty arithmetic can effectively repair the gap by translating arithmetic problems away from the gap (e.g. by realizing that 3 dollars is 300 pennies and doing arithmetic on the pennies). But in order for them to do this consistently they need to know where the gap is, and if they know that, then it’s not a genuine gap. If they realize that their belief that 3+4=6 is faulty, then they don’t really believe it. In contrast, if they don’t realize that their belief that 3+4=6 is faulty, then they won’t consistently translate arithmetic problems away from the gap, and so my task becomes a simple matter of finding areas where they don’t translate problems away from the gap, but instead fall in.
Are you saying that you would not be even a little suspicious and inclined to back off if someone said they’d give you $6.50 in exchange for $3+$3? Not because your belief in arithmetic is shaky, but because your trust that people will give you fifty cents for no obvious reason is nonexistent and there is probably something going on?
I’m not denying that in a thought experiment, agents that are wrong about arithmetic can be money-pumped. I’m skeptical that in reality, human beings that are wrong about arithmetic can be money-pumped on an interesting scale.
In my hypothetical, we can suppose that they are perfectly aware of the existence of the other group. That is, the people who think that 3+4=7 are aware of the people who think that 3+4=6, and vice versa. This will provide them with all the explanation they need for the offer. They will think, “this person is one of those people who think that 3+4=7”, and that will explain to them the deal. They will see that the others are trying to profit off them, but they will believe that the attempt will fail, because after all, 3+4=6.
As a matter of fact, in my hypothetical the people who believe that 3+4=6 would be just as likely to offer those who believe that 3+4=7 a deal in an attempt to money-pump them. Since they believe that 3+4=6, and are aware of the belief of the others, they might offer the others the following deal: “give us $6.50, and then the next day we will give you $3 and the day after $4.” Since they believe that 3+4=6, they will think they are ripping the others off.
The thought experiment wasn’t intended to be applied to humans as they really are. It was intended to explain humans as they really are by imagining a competition between two kinds of humans—a group that is like us, and a group that is not like us. In the hypothetical scenario, the group like us wins.
And I think you completely missed my point, by the way. My point was that arithmetic is not merely a matter of agreement. The truth of a sum is not merely a matter of the majority of humanity agreeing on it. If more than half of humans believed that 3+4=6, this would not make 3+4=6 be true. Arithmetic truth is independent of majority opinion (call the view that arithmetic truth is a matter of consensus within a human group “arithmetic relativism” or “the consensus theory of arithmetic truth”). I argued for this as follows: suppose that half of humanity—nay, more than half—believed that 3+4=6, and a minority believed that 3+4=7. I argued that the minority with the latter belief would have the advantage. But if consensus defined arithmetic truth, that should not be the case. Therefore consensus does not define arithmetic truth.
My point is this: that arithmetic relativism is false. In your response, you actually assumed this point, because you’ve been assuming all along that 3+4=6 is false, even though in my hypothetical scenario a majority of humanity believed it is true.
So you’ve actually assumed my conclusion but questioned the argument that I used to argue for the conclusion.
And this, in turn, was to illustrate a more general point about consensus theories and relativism. The context was a discussion of morality. I had been interpreted as advocating what amounts to a consensus theory of morality, and I was trying to explain why my specific claims do not entail a consensus theory of morality, but are also compatible with a theory of morality as independent of consensus.
I agree with this, if that makes any difference.
In sum, you seem to be saying that morality involves arithmetic, and being wrong about arithmetic can hurt me, so being wrong about morality can hurt me.
There’s no particular connection between morality and arithmetic that I’m aware of. I brought up arithmetic to illustrate a point. My hope was that arithmetic is less problematic, less apt to lead us down philosophical blind alleys, so that by using it to illustrate a point I wasn’t opening up yet another can of worms.
Then you basically seem to be saying I should signal a certain morality if I want to get on well in society. Well I do agree.
Whether someone is judged right and wrong by others has consequences, but the people doing the judging might be wrong. It is still an error to make morality justify itself in terms of instrumental utility, since there are plenty of examples of things that are instrumentally right but ethically wrong, like improved gas chambers.
Actually being in the right increases your probability of being judged to be in the right. Yes, the people doing the judging may be wrong, and that is why I made the statement probabilistic. This can be made blindingly obvious with an example. Go to a random country and start gunning down random people in the street. The people there will, with probability so close to 1 as makes no real difference, judge you to be in the wrong, because you of course will be in the wrong.
There is a reason why people’s judgment is not far off from right. It’s the same reason that people’s ability to do basic arithmetic when it comes to money is not far off from right. Someone who fails to understand that $10 is twice $5 (or rather the equivalent in the local currency) is going to be robbed blind and his chances of reproduction are slim to none. Similarly, someone whose judgment of right and wrong is seriously defective is in serious trouble. If someone witnesses a criminal lunatic gun down random people in the street and then walks up to him and says, “nice day”, he’s a serious candidate for a Darwin Award. Correct recognition of evil is a basic life skill, and any human who does not have it will be cut out of the gene pool. And so, if you go to a random country and start killing people randomly, you will be neutralized by the locals quickly. That’s a prediction. Moral thought has predictive power.
The only reason anyone can get away with the mass murder that you allude to is that they have overwhelming power on their side. And even they did it in secret, as I recall learning, which suggests that powerful as they were, they were not so powerful that they felt safe murdering millions openly.
Morality is how a human society governs itself in which no single person or organized group has overwhelming power over the rest of society. It is the spontaneous self-regulation of humanity. Its scope is therefore delimited by the absence of a person or organization with overwhelming power. Even though just about every place on Earth has a state, since it is not a totalitarian state there are many areas of life in which the state does not interfere, and which are therefore effectively free of state influence. In these areas of life humanity spontaneously self-regulates, and the name of the system of spontaneous self-regulation is morality.
It sounds to me like you’re describing the ability to recognize danger, not evil, there.
Say that your hypothetical criminal lunatic manages to avoid the police, and goes about his life. Later that week, he’s at a buffet restaurant, acting normally. Is he still evil? Assuming nobody recognizes him from the shooting, do you expect the other people using the buffet to react unusually to him in any way?
It’s not either/or. There is no such thing as a bare sense of danger. For example, if you are about to drive your car off a cliff, hopefully you notice in time and stop. In that case, you’ve sensed danger—but you also sensed the edge of a cliff, probably with your eyes. Or if you are about to drink antifreeze, hopefully you notice in time and stop. In that case, you’ve sensed danger—but you’ve also sensed antifreeze, probably with your nose.
And so on. It’s not either/or. You don’t either sense danger or sense some specific thing which happens to be dangerous. Rather, you sense something that happens to be dangerous, and because you know it’s dangerous, you sense danger.
Chances are higher than average that if he was a criminal lunatic a few days ago, he is still a criminal lunatic today.
Obviously not, because if you assume that people fail to perceive something, then it follows that they will behave in a way that is consistent with their failure to perceive it. Similarly, if you fail to notice that the antifreeze that you’re drinking is anything other than fruit punch, then you can be expected to drink it just as if it were fruit punch.
My point was that in the shooting case, the perception of danger is sufficient to explain bystanders’ behavior. They may perceive other things, but that seems mostly irrelevant.
You said:
This claim appears to be incompatible with your expectation that people will not notice your hypothetical murderer when they encounter him acting according to social norms after committing a murder, given that he’s supposedly still evil.
People perceive danger because they perceive evil, and evil is dangerous.
It is not irrelevant that they perceive a specific thing (such as evil) which is dangerous. Take away the perception of the specific thing, and they have no basis upon which to perceive danger. Only Spiderman directly perceives danger, without perceiving some specific thing which is dangerous. And he’s fictional.
I was referring to the standard, common ability to recognize evil. I was saying that someone who does not have that ability will be cut out of the gene pool (not definitely—probabilistically, his chances of surviving and reproducing are reduced, and over the generations the effect of this disadvantage compounds).
People who fail to recognize that the guy is that same guy from before are not thereby missing the standard human ability to recognize evil.
Except when the evil guys take over. Then you are in trouble if you oppose them.
That doesn’t affect my point. If there are actual or conceptual circumstances where instrumental good diverges from moral good, the two cannot be equated.
Why would it be wrong if they do? Your theory of morality seems to be in need of another theory of morality to justify it.
Which is why the effective scope of morality is limited by concentrated power, as I said.
I did not equate moral good with instrumental good in the first place.
I didn’t say it would be wrong. I was talking about making predictions. The usefulness of morality in helping you to predict outcomes is limited by concentrated power.
On the contrary, my theory of morality is confirmed by the evidence. You yourself supplied some of the evidence. You pointed out that a concentration of power creates an exception to the prediction that someone who guns down random people will be neutralized. But this exception fits with my theory of morality, since my theory of morality is that it is the spontaneous self-regulation of humanity. Concentrated power interferes with self-regulation.
You say:
...but you also say...
...which seems to imply that you are still thinking of morality as something that has to pay its way instrumentally, by making useful predictions.
It’s a conceptual truth that power interferes with spontaneous self-regulation: but that isn’t the point. The point is not that you have a theory that makes predictions, but whether it is a theory of morality.
It is dubious to say of any society that the way it is organised is ipso facto moral. You have forestalled the relativistic problem by saying that societies must self-organise for equality and justice, not any old way, which takes it as read that equality and justice are Good Things. But an ethical theory must explain why they are good, not rest on them as a given.
“Has to”? I don’t remember saying “has to”. I remember saying “does”, or words to that effect. I was disputing the following claim:
This is factually false, considered as a claim about the real world.
I am presenting the hypothesis that, under certain constraints, there is no way for humanity to organize itself but morally or close to morally and that it does organize itself morally or close to morally. The most important constraint is that the organization is spontaneous, that is to say, that it does not rely on a central power forcing everyone to follow the same rules invented by that same central power. Another constraint is absence of war, though I think this constraint is already implicit in the idea of “spontaneous order” that I am making use of, since war destroys order and prevents order.
Because humans organize themselves morally, it is possible to make predictions. However, because of the “no central power” constraint, the scope of those predictions is limited to areas outside the control of the central power.
Fortunately for those of us who seek to make predictions on the basis of morality, and also fortunately for people in general, even though the planet is covered with centralized states, much of life still remains largely outside of their control.
Is that a stipulative definition (“morality” =def “spontaneous organisation”), or is there some independent standard of morality on which it is based?
What about non-centralised power? What if one fairly large group (the gentry, men, citizens, some racial group) has power over another in a decentralised way?
And what counts as a society? Can an Athenian slave-owner state that all citizens in their society are equal, and, as for slaves, they are not members of their society?
ETA: Actually, it’s worse than that. Not only are there examples of non-centralised power, there are cases where centralised power is on the side of angels and spontaneous self-organisation on the other side; for instance the Civil Rights struggle, where the federal government backed equality, and the opposition was from the grassroots.
The Civil Rights struggle was national government versus state government, not government versus people. The Jim Crow laws were laws created by state legislatures, not spontaneous laws created by the people.
There is, by the way, such a thing as spontaneous law created by the people even under the state. The book Order Without Law is about this. The “order” it refers to is the spontaneous law—that is, the spontaneous self-government of the people acting privately, without help from the state. This spontaneous self-government ignores and in some cases contradicts the state’s official, legislated law.
Jim Crow was an example of official state law, and not an example of spontaneous order.
Plenty of things that happened weren’t sanctioned by state legislatures, such as discrimination by private lawyers, hassling of voters during registration drives, and the assassination of MLK.
But law isn’t morality. There is such a thing as laws that apply only to certain people, and which support privilege and the status quo rather than equality and justice.
Legislation distorts society and the distortion ripples outward. As for the assassination, that was a single act. Order is a statistical regularity.
I didn’t say it was. I pointed out an example of spontaneous order. It is my thesis that spontaneous order tends to be moral. Much order is spontaneous, so much order is moral, so you can make predictions on the basis of what is moral. That should not be confused with a claim that all order is morality, that all law is morality, which is the claim that you are disputing and a claim I did not make.
From its primordial state of equality...? I can see how a society that starts equal might self-organise to stay that way. But I don’t think they start equal that often.
The fact that you are amoral does not mean there is anything wrong with morality, and is not an argument against it. You might as well be saying “there is a perfectly good rational argument that the world is round, but I prefer to be irrational”.
That doesn’t constitute an argument unless you can explain why your winning is the only thing that should matter.
Yeah, I said it’s not an argument. Yet again I can only ask, “So what?” (And this doesn’t make me amoral in the sense of not having moral sentiments. If you tell me it is wrong to kill a dog for no reason, I will agree because I will interpret that as, “We both would be disgusted at the prospect of killing a dog for no reason.” But you seem to be saying there is something more.)
The wordings “affect my winning” and “matter” mean the same thing to me. I take “The world is round” seriously because it matters for my actions. I do not see how “I’m morally in the wrong”* matters for my actions. (Nor how “I’m pan-galactically in the wrong” matters.)
*EDIT: in the sense that you seem to be using it (quite possibly because I don’t know what that sense even is!).
So being wrong and not caring you are in the wrong is not the same as being right.
Yes. I am saying that moral sentiments can be wrong, and that that can be realised through reason, and that getting morality right matters more than anything.
But they don’t mean the same thing. Morality matters more than anything else by definition. You don’t prove anything by adopting an idiosyncratic private language.
The question is whether mattering for your actions is morally justifiable.
Yet I still don’t care, and by your own admission I suffer not in the slightest from my lack of caring.
Zorg says that getting pangalacticism right matters more than anything. He cannot tell us why it matters, but boy it really does matter.
Which would be? If you refer me to the dictionary again, I think we’re done here.
The fact that you are not going to worry about morality, does not make morality a) false b) meaningless or c) subjective. Can I take it you are no longer arguing for any of claims a) b) or c) ?
You have not succeeded in showing that winning is the most important thing.
I’ve never argued (a), I’m still arguing (actually just informing you) that the words “objective morality” are meaningless to me, and I’m still arguing (c) but only in the sense that it is equivalent to (b): in other words, I can only await some argument that morality is objective. (But first I’d need a definition!)
I’m using the word winning as a synonym for “getting what I want,” and I understand the most important thing to mean “what I care about most.” And I mean “want” and “care about” in a way that makes it tautological. Keep in mind I want other people to be happy, not suffer, etc. Nothing either of us has argued so far indicates we would necessarily have different moral sentiments about anything.
You are not actually being all that informative, since there remains a distinct suspicion that when you say some X is meaningless-to-you, that is a proxy for I-don’t-agree-with-it. I notice throughout these discussions that you never reference accepted dictionary definitions as a basis for meaningfulness, but instead always offer some kind of idiosyncratic personal testimony.
What is wrong with dictionary definitions?
That doesn’t affect anything. You still have no proof for the revised version.
Other people out there in the non-existent Objective World?
I don’t think moral anti-realists are generally immoral people. I do think it is an intellectual mistake, whether or not you care about that.
Zorg said the same thing about his pan-galactic ethics.
Did you even read the post we’re commenting on?
Wait, you want proof that getting what I want is what I care about most?
Read what I wrote again.
Read.
“Changing your aims” is an action, presumably available for guiding with philosophy.
Upvoted for thoughtfulness and thoroughness.
I’m using ‘definition’ in the common sense: “the formal statement of the meaning or significance of a word, phrase, etc.” A stipulative definition is a kind of definition “in which a new or currently-existing term is given a specific meaning for the purposes of argument or discussion in a given context.”
A conceptual analysis of a term using necessary and sufficient conditions is another type of definition, in the common sense of ‘definition’ given above. Normally, a conceptual analysis seeks to arrive at a “formal statement of the meaning or significance of a word, phrase, etc.” in terms of necessary and sufficient conditions.
Using my dictionary usage of the term ‘define’, I would speak (in my language) of conceptual analysis as a particular way of defining a term, since the end result of a conceptual analysis is meant to be a “formal statement of the meaning or significance of a word, phrase, etc.”
I opened with a debate that everybody knew was silly, and tried to show that it was analogous to popular forms of conceptual analysis. I didn’t want to start with a popular example of conceptual analysis because philosophy-familiar people will have been trained not to find those examples silly. I gave at least three examples of actual philosophical analysis in my post (Schroeder on desire, Gettier on knowledge, Jackson on morality).
And I do think my opening offers an accurate example of conceptual analysis. Albert and Barry’s arguments about the computer microphone and hypothetical aliens are meant to argue about their intuitive concepts of ‘sound’, and what set of necessary and sufficient conditions they might converge upon. That’s standard conceptual analysis method.
The reason this process looks silly to us (when using a non-standard example like ‘sound’) is that it is so unproductive. Why think Albert and Barry have the same concept in mind? Words mean slightly different things in different cultures, subcultures, and small communities. We develop different intuitions about their meaning based on divergent life experiences. Our intuitions differ from each other’s due to the specifics of unconscious associative learning and attribute substitution heuristics. What is the point of bashing our intuitions about the meaning of terms against each other for thousands of pages, in the hopes that we’ll converge on a precise set of necessary and sufficient conditions? Even if we can get Albert and Barry to agree, what happens when Susan wants to use the same term, but has slightly differing intuitions about its meaning? And, let’s say we arrive at a messy set of 6 necessary and sufficient conditions for the intuitive meaning of the term. Is that going to be as useful for communication as one we consciously chose because it carved-up thingspace well? I doubt it. The IAU’s definition of ‘planet’ is more useful than the messy ‘folk’ definition of ‘planet’. Folk intuitions about ‘planet’ evolved over thousands of years and different people have different intuitions which may not always converge. In 2006, the IAU used modern astronomical knowledge to carve up thingspace in a more useful and informed way than our intuitions do.
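As a concrete illustration (a toy sketch of my own, not anything from the IAU’s text), here is what a stipulated definition looks like when made fully explicit: the IAU’s 2006 definition of ‘planet’ as three conditions that are individually necessary and jointly sufficient. The `Body` class and its field names are invented for the example.

```python
# Toy encoding of the IAU's 2006 stipulated definition of 'planet':
# (1) orbits the Sun, (2) massive enough for hydrostatic equilibrium (nearly
# round), (3) has cleared the neighbourhood around its orbit.

from dataclasses import dataclass

@dataclass
class Body:
    name: str
    orbits_sun: bool
    hydrostatic_equilibrium: bool
    cleared_neighbourhood: bool

def is_planet(b: Body) -> bool:
    # All three conditions are individually necessary and jointly sufficient.
    return b.orbits_sun and b.hydrostatic_equilibrium and b.cleared_neighbourhood

print(is_planet(Body("Earth", True, True, True)))   # True
print(is_planet(Body("Pluto", True, True, False)))  # False: reclassified as a dwarf planet
```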
Vague, intuitively-defined concepts are useful enough for daily conversation in many cases, and wherever they break down due to divergent intuitions and uses, we can just switch to stipulation/tabooing.
Yes. I’m going to argue about facts and anticipations. I’ve tried to show (a bit) in this post and in this comment why doing (certain kinds of) conceptual analysis isn’t worth it. I’m curious to hear your answers to my many-questions paragraph about the use of conceptual analysis, above.
I’ve skipped responding to many parts of your comment because I wanted to ‘get on the same page’ about a few things first. Please re-raise any issues you’d like a response on.
You are surely right that there is no point in arguing over definitions in at least one sense—esp the definition of “definition”. Your reply is reasonable and I continue to think that the hallmark of rationality is susceptibility to persuasion, but I am not won over yet. I hope the following engages constructively with your comments.
Suppose
we have two people, Albert and Barry
we have one thing, a car, X, of determinate interior volume
we have one sentence, S: “X is a subcompact”.
Albert affirms S, Barry denies S.
Scenario (1): Albert and Barry agree on the standard definition of ‘subcompact’ - a car is a subcompact just in case 2 407 L < car volume < 2 803 L, but they disagree as to the volume of X. Clearly a factual disagreement.
Scenario (2): Albert and Barry agree on the volume of X, but disagree on the standard definition of ‘subcompact’ (a visit to Wikipedia would resolve the matter). This is a disagreement about standard definitions, and isn’t anything people should engage in for long, I agree.
Scenario (3): Albert and Barry agree as to the volume of X and the standard definition, but Barry thinks the standard definition is misguided, and that if it were corrected, X wouldn’t be classified as subcompact -i.e., X isn’t really subcompact, notwithstanding the received definition. This doesn’t have to be a silly position. It might be that if you graphed numbers of models of car against volume, using various different volume increments, you would find cars really do fall into natural -if vague- groups, and that the natural cutoff for subcompacts is different than the received definition. And this might really matter—a parking-challenged jurisdiction might offer a fee discount for subcompact owners. I would call this a disagreement about the concept of ‘subcompact car’. I understand you want to call this a disagreement about definitions, albeit of a different kind than in scenario (2).
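A minimal sketch, with made-up volume figures, of the kind of check Scenario 3 imagines: sort the interior volumes, find the largest gap between consecutive values, and see whether the natural break lines up with the received 2 407 L cutoff. Both the data and the helper function are purely illustrative.

```python
# Made-up interior volumes in litres -- illustrative only, not real car data.
volumes = [2350, 2380, 2410, 2440, 2460, 2700, 2730, 2760, 2790, 2980, 3010, 3050]

def largest_gap_cutoff(values):
    # A crude stand-in for eyeballing a histogram: propose a class boundary
    # at the midpoint of the largest gap between consecutive sorted values.
    values = sorted(values)
    gap, i = max((values[j + 1] - values[j], j) for j in range(len(values) - 1))
    return (values[i] + values[i + 1]) / 2

print(f"Natural break at roughly {largest_gap_cutoff(volumes):.0f} L")
print("Received definition puts the subcompact band at 2 407 L to 2 803 L")
```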
Argument in scenarios 1 and 2 is futile—there is an acknowledged objective answer, and a way to get it—the way to resolve the matter is to measure or to look up. Arguments as in scenario 3, though, can be useful -especially with less arbitrary concepts than in the example. The goal in such cases is to clarify -to rationalize- concepts. Even if you don’t arrive at an uncontroversial end point, you often learn a lot about the concepts (‘good’, ‘knowledge’, ‘desires’, etc.) in the process. Your example of the re-definition of ‘planet’ fits this model, I think.
This said, none of these scenarios represents a typical disagreement over a conceptual analysis. In such a debate, there typically is not a received, widely accepted analysis or strict definition, just as in meaning something by a word, we don’t typically have in mind some strict definition. On the contrary, typically, intuitions about what falls under the concept are agreed by almost everyone, one person sticks his neck out with proposed necessary and sufficient conditions meant to capture all and only the agreed instances, and then challengers work to contrive examples which often everyone agrees refute the analysis. This is how I see it, anyway. I’d be interested to know if this seems wrong.
You may think it’s obvious, but I don’t see you’ve shown any of these 3 examples is silly. I don’t see that Schroeder’s project is silly (I haven’t read Schroeder, admittedly). Insofar as rational agents are typically modelled merely in terms of their beliefs and desires, what desires are is important to our understanding of ourselves as rational. Testing a proposed analysis by seeking to contrive counter-examples -even far-fetched- helps illuminate the concept—helps us think about what a desire -and hence in part a rational agent- is.
As for Gettier, his paper, as I know you are aware, listed counter-examples to the analysis of knowledge as justified true belief. He contrived a series of cases in which people justifiedly believe true propositions, and yet -we intuitively agree- do not know them. The key point is that effectively everyone shares the intuition—that’s why the paper was so successful, and this is often how these debates go. Part of what’s interesting is precisely that although people do share quite subtle intuitions, the task of making them explicit—conceptual analysis—is elusive.
I objected to your example because I didn’t see how anyone could have an intuition based on what you said, whereas clear intuitions are key to such arguments. Now, it would definitely be a bad plan to take on the burden of defending all philosophical arguments—not all published arguments are top-drawer stuff (but can Cog Sci, e.g., make this boast?). One target of much abuse is John Searle’s Chinese Room argument. His argument is multiply flawed, as far as I’m concerned -could get into that another time. But I still think it’s interesting, for what it reveals about differences in intuitions. There are quite different reactions from smart people.
This gets to the crux. We make different judgements, true, but in virtue of speaking the same language we must in an important sense mean the same thing by our words. The logic of communication requires that we take ourselves to be talking about the same thing in using language -whether that thing be goodness, knowledge, planethood or hockey pucks. Your point about the IAU and the definition of ‘planet’ demonstrates the same kind of process of clarification of a concept, albeit informed by empirical data. The point of the bashing is that it really does result in progress—we really do come to a better understanding of things.
As I see it, your central point is that conceptual analysis is useful because it results in a particular kind of process: the clarification of our intuitive concepts. Because our intuitive concepts are so muddled and not as clear-cut and useful as a stipulated definition such as the IAU’s definition for ‘planet’, I fail to see why clarifying our intuitive concepts is a good use of all that brain power. Such work might theoretically have some value for the psychology of concepts and for linguistics, and yet I suspect neither science would miss philosophy if philosophy went away. Indeed, scientific psychology is often said to have ‘debunked’ conceptual analysis because concepts are not processed in our brains in terms of necessary and sufficient conditions.
But I’m not sure I’m reading you correctly. Why do you think it’s useful to devote all that brainpower to clarifying our intuitive concepts of things?
I think that where we differ is on ‘intuitive concepts’ -what I would want to call just ‘concepts’. I don’t see that stipulative definitions replace them. Scenario (3), and even the IAU’s definition, illustrate this. It is coherent for an astronomer to argue that the IAU’s definition is mistaken. This implies that she has a more basic concept -which she would strive to make explicit in arguing her case- different than the IAU’s. For her to succeed in making her case -which is imaginable- people would have to agree with her, in which case we would have at least partially to share her concept. The IAU’s definition tries to make explicit our shared concept -and to some extent legislates, admittedly- but it is a different sort of animal than what we typically use in making judgements.
Philosophy doesn’t impact non-philosophical activities often, but when it does the impact is often quite big. Some examples: the influence of Mach on Einstein, of Rousseau and others on the French and American revolutions, Mill on the emancipation of women and freedom of speech, Adam Smith’s influence on economic thinking.
I consider though that the clarification is an end in itself. This site proves -what’s obvious anyway- that philosophical questions naturally have a grip on thinking people. People usually suppose the answer to any given philosophical question to be self-evident, but equally we typically disagree about what the obvious answer is. Philosophy is about elucidating those disagreements.
Keeping people busy with activities which don’t turn the planet into more non-biodegradable consumer durables is fine by me. More productivity would not necessarily be a good thing (...to end with a sweeping undefended assertion).
OTOH, there is a class of fallacies (the No True Scotsman argument, tendentious redefinition, etc.), which are based on getting stipulative definitions wrong. Getting them right means formalisation of intuition or common usage or something like that.
You are surely right that there is no point in arguing over definitions in at least one sense—esp the definition of “definition”. Your reply is reasonable and I continue to think that the hallmark of rationality is susceptibility to persuasion, but I am not won over yet. Hopefully the following will illuminate rather than obfuscate. Suppose
we have two people, Albert and Barry
we have one thing, a car, X, of determinate interior volume
we have one sentence, S: “X is a subcompact”.
Albert affirms S, Barry denies S.
Scenario (1): Albert and Barry agree on the standard definition of ‘subcompact’ - a car is a subcompact just in case 2 407 L < car volume < 2 803 L, but they disagree as to the volume of X. Clearly a factual disagreement.
Scenario (2): Albert and Barry agree on the volume of X, but disagree on the standard definition of ‘subcompact’ (a visit to Wikipedia would resolve the matter). This a disagreement about standard definitions, and isn’t anything people should engage in for long, I agree.
Scenario (3) Albert and Barry agree as to the volume of X and the standard definition, but Barry thinks the standard definition is misguided, and that if it were corrected, X wouldn’t be classified as subcompact -ie, X isn’t really subcompact, notwithstanding the received definition. This doesn’t have to be a silly position. It might be that if you graphed numbers of models of car against volume, using various different volume increments, you would find cars really do fall into natural -if vague- groups, and that the natural cutoff for subcompacts is different than the received definition. And this might really matter—a parking-challenged jurisdiction might offer a fee discount for subcompact owners. I would call this a disagreement about the concept of ‘subcompact car’. I understand you want to call this a disagreement about definitions, albeit of a different kind than in scenario (2).
Argument in scenarios 1 and 2 is futile—there is an acknowledged objective answer, and a way to get it—the way to resolve the matter is to measure or to look-up. Arguments as in scenario 3, though, can be useful -especially with less arbitrary concepts than in the example. The goal in such cases is to clarify -to rationalize- concepts. Even if you don’t arrive at an uncontroversial end point, you often learn a lot about the concepts (‘good’, knowledge’, ‘desires’, etc) in the process. Your example of the re-definition of ‘planet’ fits this model, I think.
This said, none of these scenarios represents a typical disagreement over a conceptual analysis. In such a debate, there typically is not a received, widely accepted analysis or strict definition, just as in meaning something by a word, we don’t typically have in mind some strict definition. On the contrary, typically, intuitions about what falls under the concept are agreed by almost everyone, one person sticks his neck out with proposed necessary and sufficient conditions meant to capture all and only the agreed instances, and then challengers work to contrive examples which often everyone agrees refute the analysis. This is how I see it, anyway. I would be interested to hear if this seems wrong.
You may think it’s obvious, but I don’t see you’ve shown any of these 3 examples is silly. I don’t see that Schroeder’s project is silly (I haven’t read Schroeder, admittedly). Insofar as rational agents are typically modelled merely in terms of their beliefs and desires, what desires are is important to our understanding of ourselves as rational. Testing a proposed analysis by seeking to contrive counter-examples -even far-fetched- helps illuminate the concept—helps us think about what a desire -and hence in part a rational agent- is.
As for Gettier, his paper, as I know you are aware, listed counter-examples to the analysis of knowledge as justified true belief. He contrived a series of cases in which people justifiedly believe true propositions, and yet -we intuitively agree- do not know them. The key point is that effectively everyone shares the intuition—that’s why the paper was so successful, and this is often how these debates go. Part of what’s interesting is precisely that although people do share quite subtle intuitions, the task of making them explicit—conceptual analysis—is elusive.
I objected to your example because I didn’t see how anyone could have an intuition base on what you said, whereas clear intuitions are key to such arguments. Now, it would definitely be a bad plan to take on the burden of defending all philosophical arguments—not all published arguments are top-drawer stuff (but can Cog Sci, eg, make this boast?). One target of much abuse is John Searle’s Chinese Room argument. His argument is multiply flawed, as far as I’m concerned -could get into that another time. But I still think it’s interesting, for what it reveals about differences in intuitions. There are quite different reactions from smart people.
This gets to the crux. We make different judgements, true, but in virtue of speaking the same language we must in an important sense mean the same thing by our words. The logic of communication requires that we take ourselves to be talking about the same thing in using language -whether that thing be goodness, knowledge, planethood or hockey pucks. Your point about the IAU and the definition of ‘planet’ demonstrates the same kind of process of clarification of a concept, albeit informed by empirical data. The point of the bashing is that is that it really does result in progress—we really do come to a better understand of things.
You are surely right that there is no point in arguing over definitions in at least one sense—esp the definition of “definition”. Your reply is reasonable and I continue to think that the hallmark of rationality is susceptibility to persuasion, but I am not won over yet. Hopefully the following will illuminate rather than obfuscate. Suppose
we have two people, Albert and Barry
we have one thing, a car, X, of determinate interior volume
we have one sentence, S: “X is a subcompact”.
Albert affirms S, Barry denies S.
Scenario (1): Albert and Barry agree on the standard definition of ‘subcompact’ - a car is a subcompact just in case 2 407 L < car volume < 2 803 L, but they disagree as to the volume of X. Clearly a factual disagreement.
Scenario (2): Albert and Barry agree on the volume of X, but disagree on the standard definition of ‘subcompact’ (a visit to Wikipedia would resolve the matter). This a disagreement about standard definitions, and isn’t anything people should engage in for long, I agree.
Scenario (3) Albert and Barry agree as to the volume of X and the standard definition, but Barry thinks the standard definition is misguided, and that if it were corrected, X wouldn’t be classified as subcompact -ie, X isn’t really subcompact, notwithstanding the received definition. This doesn’t have to be a silly position. It might be that if you graphed numbers of models of car against volume, using various different volume increments, you would find cars really do fall into natural -if vague- groups, and that the natural cutoff for subcompacts is different than the received definition. And this might really matter—a parking-challenged jurisdiction might offer a fee discount for subcompact owners. I would call this a disagreement about the concept of ‘subcompact car’. I understand you want to call this a disagreement about definitions, albeit of a different kind than in scenario (2).
Argument in scenarios 1 and 2 is futile—there is an acknowledged objective answer, and a way to get it—the way to resolve the matter is to measure or to look-up. Arguments as in scenario 3, though, can be useful -especially with less arbitrary concepts than in the example. The goal in such cases is to clarify -to rationalize- concepts. Even if you don’t arrive at an uncontroversial end point, you often learn a lot about the concepts (‘good’, knowledge’, ‘desires’, etc) in the process. Your example of the re-definition of ‘planet’ fits this model, I think.
This said, none of these scenarios represents a typical disagreement over a conceptual analysis. In such a debate, there typically is not a received, widely accepted analysis or strict definition, just as in meaning something by a word, we don’t typically have in mind some strict definition. On the contrary, typically, intuitions about what falls under the concept are agreed by almost everyone, one person sticks his neck out with proposed necessary and sufficient conditions meant to capture all and only the agreed instances, and then challengers work to contrive examples which often everyone agrees refute the analysis.
You may think it’s obvious, but I don’t see you’ve shown any of these 3 examples is silly. I don’t see that Schroeder’s project is silly (I haven’t read Schroeder, admittedly). Insofar as rational agents are typically modelled merely in terms of their beliefs and desires, what desires are is important to our understanding of ourselves as rational. Testing a proposed analysis by seeking to contrive counter-examples -even far-fetched- helps illuminate the concept—helps us think about what a desire -and hence in part a rational agent- is.
As for Gettier, his paper, as I know you are aware, listed counter-examples to the analysis of knowledge as justified true belief. He contrived a series of cases in which people justifiedly believe true propositions, and yet -we intuitively agree- do not know them. The key point is that effectively everyone shares the intuition—that’s why the paper was so successful, and this is often how these debates go. Part of what’s interesting is precisely that although people do share quite subtle intuitions, the task of making them explicit—conceptual analysis—is elusive.
I objected to your example because I didn’t see how anyone could have an intuition base on what you said, whereas clear intuitions are key to such arguments. Now, it would definitely be a bad plan to take on the burden of defending all philosophical arguments—not all published arguments are top-drawer stuff (but can Cog Sci, eg, make this boast?). One target of much abuse is John Searle’s Chinese Room argument. His argument is multiply flawed, as far as I’m concerned -could get into that another time. But I still think it’s interesting, for what it reveals about differences in intuitions. There are quite different reactions from smart people.
This gets to the crux. We make different judgements, true, but in virtue of speaking the same language we must in an important sense mean the same thing by our words. The logic of communication requires that we take ourselves to be talking about the same thing in using language -whether that thing be goodness, knowledge, planethood or hockey pucks. Your point about the IAU and the definition of ‘planet’ demonstrates the same kind of process of clarification of a concept, albeit informed by empirical data. The point of the bashing is that is that it really does result in progress—we really do come to a better understand of things.
You are surely right that there is no point in arguing over definitions in at least one sense—esp the definition of “definition”. Your reply is reasonable and I continue to think that the hallmark of rationality is susceptibility to persuasion, but I am not won over yet. Hopefully the following will illuminate rather than obfuscate. Suppose
we have two people, Albert and Barry
we have one thing, a car, X, of determinate interior volume
we have one sentence, S: “X is a subcompact”.
Albert affirms S, Barry denies S.
Scenario (1): Albert and Barry agree on the standard definition of ‘subcompact’ - a car is a subcompact just in case 2 407 L < car volume < 2 803 L, but they disagree as to the volume of X. Clearly a factual disagreement.
Scenario (2): Albert and Barry agree on the volume of X, but disagree on the standard definition of ‘subcompact’ (a visit to Wikipedia would resolve the matter). This a disagreement about standard definitions, and isn’t anything people should engage in for long, I agree.
Scenario (3) Albert and Barry agree as to the volume of X and the standard definition, but Barry thinks the standard definition is misguided, and that if it were corrected, X wouldn’t be classified as subcompact -ie, X isn’t really subcompact, notwithstanding the received definition. This doesn’t have to be a silly position. It might be that if you graphed numbers of models of car against volume, using various different volume increments, you would find cars really do fall into natural -if vague- groups, and that the natural cutoff for subcompacts is different than the received definition. And this might really matter—a parking-challenged jurisdiction might offer a fee discount for subcompact owners. I would call this a disagreement about the concept of ‘subcompact car’. I understand you want to call this a disagreement about definitions, albeit of a different kind than in scenario (2).
Argument in scenarios 1 and 2 is futile—there is an acknowledged objective answer, and a way to get it—the way to resolve the matter is to measure or to look-up. Arguments as in scenario 3, though, can be useful -especially with less arbitrary concepts than in the example. The goal in such cases is to clarify -to rationalize- concepts. Even if you don’t arrive at an uncontroversial end point, you often learn a lot about the concepts (‘good’, knowledge’, ‘desires’, etc) in the process. Your example of the re-definition of ‘planet’ fits this model, I think.
This said, none of these scenarios represents a typical disagreement over a conceptual analysis. In such a debate, there typically is not a received, widely accepted analysis or strict definition, just as in meaning something by a word, we don’t typically have in mind some strict definition. On the contrary, typically, intuitions about what falls under the concept are agreed by almost everyone, one person sticks his neck out with proposed necessary and sufficient conditions meant to capture all and only the agreed instances, and then challengers work to contrive examples which often everyone agrees refute the analysis.
You may think it’s obvious, but I don’t see you’ve shown any of these 3 examples is silly. I don’t see that Schroeder’s project is silly (I haven’t read Schroeder, admittedly). Insofar as rational agents are typically modelled merely in terms of their beliefs and desires, what desires are is important to our understanding of ourselves as rational. Testing a proposed analysis by seeking to contrive counter-examples -even far-fetched- helps illuminate the concept—helps us think about what a desire -and hence in part a rational agent- is.
As for Gettier, his paper, as I know you are aware, listed counter-examples to the analysis of knowledge as justified true belief. He contrived a series of cases in which people justifiedly believe true propositions, and yet -we intuitively agree- do not know them. The key point is that effectively everyone shares the intuition—that’s why the paper was so successful, and this is often how these debates go. Part of what’s interesting is precisely that although people do share quite subtle intuitions, the task of making them explicit—conceptual analysis—is elusive.
I objected to your example because I didn’t see how anyone could have an intuition base on what you said, whereas clear intuitions are key to such arguments. Now, it would definitely be a bad plan to take on the burden of defending all philosophical arguments—not all published arguments are top-drawer stuff (but can Cog Sci, eg, make this boast?). One target of much abuse is John Searle’s Chinese Room argument. His argument is multiply flawed, as far as I’m concerned -could get into that another time. But I still think it’s interesting, for what it reveals about differences in intuitions. There are quite different reactions from smart people.
This gets to the crux. We make different judgements, true, but in virtue of speaking the same language we must in an important sense mean the same thing by our words. The logic of communication requires that we take ourselves to be talking about the same thing in using language -whether that thing be goodness, knowledge, planethood or hockey pucks. Your point about the IAU and the definition of ‘planet’ demonstrates the same kind of process of clarification of a concept, albeit informed by empirical data. The point of the bashing is that is that it really does result in progress—we really do come to a better understand of things.
You are surely right that there is no point in arguing over definitions in at least one sense—esp the definition of “definition”. Your reply is reasonable and I continue to think that the hallmark of rationality is susceptibility to persuasion, but I am not won over yet. Hopefully the following will illuminate rather than obfuscate. Suppose
we have two people, Albert and Barry
we have one thing, a car, X, of determinate interior volume
we have one sentence, S: “X is a subcompact”.
Albert affirms S, Barry denies S.
Scenario (1): Albert and Barry agree on the standard definition of ‘subcompact’ - a car is a subcompact just in case 2 407 L < car volume < 2 803 L, but they disagree as to the volume of X. Clearly a factual disagreement.
Scenario (2): Albert and Barry agree on the volume of X, but disagree on the standard definition of ‘subcompact’ (a visit to Wikipedia would resolve the matter). This a disagreement about standard definitions, and isn’t anything people should engage in for long, I agree.
Scenario (3) Albert and Barry agree as to the volume of X and the standard definition, but Barry thinks the standard definition is misguided, and that if it were corrected, X wouldn’t be classified as subcompact -ie, X isn’t really subcompact, notwithstanding the received definition. This doesn’t have to be a silly position. It might be that if you graphed numbers of models of car against volume, using various different volume increments, you would find cars really do fall into natural -if vague- groups, and that the natural cutoff for subcompacts is different than the received definition. And this might really matter—a parking-challenged jurisdiction might offer a fee discount for subcompact owners. I would call this a disagreement about the concept of ‘subcompact car’. I understand you want to call this a disagreement about definitions, albeit of a different kind than in scenario (2).
Argument in scenarios 1 and 2 is futile—there is an acknowledged objective answer, and a way to get it—the way to resolve the matter is to measure or to look-up. Arguments as in scenario 3, though, can be useful -especially with less arbitrary concepts than in the example. The goal in such cases is to clarify -to rationalize- concepts. Even if you don’t arrive at an uncontroversial end point, you often learn a lot about the concepts (‘good’, knowledge’, ‘desires’, etc) in the process. Your example of the re-definition of ‘planet’ fits this model, I think.
This said, none of these scenarios represents a typical disagreement over a conceptual analysis. In such a debate, there typically is not a received, widely accepted analysis or strict definition, just as in meaning something by a word, we don’t typically have in mind some strict definition. On the contrary, typically, intuitions about what falls under the concept are agreed by almost everyone, one person sticks his neck out with proposed necessary and sufficient conditions meant to capture all and only the agreed instances, and then challengers work to contrive examples which often everyone agrees refute the analysis.
You may think it’s obvious, but I don’t see you’ve shown any of these 3 examples is silly. I don’t see that Schroeder’s project is silly (I haven’t read Schroeder, admittedly). Insofar as rational agents are typically modelled merely in terms of their beliefs and desires, what desires are is important to our understanding of ourselves as rational. Testing a proposed analysis by seeking to contrive counter-examples -even far-fetched- helps illuminate the concept—helps us think about what a desire -and hence in part a rational agent- is.
As for Gettier, his paper, as I know you are aware, listed counter-examples to the analysis of knowledge as justified true belief. He contrived a series of cases in which people justifiedly believe true propositions, and yet -we intuitively agree- do not know them. The key point is that effectively everyone shares the intuition—that’s why the paper was so successful, and this is often how these debates go. Part of what’s interesting is precisely that although people do share quite subtle intuitions, the task of making them explicit—conceptual analysis—is elusive.
I objected to your example because I didn’t see how anyone could have an intuition based on what you said, whereas clear intuitions are key to such arguments. Now, it would definitely be a bad plan to take on the burden of defending all philosophical arguments—not all published arguments are top-drawer stuff (but can Cog Sci, e.g., make this boast?). One target of much abuse is John Searle’s Chinese Room argument. His argument is multiply flawed, as far as I’m concerned; I could get into that another time. But I still think it’s interesting, for what it reveals about differences in intuitions. There are quite different reactions from smart people.
This gets to the crux. We make different judgements, true, but in virtue of speaking the same language we must in an important sense mean the same thing by our words. The logic of communication requires that we take ourselves to be talking about the same thing in using language, whether that thing be goodness, knowledge, planethood or hockey pucks. Your point about the IAU and the definition of ‘planet’ demonstrates the same kind of process of clarification of a concept, albeit informed by empirical data. The point of the bashing is that it really does result in progress—we really do come to a better understanding of things.
To point people to some additional references on conceptual analysis in philosophy: Audi’s (1983, p. 90) “rough characterization” of conceptual analysis is, I think, standard: “Let us simply construe it as an attempt to provide an illuminating set of necessary and sufficient conditions for the (correct) application of a concept.”
Or, Ramsey’s (1992) take on conceptual analysis: “philosophers propose and reject definitions for a given abstract concept by thinking hard about intuitive instances of the concept and trying to determine what their essential properties might be.”
Sandin (2006) gives an example:
This is precisely what Albert and Barry are doing with regard to ‘sound’.
Audi (1983). The Applications of Conceptual Analysis. Metaphilosophy, 14: 87-106.
Ramsey (1992). Prototypes and Conceptual Analysis. Topoi, 11: 59-70.
Sandin (2006). Has psychology debunked conceptual analysis? Metaphilosophy, 37: 26-33.
Eliezer does have a post in which he talks about doing what you call conceptual analysis more-or-less as you describe and why it’s worthwhile. Unfortunately, since that’s just one somewhat obscure post whereas he talks about tabooing words in many of his posts, when LWrongers encounter conceptual analysis, their cached thought is to say “taboo your words” and dismiss the whole analysis as useless.
The ‘taboo X’ reply does seem overused. It is something that is sometimes best to just ignore when you don’t think it aids in conveying the point you were making.
When I try that, I tend to get down-votes and replies complaining that I’m not responding to their arguments.
I don’t know the specific details of the instances in question. One thing I am sure about, however, is that people can’t downvote comments that you don’t make. Sometimes a thread is just a lost cause. Once things get polarized it often makes no difference at all what you say. Which is not to say I am always wise enough to steer clear of arguments. Merely that I am wise enough to notice when I do make that mistake. ;)
I do not think that he is describing conceptual analysis. Starting with a word vs. starting with a set of objects makes all the difference.
In the example he does start with a word, namely ‘art’, then uses our intuition to get a set of examples. This is more-or-less how conceptual analysis works.
But he’s not analyzing “art”, he’s analyzing the set of examples, and that is all the difference.
I disagree. Suppose after proposing a definition of art based on the listed examples, someone produced another example that clearly satisfied our intuitions of what constituted art but didn’t satisfy the definition. Would Eliezer:
a) say “sorry despite our intuitions that example isn’t art by definition”, or
b) conclude that the example was art and there was a problem with the definition?
I’m guessing (b).
He’s not trying to define art in accord with our collective intuitions; he’s trying to find the simplest boundary around a list of examples based on an individual’s intuitions.
I would argue that the list of examples in the article is abbreviated for simplicity. If there is no single clear simple boundary between the two sets, one can always ask for more examples. But one asks an individual and not all of humanity.
I would argue he’s trying to find the simplest coherent extrapolation of our intuitions.
Why do we even care about what specifically Eliezer Yudkowsky was trying to do in that post? Isn’t “is it more helpful to try to find the simplest boundary around a list or the simplest coherent explanation of intuitions?” a much better question?
Focus on what matters, work on actually solving problems instead of trying to just win arguments.
The answer to your question is “it depends on the situation”. There are some situations in which our intuitions contain some useful, hidden information which we can extract with this method. There are some situations in which our intuitions differ and it makes sense to consider a bunch of separate lists.
But, regardless, it is simply the case that when Eliezer says
“Perhaps you come to me with a long list of the things that you call ‘art’ and ‘not art’”
and
“It feels intuitive to me to draw this boundary, but I don’t know why—can you find me an intension that matches this extension? Can you give me a simple description of this boundary?”
he is not talking about “our intuitions”, but a single list provided by a single person.
(It is also the case that I would rather talk about that than whatever useless thing I would instead be doing with my time.)
Eliezer’s point in that post was that there are more and less natural ways to “carve reality at the joints.” That however much we might say that a definition is just a matter of preference, there are useful definitions and less useful ones. The conceptual analysis lukeprog is talking about does call for the rationalist taboo, in my opinion, but simply arguing about which definition is more useful as Eliezer does (if we limit conceptual analysis to that) does not.
This work is useful. Understanding how people conceptualize and categorize is the starting point for epistemology. If Wittgenstein hadn’t asked what qualified as a game, we might still be trying to define everything in terms of necessary and sufficient conditions.
I largely disagree, for these reasons.
Wasn’t the whole point of Wittgenstein’s observation that the question of whether something can be a vehicle without wheels is pretty much useless?
(I’ll reiterate some standard points, maybe someone will find them useful.)
The explicit connection you make between figuring out what is right and fixing people’s arguments for them is a step in the right direction. Acting in this way is basically the reason it’s useful to examine the physical reasons behind your own decisions or beliefs, even though such reasons don’t have any normative power (that your brain tends to act a certain way is not a very good argument for acting that way). Understanding these reasons can point you to a step where the reasoning algorithm was clearly incorrect and can be improved in a known way, thus giving you an improved reasoning algorithm that produces better decisions or beliefs (while the algorithm, both original and improved, remains normatively irrelevant and far from completely understood).
In other words, given that you have tools for making normative decisions that sometimes work, you should seek out as many opportunities for usefully applying them as you can find. If they don’t tell you what you should do, perhaps they can tell you how you should be thinking about what you should do. In particular, you should seek opportunities for applying them to their own operation, so that they start working better.
Of course, you’ll need tools for making normative decisions about the appropriate methods of improvement for a person’s reasoning, and here we hit a wall (on the way to a more rigorous method), because we typically only have our own intuitions to go on. Also, the way you’d like to improve other person’s reasoning can be different from the way that person would like their reasoning improved, which makes the ideas of “Alex-right” or “human-right” even more difficult to designate than just “right” (and perhaps much less useful).
I appreciate that this is a theoretical problem. Have you seen any evidence that this is or is not a problem in our particular world?
People tend to prefer “just being told the answer”, whereas forcing them to work through problem sets teaches them better.
~~~~~
People dislike articulating answers to rhetorical questions about what seems obvious, because doing so would force them to admit to being surprised by the eventual conclusion. Surprise can be emotionally uncomfortable, yet that discomfort is what embeds the conclusion in memory, and it forces them to face the fact that neighboring beliefs need updating in light of the surprising conclusion, precisely because it was a surprise to them.
The above sentence is steeped in my theory behind a phenomenon that you may have better competing theories for, that people dislike rhetorical questions. Note that other theories are obvious but not entirely competitive with mine.
META: I have divided my posts with tildes because what seemed in my own mind a minute ago to be two roughly equivalent answers to Nisan’s question has unraveled into different qualities of response on my part. This is surprising to me, and if there is anything to learn from it, I only found it out by trying my fingertips at typing an answer to the question. The tildes also represent that I empathize with anyone downvoting this comment, because everything below the tildes is too wordy and low quality; my first response (above the tildes) I think is really insightful.
META-META: I’ve been bemused by my inability to predict how others perceive my comments, but I’ve recently noticed a pattern: meta comments like this one are likely to get a uniformly positive or negative response. (I’m still typing it out and sticking out my neck [in the safety of pseudonymity], as they are often well received.) I’d appreciate advice on how I could or should have written this post differently for it to be better, if it is flawed as I suspect it is. One thing I am trying out for the first time is the META and META-META tags. Is there a better (or more standardized) way to do this?
The first sentence seems banal, the second interesting. I suspect this is like the take five minutes technique, you thought better because you thought longer. The second paragraph after the tildes seems unnecessary to me.
Thanks.
Upvoted for lucidity, but Empathetic Metaethics sounds more like the whole rest of LessWrong than metaethics specifically.
If there are supposed to be any additional connotations to Empathetic Metaethics it would make me very wary. I am wary of the connotation that I need someone to help me decide whether my feelings align with the Truth. I always assumed this site is called LessWrong because it generally tries to avoid driving readers to any particular conclusion, but simply away from misguided ones, so they can make their own decisions unencumbered by bias and confusion.
Austere-san may come off as a little callous, but Empathetic-san comes off as a meddler. I’d still rather just be a friendly Mr. Austere supplemented with other LW concepts, especially from the Human’s Guide to Words sequence. After all, if it is just confusion and bias getting in the way, all there is to do is to sweep those errors away. Any additional offer of “help” in deciding what it is “right” for me to feel would tingle my Spidey sense pretty hard.
We are trying to be ‘less wrong’ because human brains are so far from ideal at epistemology and at instrumental rationality (‘agency’). But it’s a standard LW perspective to assert that there is a territory, and some maps of (parts of) it are right and others are wrong. And since we are humans, it helps to retrain our emotions: “Relinquish the emotion which rests upon a mistaken belief, and seek to feel fully that emotion which fits the facts.”
I’d rather call this “self-help” than “meta-ethics.” Why self-help? Because...
...even if my emotions are “wrong,” why should I care? In this case, the answer can only be that it will help me derive more satisfaction out of life if I get it “right”, which seems to fall squarely under the purview of self-help.
Of course we can draw the lines between meta-ethics and self-help in various ways, but there is so much baggage in the label “ethics” that I’d prefer to get away from it as soon as possible.
As a larger point, separate from the context of lukeprog’s particular post:
What you assumed above will not always be possible. If models M0...Mn are all misguided, and M(n+1) isn’t, driving readers away from misguided models necessarily drives them to one particular conclusion, M(n+1).
I’m not sure what this means. Could you elaborate?
What I imagine you to mean seems similar to the sentiment expressed in the first comment to this blog post. That comment seems to me to be so horrifically misguided that I had a strong physiological response to reading it. Basically the commenter thought that since he doesn’t experience himself as following rules of formulating thoughts and sentences, he doesn’t follow them. This is a confusion of the map and territory that stuck in my memory for some reason, and your comment reminded me of it because you seem to be expressing a very strong faith in the accuracy of how things seem to you.
Feel free to just explain yourself without feeling obligated to read a random blog post or telling me how I am misreading you, which would be a side issue.
I think my response to lukeprog above answers this in a way, but it’s more just a question of what we mean by “help me decide.” I’m not against people helping me be less wrong about the actual content of the territory. I’m just against people helping me decide how to emotionally respond to it, provided we are both already not wrong about the territory itself.
If I am happy because I have plenty of food (in the map), but I actually don’t (in the territory), I’d certainly like to be informed of that. It’s just that I can handle the transition from happy to “oh shit!” all by myself, thank you very much.
In other words, my suspicion of anyone calling themselves an Empathetic Metaethicist is that they’re going to try to slide in their own approved brand of ethics through the back door. This is also a worry I have about CEV. Hopefully future posts will alleviate this concern.
If you mean that in service of my goal of satisfying my actual desires, there is more of a danger of being misled when getting input from others as to whether my emotions are a good match for reality than when getting input as to whether reality matches my perception of it, I tentatively agree.
If you mean that getting input from others as to whether my emotions are a good match for reality has a greater cost than benefit, I disagree assuming basic advice filters similar to those used when getting input as to whether reality matches my perception of it. As per above, there will all else equal be a lower expected payoff for me getting advice in this area, even though the advantages are similar.
If you mean that there is a fundamental difference in kind between matching perception to reality and emotions to perceptions that makes getting input an act that is beneficial in the former case and corrosive in the latter, I disagree.
I have low confidence regarding what emotions are most appropriate for various crises and non-crises, and suspect what I think of as ideal are at best local peaks with little chance of being optimal. In addition, what I think of as optimal emotional responses are likely to be too resistant to exceptions. E.g., if one is trapped in a mine shaft the emotional response suitable for typical cases of being trapped is likely to consume too much oxygen.
I’m generally open to ideas regarding what my emotions should be in different situations, and how I can act to change my emotions.
A lot of the issue with things like conceptual analysis, I think, is that people do them badly, and then others have to step in and waste even more words to correct them. If the worst three quarters of philosophers suddenly stopped philosophizing, the field would probably progress faster.
Agreed as literally stated, and also agree with your implication: this is especially true for philosophy in addition to other fields in which this is also true.
“other fields in which this is also true” is intentionally ambiguous, half implying that this is basically true for all other fields and half implying it’s only true for a small subset, as I’m undecided as to which is the case.
net negative productivity programmer
Those aren’t definitions of ‘morally good’. They are theories of the morally good. I seriously doubt that there are any real philosophers that are confused about the distinction.
Right, but part of each of these theories is that using one set of definitions for moral terms is better than using another set of definitions, often for reasons similar to the network-style conceptual analysis proposed by Jackson.
If you are saying that meta-ethical definitions can never be perfectly neutral wrt a choice between ethical theories, then I have to agree. Every ethical theory comes dressed in a flattering meta-ethical evening gown that reveals the nice stuff but craftily hides the ugly bits.
But that doesn’t mean that we shouldn’t at least strive for neutrality. Personally, I would prefer to have the definition of “morally good” include consequential goods, deontological goods, and virtue goods. If the correct moral theory can explain this trinity in terms of one fundamental kind of good, plus two derived goods, well that is great. But that work is part of normative ethics, not meta-ethics. And it certainly is not accomplished by imposing a definition.
I’m doing a better job of explaining myself over here.
All of those already include the pre-theoretic notion of “good”.
Correct. Which is why I think it is a mistake if they are not accounted for in the post-theoretic notion.
But then confusion about definitions is actually confusion about theories.
The idea that people by default have no idea at all what moral language means is hard to credit, whether claimed of people in general, or claimed by individuals of themselves. Everyone, after all, is brought up from an early age with a great deal of moral exhortation, to do Good things and refrain from Naughty things. Perhaps not everybody gets very far along the Kohlberg scale, but no one is starting from scratch. People may not be able to articulate a clear definition, or not the kind of definition one would expect from a theory, but that does not mean one needs a theory of metaethics to give a meaning to “moral”.
No. One only needs a theory of metaethics to prevent philosophers from giving it a disastrously wrong meaning.
exactly what I wanted to say!
Alternative hypothesis: it will teach good habits of thought that will allow people to recognise bad amateur philosophy.
It is unlikely that you will gain these “good habits of thought” allowing you to recognize “bad amateur philosophy” from reading mainstream philosophy when much of mainstream philosophy consists of what (I assume) you’re calling “bad amateur philosophy”.
No, much of it is bad professional philosophy. It’s like bad amateur philosophy except that students are forced to pretend it matters.
No. I’m calling the Sequences bad amateur philosophy.
If that’s the case, I’d like to hear your reasoning behind this statement.
A significant number of postings don’t argue towards a discernible point.
A significant number of postings don’t argue their point cogently.
Lack of awareness of standard counterarguments, and alternative theories.
Lack of appropriate response to objections.
None of this has anything to do with which answers are right or wrong. It is a form of the fallacy of grey to argue that since no philosophy comes up with definite answers, then it’s all equally a failure. Philosophy isn’t trying to be science, so it isn’t broken science.
A quick way of confirming this point might be to attempt to summarize the Less Wrong theory of ethics.
Particularly the ones written as dialogues. I share Massimo Pigliucci’s frustration.
3 and 4. There’s an example here. A poster makes a very pertinent objection to the main post. No one responds, and the main post is to this day bandied around as establishing the point. Things don’t work like that. If someone returns your serve, you’re supposed to hit back, not walk off the court and claim the prize.
A knowledge of philosophy doesn’t give you a basis of facts to build on, but it does load your brain with a network of argument and counterargument, and can prevent you wasting time by mounting elaborate defences of claims to which there are well-known objections.
It seems to me that there are two views of philosophy that are useful here: one of them I’ll term perspective, or a particular way of viewing the world, and the other one is comparative perspectives. That term is deliberately modeled after comparative religion because I think the analogy is useful; typically, one develops the practice of one’s own religion and the understanding of other religions.
It seems to me that the Sequences are a useful guide for crystallizing the ‘LW perspective’ in readers, but are not a useful guide for placing the ‘LW perspective’ in the history of perspectives. (For that, one’s better off turning to lukeprog, who has a formal education in philosophy.) Perhaps there are standard criticisms other perspectives make of this perspective, but whether or not that matters depends on whether you want to argue about this perspective or inhabit this perspective. If the latter, a criticism is not particularly interesting, but a patch is interesting.
That is to say, I think comparative perspectives (i.e. studying philosophy formally) has value, but it’s a narrow kind of value and like most things the labor involved should be specialized. I also think that the best guide to philosophy X for laymen and the best guide to philosophy X for philosophers will look different, and Eliezer’s choice to optimize for laymen was wise overall.
Most of the content in the sequences isn’t new as such, but it did draw from many different sources, most of which were largely confined to academia. In synthesis, the product is pretty original. To the best of my knowledge, the LessWrong perspective/community has antecedents but not an obvious historical counterpart.
In that light, I’d expect the catalyzing agent for such a perspective to be the least effective such agent that could successfully accomplish the task. (Or: to be randomly selected from the space of all possible effective agents, which is quite similar in practice.) We are the tool-users not because hominids are optimized for tool use, but because we were the first ones to do so with enough skill to experience a takeoff of civilization. So it’s pretty reasonable to expect the sequences to be a little wibbly.
To continue your religious metaphor, Paul wrote in atrocious Greek, had confusingly strong opinions about manbeds, and made it in to scripture because he was instrumental in building the early church communities. Augustine persuasively developed a coherent metaphysic for the religion that reconciled it with the mainstream Neoplatonism of the day, helping to clear the way for a transition from persecuted minority to dominant memeplex, but is considered a ‘doctor of the church’ rather than an author of scripture because he was operating within and refining a more established culture.
The sequences were demonstrably effective in crystallizing a community, but are probably a lot less effective in communicating outside that community. TAG’s objections may be especially relevant if LessWrong is to transition from a ‘creche’ online environment and engage in dialogue with cultural power brokers, a goal of the MIRI branch at a minimum.
I wish I had more than one upvote to give this comment; entirely agreed.
Thank you! The compliment works just as well.
…and it’s not too important what the community is crystallized around? Believing in things you can’t justify or explain is something that an atheist community can safely borrow from religion?
Of course it’s important. What gives you another impression?
It’s not clear to me where you’re getting this. To be clear, I think that the LW perspective has different definitions of “believe,” “justify,” and “explain” from traditional philosophy, but I don’t think that it gets its versions from religion. I also think that atheism is a consequence of LW’s epistemology, not a foundation of it. (As a side note, the parts of religion that don’t collapse when brought into a robust epistemology are solid enough to build on, and there’s little to be gained by turning your nose up at their source.)
In this particular conversation, the religion analogy is used primarily in a social and historical sense. People believe things; people communicate and coordinate on beliefs. How has that communication and coordination happened in the past, and what can we learn from that?
We can learn that “all for the cause, whatever it is” is a failure of rationality.
I think the LW perspective has the same definitions... but possibly different theories from the various theories of traditional philosophy. (It also looks like LW has a different definition of “definition”, which really confuses things.)
Religious epistemology—dogmatism plus vagueness—is just the problem.
Entirely agreed.
I don’t see the dogmatism you’re noticing—yes, Eliezer has strong opinions on issues I don’t think he should have strong opinions on, but those strong opinions are only weakly transmitted to others and you’ll find robust disagreement. Similarly, the vagueness I’ve noticed tends to be necessary vagueness, in the sense of “X is an open problem, but here’s my best guess at how X will be solved. You’ll notice that it’s fuzzy here, there, and there, which is why I think the problem is still open.”
So what actually is the LessWrongian theory of ethics?
And, assuming you don’t know....why are there people who believe it, for some value of believe?
In order to answer this question, I’m switching to the anthropology of moral belief and practice (as lukeprog puts it here).
I don’t think there’s a single agreed-upon theory. The OP is part of lukeprog’s sequence where he put forward a theory of meta-ethics he calls pluralistic moral reductionism, which he says here is not even an empathetic theory of meta-ethics, let alone applied ethics. Eliezer’s sequence on meta-ethics suffers from the flaw that it’s written ‘in character,’ and was not well-received. If you look at survey results, you see that the broadest statements we can make are things like “overall people here lean towards consequentialism.”
Ok. You can’t summarize it unambiguously either. So why do people believe it?
From Vaniver’s comment:
What “it” are you speaking of?
The lesswrongian theory of ethics, If you don’t believe there is such a singular entity, you couldn’t say so...I’m hardly going to disagree.
I doubt you’ll find anyone here seriously saying that we’ve found a definitive theory of metaethics. That is our eventual goal, yes, but right now, there are at best several competing theories. No absolutely correct theory has even been proposed, much less endorsed by the majority of LW. So the answer to your question (“Why do people believe it?”) is, as far as I can tell, “They don’t.” My question, however, is why you think this is something really bad, as opposed to something just slightly bad.
If you look upthread, you’ll see that what I think is really bad is advising people not to study mainstream philosophy.
I also think it bad to call philosophy diseased for not being able to solve problems you can’t solve either.
And it might be an idea to add a warning to the metaethics sequences: “Before reading these million words, please note that they don’t go anywhere”.
By “crystallising” do you mean clarifying, or defending?
Communicating the content of a claim is of limited use unless you can make it persuasive. That, in turn, requires defending it against alternatives. So the functions you are trying to separate are actually very interconnected.
(Another disanalogy between philosophy and religion is that philosophy is less holistic, working more at the claim level)
I mean clarifying. I use that term because some people look at the Sequences and say “but that’s all just common sense!”. In some ways it is, but in other ways a major contribution of the Sequences is to not just let people recognize that sort of common sense but reproduce it.
I understand that clarification and defense are closely linked, and am trying to separate intentionality more than I am methodology.
I consider ‘stoicism’ to be a ‘philosophy,’ but I notice that Stoics are not particularly interested in debating the finer points of abstractions, and might even consider doing so dangerous to their serenity relative to other activities. A particularly Stoic activity is negative visualization- the practice of imagining something precious being destroyed, to lessen one’s anxiety about its impermanence through deliberate acceptance, and to increase one’s appreciation of its continued existence.
One could see this as an unconnected claim put forth by Stoics that can be evaluated on its own merits (we could give a grant to a psychologist to test whether or not negative visualization actually works), but it seems to me that it is obvious that in the universe where negative visualization works, Stoics would notice and either copy the practice from its inventors or invent it themselves, because Stoicism is fundamentally about reducing anxiety and achieving serenity, and this seems amenable to a holistic characterization. (The psychologist might find that negative visualization works differently for Stoics than non-Stoics, and might actually only be a good idea for Stoics.)
Your example of “a philosophy” is pretty much a religion, by current standards. By philosophy I meant the sort of thing typified by current anglophone philosophy.
That may be the disjunction. Current anglophone philosophy is basically the construction of an abstract system of thought, valued for internal rigor and elegance but largely an intellectual exercise. Ancient Greek philosophies were eudaimonic- instrumental constructions designed to promote happiness. Their schools of thought, literal schools where one could go, were social communities oriented around that goal. The sequences are much more similar to the latter (‘rationalists win’ + meetups), although probably better phrased as utilitarian rather than eudaimonic. Yudkowsky and Sartre are basically not even playing the same game.
I’m delighted to hear that Clippy and Newcomb’s box are real-world, happiness-promoting issues!
Clippy is pretty speculative, but analogies to Newcomb’s problem come up in real-world decision-making all the time; it’s a dramatization of a certain class of problem arising from decision-making between agents with models of each other’s probable behavior (read: people that know each other), much like how the Prisoner’s Dilemma is a dramatization of a certain type of coordination problem. It doesn’t have to literally involve near-omniscient aliens handing out money in opaque boxes.
Does it? It seems to me that once Omega stops being omniscient and becomes, basically, your peer in the universe, there is no argument not to two-box in Newcomb’s problem.
Seems to me like you only transformed one side of the equation, so to speak. Real-life Newcomblike problems don’t involve Omega, but they also don’t (mainly) involve highly contrived thought-experiment-like choices regarding which we are not prepared to model each other.
That seems to me to expand the Newcomb’s Problem greatly—in particular, into the area where you know you’ll meet Omega and can prepare by modifying your internal state. I don’t want to argue definitions, but my understanding of the Newcomb’s Problem is much narrower. To quote Wikipedia,
and that’s clearly not the situation of Joe and Kate.
Perhaps, but it is my understanding that an agent who is programmed to avoid reflective inconsistency would find the two situations equivalent. Is there something I’m missing here?
I don’t know what “an agent who is programmed to avoid reflective inconsistency” would do. I am not one and I think no human is.
Reflective inconsistency isn’t that hard to grasp, though, even for a human. All it’s really saying is that a normatively rational agent should consider the questions “What should I do in this situation?” and “What would I want to pre-commit to do in this situation?” equivalent. If that’s the case, then there is no qualitative difference between Newcomb’s Problem and the situation regarding Joe and Kate, at least to a perfectly rational agent. I do agree with you that humans are not perfectly rational. However, don’t you agree that we should still try to be as rational as possible, given our hardware? If so, we should strive to fit our own behavior to the normative standard—and unless I’m misunderstanding something, that means avoiding reflective inconsistency.
I don’t consider them equivalent.
Fair enough. I’m not exactly qualified to talk about this sort of thing, but I’d still be interested to hear why you think the answers to these two ought to be different. (There’s no guarantee I’ll reply, though!)
Because reality operates in continuous time. In the time interval between now and the moment when I have to make a choice, new information might come in, things might change. Precommitment is loss of flexibility and while there are situations when you get benefits compensating for that loss, in the general case there is no reason to pre-commit.
Curiously, this particular claim is true only because Lumifer’s primary claim is false. An ideal CDT agent released at time T with the capability to self modify (or otherwise precommit) will as rapidly as possible (at T + e) make a general precommitment to the entire class of things that can be regretted in advance only for the purpose of influencing decisions made after (T + e) (but continue with two-boxing type thinking for the purpose of boxes filled before T + e).
Curiously enough, I made no claims about ideal CDT agents.
True. CDT is merely a steel-man of your position that you actively endorsed in order to claim prestigious affiliation.
The comparison is actually rather more generous than what I would have made myself. CDT has no arbitrary discontinuity between p=1 and p=(1-e), for example.
That said, the grandparent’s point applies just as well regardless of whether we consider CDT, EDT, the corrupted Lumifer variant of CDT or most other naive but not fundamentally insane decision algorithms. In the general case there is a damn good reason to make an abstract precommitment as soon as possible. UDT is an exception only because such precommitment would be redundant.
What, on your view, is the argument for not two-boxing with an omniscient Omega?
How does that argument change with a non-omniscient but skilled predictor?
If Omega is omniscient the two actions (one- and two-boxing) each have a certain outcome with the probability of 1. So you just pick the better outcome. If Omega is just a skilled predictor, there is no certain outcome so you two-box.
You are facing a modified version of Newcomb’s Problem, which is identical to standard Newcomb except that Omega now has 99% predictive accuracy instead of ~100%. Do you one-box or two-box?
Two-box. From my point of view it’s all or nothing (and it has to be not ~100%, but exactly 100%).
You get $1,000 with 99% probability and $1,001,000 with 1% probability, for a final expected value of $11,000. A one-boxer gets $1,000,000 with 99% probability and $0 with 1% probability, for a final expected value of $990,000. Even with probabilistic uncertainties, you would still have been comparatively better off one-boxing. And this isn’t just limited to high probabilities; theoretically any predictive power better than chance causes Newcomb-like situations.
In practice, this tends to go away with lower predictive accuracies because the relative rewards aren’t high enough to justify one-boxing. Nevertheless, I have little to no trouble believing that a skilled human predictor can reach accuracies of >80%, in which case these Newcomb-like tendencies are indeed present.
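Spelling out the arithmetic, on the assumption that Omega’s prediction matches your actual choice with probability 0.99 whichever way you choose:

```latex
\begin{aligned}
\mathrm{EV}(\text{two-box}) &= 0.99 \times \$1{,}000 + 0.01 \times \$1{,}001{,}000 = \$11{,}000 \\
\mathrm{EV}(\text{one-box}) &= 0.99 \times \$1{,}000{,}000 + 0.01 \times \$0 = \$990{,}000
\end{aligned}
```

So under that assumption the one-boxer comes out ahead by $979,000 in expectation.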
No, I don’t think so.
Let’s do things in temporal order.
Step 1: Omega makes a prediction and puts money into boxes.
What’s the prediction and what’s in the boxes?
Assuming you are a two-boxer, there is a 99% chance that there is nothing in Box B (and $1000 in Box A, as always), along with a 1% chance that Box B contains $1000000. If we’re going with the most likely scenario, there is nothing in Box B.
In the classic Newcomb’s Problem Omega moves first before I can do anything. Step 1 happens before I made any choices.
If Omega is a good predictor, he’ll predict my decision, but there is nothing I can do about it. I don’t make a choice to be a “two-boxer” or a “one-boxer”.
I can make a choice only after step 1, once the boxes are set up and unchangeable. And after step 1 everything is fixed so you should two-box.
This is true both for the 99% and 100% accurate predictor, isn’t it? Yet you say you one-box with the 100% one.
Please answer me this:
What does 99% accuracy mean to you exactly, in this scenario? If you know that Omega can predict you with 99% accuracy, what reality does this correspond to for you? What do you expect to happen different, compared to if he could predict you with, say, 50% accuracy (purely chance guesses)?
Actually, let’s make it more specific: suppose you do this same problem 1000 times, with a 99% Omega, what amount of money do you expect to end up with if you two-box? And what if you one-box?
The reason I am asking is that it appears to me like, the moment Omega stops being perfectly 100% accurate, you really stop believing he can predict you at all. It’s like, if you’re given a Newcomblike problem that involves “Omega can predict you with 99% accuracy”, you don’t actually accept this information (and are therefore solving a different problem).
It’s unsafe to guess at another’s thoughts, and I could be wrong. But I simply fail to see, based on the things you’ve said, how the “99% accuracy” information informs your model of the situation at all.
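For concreteness, here is a minimal simulation sketch of the 1000-trials version of the question, under the simple assumption that Omega’s prediction matches the participant’s actual choice 99% of the time, independently each round, with the standard $1,000 / $1,000,000 payoffs. This is a toy model of my own, not one anyone in the thread has committed to:

```python
# Toy model: 1000 rounds against a 99%-accurate predictor. "Accuracy" is taken
# to mean the prediction matches the actual choice with p = 0.99, independently
# each round; nothing here is anyone's official definition of Omega.
import random

def play(choice, rounds=1000, accuracy=0.99, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(rounds):
        other = "one-box" if choice == "two-box" else "two-box"
        predicted = choice if rng.random() < accuracy else other
        box_b = 1_000_000 if predicted == "one-box" else 0  # filled before the choice
        if choice == "one-box":
            total += box_b
        else:  # two-boxing always adds box A's $1,000
            total += box_b + 1_000
    return total

print("two-boxer total:", play("two-box"))  # about 1000 * 11,000  = $11,000,000
print("one-boxer total:", play("one-box"))  # about 1000 * 990,000 = $990,000,000
```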
Yes, because 100% is achievable only through magic. Omniscience makes Omega a god and you can’t trick an omniscient god.
That’s why there is a discontinuity between P=1 and P=1-e—we leave the normal world and enter the realm of magic.
In the frequentist framework this means that if you were to fork the universe and make 100 exact copies of it, in 99 copies Omega would be correct and in one of them he would be wrong.
In the Bayesian framework probabilities are degrees of belief and the local convention is to think of them as betting odds, so this means I should be indifferent which side to take of a 1 to 99 bet on the correctness of Omega’s decision.
The question is badly phrased because it ignores the temporal order and so causality.
If you become omniscient for a moment and pick 1000 people who are guaranteed to two-box and 1000 people who are guaranteed to one-box, the one-box people will, of course, get more money from a 99% Omega. But it’s not a matter of their choice, you picked them this way.
Not at all. I mentioned this before and I’ll repeat it again: there is no link between Omega’s prediction and the choice of a standard participant in the Newcomb’s Problem. The standard participant does not have any advance information about Omega with his boxes and so cannot pre-commit to anything. He only gets to do something after the boxes become immutable.
At the core, I think, the issue is of causality and I’m not comfortable with the acausal manoeuvres that LW is so fond of.
I asked what it means to you. Not sure why I got an explanation of bayesian vs frequentist probability.
You seem to believe precommitment is the only thing that makes your choice knowable to Omega in advance. But Omega got his track record of 99% accurate predictions somehow. Whatever algorithms are ultimately responsible for your choice, they—or rather their causal ancestors—exist in the world observed by Omega at the time he’s filling his boxes. Unless you believe in some kind of acausal choicemaking, you are just as “committed” if you’d never heard of Newcomb’s problem. However, from within the algorithms, you may not know what choice you’re bound to make until you’re done computing. Just as a deterministic chess playing program is still choosing a move, even if the choice in a given position is bound to be, say, Nf4-e6.
Indeed, your willingness (or lack thereof) to believe that, whatever the output of your thinking, Omega is 99% likely to have predicted it, is probably going to be a factor in Omega’s original decision.
To me personally? Pretty much nothing, an abstract exercise with numbers. As I said before (though the post was heavily downvoted and probably invisible by now), I don’t expect to meet Omega and his boxes in my future, so I don’t care much, certainly not enough to pre-commit.
Or are you asking what 1% probability means to me? I suspect I have a pretty conventional perception of it.
No, that’s not the issue. We are repeating the whole line of argument I went through with dxu and TheOtherDave during the last couple of days—see e.g. this and browse up and down this subthread. Keep in mind that some of my posts there were downvoted into invisibility so you may need to click on buttons to open parts of the subthread.
Sigh. I wasn’t asking if you care. I meant more something like this:
Feynman doesn’t believe the number, but this is what it means to him: if he were to take the number seriously, this is the reality he thinks it would correspond to. That’s what I meant when I asked “what does this number mean to you”. What reality the “99% accuracy” (hypothetically) translates to for you when you consider the problem. What work it’s doing in your model of it, never mind if it’s a toy model.
Suppose you—or if you prefer, any not-precommitted participant—faces Omega, who presents his already-filled boxes, and the participant chooses to either one-box or two-box. Does the 99% accuracy mean you expect to afterwards find that Omega predicted that choice in 99 out of 100 cases on average? If so, can you draw up expected values for either choice? If not, how else do you understand that number?
OK, I re-read it and I think I see it.
I think the issue lies in this “after” word. If determinism, then you don’t get to first have a knowable-to-Omega disposition to either one-box or two-box, and then magically make an independent choice after Omega fills the boxes. The choice was already unavoidably part of the Universe before Stage 1, in the form of its causal ancestors, which are evidence for Omega to pick up to make his 99% accurate prediction. (So the choice affected Omega just fine, which is why I am not very fond of the word “acausal”). The standard intuition that places decisionmaking in some sort of a causal void removed from the rest of the Universe doesn’t work too well when predictability is involved.
Yep, that’s another way to look at causality issue. I asked upthread if the correctness of the one-boxing “solution” implies lack of free will and, in fact, depends on the lack of free will. I did not get a satisfying answer (instead I got condescending nonsense about my corrupt variation of an ideal CDT agent).
If “the choice was already unavoidably part of the Universe before Stage” then it is not a “choice” as I understand the word. In this case the whole problem disappears since if the choice to one-box or two-box is predetermined, what are we talking about, anyway?
As is often the case, Christianity already had to deal with this philosophical issue of a predetermined choice—see e.g. Calvinism.
Still wouldn’t mind getting a proper answer to my question...
And well, yeah, if you believe in a nondeterministic, acausal free will, then we may have an unbridgeable disagreement. But even then… suppose we put the issue of determinism and free will completely aside for now. Blackbox it.
Imagine—put on your “take it seriously” glasses for a moment if I can have your indulgence—that a sufficiently advanced alien actually comes to Earth and in many, many trials establishes a 99% track record of predicting people’s n-boxing choices (to keep it simple, it’s 99% for one-boxers and also 99% for two-boxers).
Imagine also that, for whatever reason, you didn’t precommit (maybe sufficiently reliable precommitment mechanisms are costly and inconvenient, inflation ate into the value of the prize, and the chance of being chosen by Omega for participation is tiny. Or just akrasia, I don’t care). And then you get chosen for participation and accept (hey, free money).
What now? Do you have a 99% expectation that, after your choice, Omega will have predicted it correctly? Does that let you calculate expected values? If so, what are they? If not, in what way are you different from the historical participants who amounted to the 99% track record Omega’s built so far (= participants already known to have found themselves predicted 99% of the time)?
Or are you saying that an Omega like that can’t exist in the first place. In which case how is that different—other than in degree—from whenever humans predict other humans with better than chance accuracy?
But let me ask that question again, then. Does the correctness of one-boxing require determinism, aka lack of free will?
Let’s get a bit more precise here. There are two ways you can use this term. One is with respect to the future, to express the probability of something that hasn’t happened yet. The other is with respect to lack of knowledge, to express that something already happened, but you just don’t know what it is.
The meanings conveyed by these two ways are very different. In particular, when looking at Omega’s two boxes, there is no “expected value” in the first sense. Whatever happened already happened. The true state of nature is that one distribution of money between the boxes has the probability of 1 -- it happened—and the other distribution has the probability of 0 -- it did not happen. I don’t know which one of them happened, so people talk about expected values in the sense of uncertainty of their beliefs, but that’s quite a different thing.
So after Stage 1 in reality there are no expected values of the content of the boxes—the boxes are already set and immutable. It’s only my knowledge that’s lacking. And in this particular setting it so happens that I can make my knowledge not matter at all—by taking both boxes.
Your approach also seems to have the following problem. Essentially, Omega views all people as divided into two classes: one-boxers and two-boxers. If belonging to such a class is unchangeable (see predestination), the problem disappears, since you can do nothing about it. However, if you can change which class you belong to (e.g. before the game starts), you can change it after Stage 1 as well. So the optimal solution looks to be to get yourself into the one-boxing class before the game, but then, once Stage 1 happens, switch to the two-boxing class. And if you can’t pull off this trick, well, why do you think you can change classes at all?
I don’t think so, which is the gist of my last post—I think all it requires is taking Omega’s track record seriously. I suppose this means I prefer EDT to CDT—it seems insane to me to ignore evidence, past performance showing that 99% of everyone who’s two-boxed so far got out with much less money.
No more than a typical coin is either a header or a tailer. Omega can simply predict with high accuracy if it’s gonna be heads or tails on the next, specific occasion… or if it’s gonna be one or two boxes, already accounting for any tricks. Imagine you have a tell, like in poker, at least when facing someone as observant as Omega.
All right, I’m done here. Trying to get a direct answer to my question stopped feeling worthwhile.
The fact that you think these things are the same thing is the problem. Determinism does not imply lack of “choice”, not in any sense that matters.
To be absolutely clear:
No, one-boxing does not require lack of free will.
But it should also be obvious that for omega to predict you requires you to be predictable. Determinism provides this for the 100% accurate case. This is not any kind of contradiction.
No “changing” is required. You can’t “change” the future any more than you can “change” the past. You simply determine it. Whichever choice you decide to make is the choice you were always going to make, and determines the class you are, and always were in.
Yes, I understand that. I call that lack of choice and absence of free will. Your terminology may differ.
Just so I’m clear: when you call that a lack of choice, do you mean to distinguish it from anything? That is, is there anything in the world you would call the presence of choice? Does the word “choice,” for you, have a real referent?
Sure. I walk into an ice cream parlour; which flavour am I going to choose? Can you predict? Can anyone predict with complete certainty? If not, I’ll make a choice.
This definition of choice is empty. If I can’t predict which flavour you will buy based on knowing what flavours you like or what you want, you aren’t choosing in any meaningful sense at all. You’re just arbitrarily, capriciously, picking a flavour at random. Your “choice” doesn’t even contribute to your own benefit.
You keep thinking that and I’ll enjoy the delicious ice cream that I chose.
If it’s delicious, then any observer who knows what you consider delicious could have predicted what you chose. (Unless there are a few flavours that you deem exactly equally delicious, in which case it makes no difference, and you are choosing at random between them.)
Oh, no, it does make a difference, for my flavour preferences are not stable and depend on a variety of things like my mood, the season, the last food I ate, etc., etc.
And all of those things are known by a sufficiently informed observer...
Show me one.
No need. It only needs to be possible for “all of those things are known by a sufficiently informed observer” to be true!
So how do you know what’s possible? Do you have data, by any chance? Pray tell!
Are you going to assert that your preferences are stored outside your brain, beyond the reach of causality? Perhaps in some kind of platonic realm?
Mood—check, that shows up in facial expressions, at least.
Season—check, all you have to do is look out the window, or look at the calendar.
Last food you ate—check, I can follow you around for a day, or just scan your stomach.
This line of argument really seems futile. Is it so hard to believe that your mind is made of parts, just like everything else in the universe?
So, show me.
OK. Thanks for clarifying.
So then just decide to one-box. You aren’t something outside of physics; you are part of physics and your decision is as much a part of physics as anything else. Your decision to one-box or two-box is determined by physics, true, but that’s not an excuse for not choosing! That’s like saying, “The future is already set in stone; if I get hit by a car in the street, that’s what was always going to happen. Therefore I’m going to stop looking both ways when I cross the street. After all, if I get hit, that’s what physics said was going to happen, right?”
Errr… can I? nshepperd says that whichever choice I decide to make is the choice I was always going to make, so I don’t see anything I can do. Predestination is a bitch.
It’s not an excuse, it’s a reason. Que sera, sera—what will be will be. I don’t understand what that “choosing” you speak of is :-/
Yes, that’s what you are telling me. It’s just physics, right?
Um, not my decision again. It was predetermined whether I would look both ways or not.
Choosing is deliberation, deliberation is choosing. Just consider the alternatives (one-box, two-box) and do the one that results in you having more money.
The keyword here is decide. Just because you were always going to make that choice doesn’t mean you didn’t decide. You weighed up the costs and benefits of each option, didn’t you?
It really isn’t hard. Just think about it, then take one box.
Clearly that’s two-boxing. Omega has already made his choice, so if he thought I’d two-box, I’ll get:
- One box: nothing
- Two boxes: the small reward
If Omega thought I’d one-box:
- One box: the big reward
- Two boxes: the big reward + the small reward
Two-boxing results in more money no matter what Omega thought I’d choose.
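For concreteness, a minimal sketch of the payoff comparison being made above, using the standard $1,000,000 / $1,000 amounts (an assumption on my part, since the comment only says “big” and “small” reward). It restates the dominance claim, which the replies below argue is missing the expected-value point:

# Dominance check: for either fixed state of box B, two-boxing pays $1000 more.
# The dollar amounts are the standard Newcomb figures, assumed for illustration.
payoff = {
    ("empty", "one-box"): 0,
    ("empty", "two-box"): 1_000,
    ("full", "one-box"): 1_000_000,
    ("full", "two-box"): 1_001_000,
}

for state in ("empty", "full"):
    assert payoff[(state, "two-box")] == payoff[(state, "one-box")] + 1_000
print("two-boxing dominates for any fixed box contents")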
Missing the Point: now a major motion picture.
Is that the drumbeat of nshepperd’s head against the desk that I hear..? :-D
What if I try to predict what Omega does, and do the opposite?
That would mean that either 1) there are some strategies I am incapable of executing, or 2) Omega can’t in principle predict what I do, since it is indirectly predicting itself.
Alternatively, what if instead of me trying to predict Omega, we run this with transparent boxes and I base my decision on what I see in the boxes, doing the opposite of what Omega predicted? Again, Omega is indirectly predicting itself.
I don’t see how this is relevant, but yes, in principle it’s impossible to predict the universe perfectly, because the universe plus your brain is bigger than your brain alone. Although, if you live in a bubble universe that is bigger than the rest of the universe, whose interaction with the rest of the universe is limited precisely to your chosen manipulation of the connecting bridge; basically, if you are AIXI, then you may be able to perfectly predict the universe conditional on your actions.
This has pretty much no impact on actual Newcomb’s though, since we can just define such problems away by making Omega do the obvious thing to prevent such shenanigans (“trolls get no money”). For the purpose of the thought experiment, action-conditional predictions are fine.
IOW, this is not a problem with Newcomb’s. By the way, this has been discussed previously.
You’ve now destroyed the usefulness of Newcomb as a potentially interesting analogy to the real world. In real world games, my opponent is trying to infer my strategy and I’m trying to infer theirs.
If Newcomb is only about a weird world where Omega can try to predict the player’s actions, but the player is not allowed to predict Omega’s, then it’s sort of a silly problem. It’s lost most of its generality because you’ve explicitly disallowed the majority of strategies.
If you allow the player to pursue his own strategy, then it’s still a silly problem, because the question ends up being inconsistent (because if Omega plays Omega, nothing can happen).
In real world games, we spend most our time trying to make action-conditional predictions. “If I play Foo, then my opponent will play Bar”. There’s no attempting to circularly predict yourself with unconditional predictions. The sensible formulation of Newcomb’s matches that.
(For example, transparent boxes: Omega predicts “if I fill both boxes, then player will ___” and fills the boxes based on that prediction. Or a few other variations on that.)
In many (probably most?) games we consider the opponent’s strategy, not simply their next move. Making moves in an attempt to confuse your opponent’s estimation of your own strategy is a common tactic in many games.
Your “modified Newcomb” doesn’t allow the chooser to have a strategy: they aren’t allowed to say “if I predict Omega did X, I’ll do Y.” It’s a weird sort of game where my opponent takes my strategy into account, but something keeps me from considering my opponent’s.
Can’t Omega follow the strategy of ‘Trolls get no money,’ which by assumption is worse for you? I feel like this would result in some false positives, but perhaps not—and the scenario says nothing about the people who don’t get to play in any case.
No, because that’s fighting the hypothetical. Assume that he doesn’t do that.
It is actually approximately the opposite of fighting the hypothetical. It is managing the people who are trying to fight the hypothetical. Precise wording of the details of the specification can be used to preempt such replies, but for casual definitions that assume good faith, sometimes explicit clauses for the distracting edge cases need to be added.
It is fighting the hypothetical because you are not the only one providing hypotheticals. I am too; I’m providing a hypothetical where the player’s strategy makes this the least convenient possible world for people who claim that having such an Omega is a self-consistent concept. Saying “no, you can’t use that strategy” is fighting the hypothetical.
Moreover, the strategy “pick the opposite of what I predict Omega does” is a member of a class of strategies that have the same problem; it’s just an example of such a strategy that is particularly clear-cut, and the fact that it is clear-cut and blatantly demonstrates the problem with the scenario is the very aspect that leads you to call it trolling Omega. “You can’t troll Omega” becomes equivalent to “you can’t pick a strategy that makes the flaw in the scenario too obvious”.
If your goal is to show that Omega is “impossible” or “inconsistent”, then having Omega adopt the strategy “leave both boxes empty for people who try to predict me / do any other funny stuff” is a perfectly legitimate counterargument. It shows that Omega is in fact consistent if he adopts such a strategy. You have no right to just ignore that counterargument.
Indeed, Omega requires a strategy for when he finds that you are too hard to predict. The only reason such a strategy is not provided beforehand in the default problem description is because we are not (in the context of developing decision theory) talking about situations where you are powerful enough to predict Omega, so such a specification would be redundant. The assumption, for the purpose of illuminating problems with classical decision theory, is that Omega has vastly more computational resources than you do, so that the difficult decision tree that presents the problem will obtain.
By the way, it is extremely normal for there to be strategies you are “incapable of executing”. For example, I am currently unable to execute the strategy “predict what you will say next, and counter it first”, because I can’t predict you. Computation is a resource like any other.
If you are suggesting that Omega read my mind and think “does this human intend to outsmart me, Omega”, then sure he can do that. But that only takes care of the specific version of the strategy where the player has conscious intent.
If you’re suggesting “Omega figures out whether my strategy is functionally equivalent to trying to outsmart me”, you’re basically claiming that Omega can solve the halting problem by analyzing the situation to determine if it’s an instance of the halting problem, and outputting an appropriate answer if that is the case. That doesn’t work.
That still requires that he determine that I am too hard to predict, which either means solving the halting problem or running on a timer. Running on a timer is a legitimate answer, except again it means that there are some strategies I cannot execute.
I thought the assumption is that I am a perfect reasoner and can execute any strategy.
Why would this be the assumption?
There’s your answer.
I don’t see how Omega running his simulation on a timer makes any difference for this, but either way this is normal and expected. Problem resolved.
Not at all. Though it may be convenient to postulate arbitrarily large computing power (as long as Omega’s power is increased to match) so that we can consider brute force algorithms instead of having to also worry about how to make it efficient.
(Actually, if you look at the decision tree for Newcomb’s, the intended options for your strategy are clearly supposed to be “unconditionally one-box” and “unconditionally two-box”, with potentially a mixed strategy allowed. Which is why you are provided with no information whatsoever that would allow you to predict Omega. And indeed the decision tree explicitly states that your state of knowledge is identical whether Omega fills or doesn’t fill the box.)
It’s me who has to run on a timer. If I am only permitted to execute 1000 instructions to decide what my answer is, I may not be able to simulate Omega.
Yes, I am assuming that I am capable of executing arbitrarily many instructions when computing my strategy.
I know what problem Omega is trying to solve. If I am a perfect reasoner, and I know that Omega is, I should be able to predict Omega without actually having knowledge of Omega’s internals.
Deciding which branch of the decision tree to pick is something I do using a process that has, as a step, simulating Omega. It is tempting to say “it doesn’t matter what process you use to choose a branch of the decision tree, each branch has a value that can be compared independently of why you chose the branch”, but that’s not correct. In the original problem, if I just compare the branches without considering Omega’s predictions, I should always two-box. If I consider Omega’s predictions, that cuts off some branches in a way which changes the relative ranking of the choices. If I consider my predictions of Omega’s predictions, that cuts off more branches, in a way which prevents the choices from even having a ranking.
But apparently you want to ignore the part where I said Omega has to have his own computing power increased to match. The fact that Omega is vastly more intelligent and computationally powerful than you is a fundamental premise of the problem. This is what stops you from magically “predicting him”.
Look, in Newcomb’s problem you are not supposed to be a “perfect reasoner” with infinite computing time or whatever. You are just a human. Omega is the superintelligence. So, any argument you make that is premised on being a perfect reasoner is automatically irrelevant and inapplicable. Do you have a point that is not based on this misunderstanding of the thought experiment? What is your point, even?
It’s already arbitrarily large. You want that expanded to match arbitrarily large?
Asking “which box should you pick” implies that you can follow a chain of reasoning which outputs an answer about which box to pick.
My decision making strategy is “figure out what Omega did and do the opposite”. It only fails to produce a useful result if Omega fails to produce a useful result (perhaps by trying to predict me and not halting). And Omega goes first, so we never get to the point where I try my decision strategy and don’t halt.
(And if you’re going to respond with “then Omega knows in advance that your decision strategy doesn’t halt”, how’s he going to know that?)
Furthermore, there’s always the transparent boxes situation. Instead of explicitly simulating Omega, I implicitly simulate Omega by looking in the transparent boxes and determining what Omega’s choice was.
That Omega cannot be a perfect predictor because being one no matter what strategy the human uses would imply being able to solve the halting problem.
When I say “arbitrarily large” I do not mean infinite. You have some fixed computing power, X (which you can interpret as “memory size” or “number of computations you can do before the sun explodes the next day” or whatever). The premise of newcomb’s is that Omega has some fixed computing power Q * X, where Q is really really extremely large. You can increase X as much as you like, as long as Omega is still Q times smarter.
Which does not even remotely imply being a perfect reasoner. An ordinary human is capable of doing this just fine.
Two points: First, if Omega’s memory is Q times larger than yours, you can’t fit a simulation of him in your head, so predicting by simulation is not going to work. Second, if Omega has Q times as much computing time as you, you can try to predict him (by any method) for X steps, at which point the sun explodes. Naturally, Omega simulates you for X steps, notices that you didn’t give a result before the sun explodes, so leaves both boxes empty and flies away to safety.
Only under the artificial irrelevant-to-the-thought-experiment conditions that require him to care whether you’ll one-box or two-box after standing in front of the boxes for millions of years thinking about it. Whether or not the sun explodes, or Omega himself imposes a time limit, a realistic Omega only simulates for X steps, then stops. No halting-problem-solving involved.
In other words, if “Omega isn’t a perfect predictor” means that he can’t simulate a physical system for an infinite number of steps in finite time, then I agree but don’t give a shit. Such a thing is entirely unnecessary. In the thought experiment, if you are a human, you die of aging after less than 100 years. And any strategy that involves you thinking in front of the boxes until you die of aging (or starvation, for that matter) is clearly flawed anyway.
This example is less stupid since it is not based on trying to circularly predict yourself. But in this case Omega just makes action-conditional predictions and fills the boxes however he likes.
It sounds like your decision making strategy fails to produce a useful result. That is unfortunate for anyone who happens to attempt to employ it. You might consider changing it to something that works.
“Ha! What if I don’t choose One box OR Two boxes! I can choose No Boxes out of indecision instead!” isn’t a particularly useful objection.
No, Nshepperd is right. Omega imposing computation limits on itself solves the problem (such as it is). You can waste as much time as you like. Omega is gone and so doesn’t care whether you pick any boxes before the end of time. This is a standard solution for considering cooperation between bounded rational agents with shared source code.
When attempting to achieve mutual cooperation (essentially what Newcomblike problems are all about) making yourself difficult to analyse only helps against terribly naive intelligences. ie. It’s a solved problem and essentially useless for all serious decision theory discussion about cooperation problems.
This contradicts the accuracy stated at the beginning. Omega can’t leave both boxes empty for people who try to adopt a mixed strategy AND also maintain his 99.whatever accuracy on one-boxers.
And even if Omega has way more computational power than I do, I can still generate a random number. I can flip a coin that’s 60/40 one-box, two-box. The most accurate Omega can be, then, is to assume I one-box.
He can maintain his 99% accuracy on deterministic one-boxers, which is all that matters for the hypothetical.
Alternatively, if we want to explicitly include mixed strategies as an available option, the general answer is that Omega fills the box with probability = the probability that your mixed strategy one-boxes.
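As a rough illustration of that rule, here is a minimal sketch; the independence of Omega’s coin from the player’s coin and the dollar amounts are assumptions of the sketch, not part of the problem statement. It suggests that, under this rule, randomising never beats deterministic one-boxing:

# Sketch: Omega fills box B with the same probability q with which the player's
# mixed strategy one-boxes, independently of how the player's own coin lands.
def expected_value(q):
    box_b = q * 1_000_000      # expected contents of box B under this rule
    keep_a = (1 - q) * 1_000   # box A's $1000 is kept only when two-boxing
    return box_b + keep_a

print(expected_value(0.6))  # about 600400: the 60/40 coin proposed above
print(expected_value(1.0))  # about 1000000: deterministic one-boxing does best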
All of this is very true, and I agree with it wholeheartedly. However, I think Jiro’s second scenario is more interesting, because then predicting Omega is not needed; you can see what Omega’s prediction was just by looking in (the now transparent) Box B.
As I argued in this comment, however, the scenario as it currently stands is not well-specified; we need some idea of what sort of rule Omega is using to fill the boxes based on his prediction. I have not yet come up with a rule that would allow Omega to be consistent in such a scenario, though, and I’m not sure if consistency in this situation would even be possible for Omega. Any comments?
Previous discussions of Transparent Newcomb’s problem have been well specified. I seem to recall doing so in footnotes so as to avoid distraction.
The problem (such as it is) is that there is ambiguity between the possible coherent specifications, not a complete lack. As your comment points out there are (merely) two possible situations for the player to be in and Omega is able to counter-factually predict the response to either of them, with said responses limited to a boolean. That’s not a lot of permutations. You could specify all 4 exhaustively if you are lazy.
IF (Two box when empty AND One box when full) THEN X
IF …
Any difficulty here is in choosing the set of rewards that most usefully illustrate the interesting aspects of the problem.
I’d say that about hits the nail on the head. The permutations certainly are exhaustively specifiable. The problem is that I’m not sure how to specify some of the branches. Here’s all four possibilities (written in pseudo-code following your example):
IF (Two box when empty And Two box when full) THEN X
IF (One box when empty And One box when full) THEN X
IF (Two box when empty And One box when full) THEN X
IF (One box when empty And Two box when full) THEN X
The rewards for 1 and 2 seem obvious; I’m having trouble, however, imagining what the rewards for 3 and 4 should be. The original Newcomb’s Problem had a simple point to demonstrate, namely that logical connections should be respected along with causal connections. This point was made simple by the fact that there’s two choices, but only one situation. When discussing transparent Newcomb, though, it’s hard to see how this point maps to the latter two situations in a useful and/or interesting way.
Option 3 is of the most interest to me when discussing the Transparent variant. Many otherwise adamant One Boxers will advocate (what is in effect) 3 when first encountering the question. Since I advocate strategy 2 there is a more interesting theoretical disagreement. ie. From my perspective I get to argue with (literally) less-wrong wrong people, with a correspondingly higher chance that I’m the one who is confused.
The difference between 2 and 3 becomes more obviously relevant when noise is introduced (eg. 99% accuracy Omega). I choose to take literally nothing in some situations. Some think that is crazy...
In the simplest formulation the payoff for 3 is undetermined. But not undetermined in the sense that Omega’s proposal is made incoherent; arbitrary, rather, in the sense that Omega can do whatever the heck it wants and still construct a coherent narrative. I’d personally call that an obviously worse decision, but for simplicity prefer to define 3 as a defect (Big Box Empty outcome).
As for 4… A payoff of both boxes empty (or both boxes full but contaminated with anthrax spores) seems fitting. But simply leaving the large box empty is sufficient for decision theoretic purposes.
Out of interest, and because your other comments on the subject seem well informed, what do you choose when you encounter Transparent Newcomb and find the big box empty?
This is a question that I find confusing due to conflicting intuitions. Fortunately, since I endorse reflective consistency, I can replace that question with the following one, which is equivalent in my decision framework, and which I find significantly less confusing:
“What would you want to precommit to doing, if you encountered transparent Newcomb and found the big box (a.k.a. Box B) empty?”
My answer to this question would depend on Omega’s rule for rewarding players. If Omega only fills Box B if the player employs the strategy outlined in 2, then I would want to precommit to unconditional one-boxing—and since I would want to precommit to doing so, I would in fact do so. If Omega is willing to reward the player by filling Box B even if the player employs the strategy outlined in 3, then I would see nothing wrong with two-boxing, since I would have wanted to precommit to that strategy in advance. Personally, I find the former scenario (the one where Omega only rewards people who employ strategy 2) to be more in line with the original Newcomb’s Problem, for some intuitive reason that I can’t quite articulate.
What’s interesting, though, is that some people two-box even upon hearing that Omega only rewards the strategy outlined in 2, i.e. upon hearing that they are in the first scenario described in the above paragraph. I would imagine that their reasoning process goes something like this: “Omega has left Box B empty. Therefore he has predicted that I’m going to two-box. It is extremely unlikely a priori that Omega is wrong in his predictions, and besides, I stand to gain nothing from one-boxing now. Therefore, I should two-box, both because it nets me more money and because Omega predicted that I would do so.”
I disagree with this line of reasoning, however, because it is very similar to the line of reasoning that leads to self-fulfilling prophecies. As a rule, I don’t do things just because somebody said I would do them, even if that somebody has a reputation for being extremely accurate, because then that becomes the only reason it happened in the first place. As with most situations involving acausal reasoning, however, I can only place so much confidence in me being correct, as opposed to me being so confused I don’t even realize I’m wrong.
It would seem to me that Omega’s actions would be as follows:
IF (Two box when empty And Two box when full) THEN Empty
IF (One box when empty And One box when full) THEN Full
IF (Two box when empty And One box when full) THEN Empty or Full
IF (One box when empty And Two box when full) THEN Refuse to present boxes
Cases 1 and 2 are straightforward. Case 3 works for the problem, no matter which set of boxes Omega chooses to leave.
In order for Omega to maintain its high prediction accuracy, though, it is necessary—if Omega predicts that a given player will choose option 4 - that Omega simply refuse to present the transparent boxes to this player. Or, at least, that the number of players who follow the other three options should vastly outnumber the fourth-option players.
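A sketch of that filling rule written out as a procedure; the names, and the choice to treat case 3 as a free choice for Omega, are illustrative rather than a canonical specification:

# Inputs are the player's predicted counterfactual responses: what the player
# would take on seeing box B empty, and on seeing it full ("one" or "two").
def omega_fills(if_empty, if_full):
    if if_empty == "two" and if_full == "two":
        return "leave box B empty"
    if if_empty == "one" and if_full == "one":
        return "fill box B"
    if if_empty == "two" and if_full == "one":
        return "either action is consistent"   # case 3: both narratives work out
    return "refuse to present the boxes"       # case 4: the 'do the opposite' player

print(omega_fills("one", "one"))  # fill box B
print(omega_fills("one", "two"))  # refuse to present the boxes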
This is an interesting response because 4 is basically what Jiro was advocating earlier in the thread, and you’re basically suggesting that Omega wouldn’t even present the opportunity to people who would try to do that. Would you agree with this interpretation of your comment?
Yes, I would.
If we take the assumption, for the moment, that the people who would take option 4 form at least 10% of the population in general (this may be a little low), and we further take the idea that Omega has a track record of success in 99% or more of previous trials (as is often specified in Newcomb-like problems), then it is clear that whatever algorithm Omega is using to decide who to present the boxes to is biased, and biased heavily, against offering the boxes to such a person.
Consider:
P(P) = The probability that Omega will present the boxes to a given person.
P(M|P) = The probability that Omega will fill the boxes correctly (empty for a two-boxer, full for a one-boxer)
P(M'|P) = The probability that Omega will fail to fill the boxes correctly
P(O) = The probability that the person will choose option 4
P(M'|O) = 1 (from the definition of option 4), therefore P(M|O) = 0
and if Omega is a perfect predictor, then P(M|O') = 1 as well.
P(M|P) = 0.99 (from the statement of the problem)
P(O) = 0.1 (assumed)
Now, of all the people to whom boxes are presented, Omega is only getting at most one percent wrong: P(M'|P) ≤ 0.01. Since P(M'|O) = 1 and P(M'|O') = 0, it follows that P(O|P) ≤ 0.01.
If Omega is a less than perfect predictor, then P(M'|O') > 0, and P(O|P) < 0.01.
And, since P(O|P) ≤ 0.01 < 0.1 = P(O), I therefore conclude that Omega must have a bias—and a fairly strong one—against presenting the boxes to such perverse players.
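A quick numeric restatement of that conclusion via Bayes; the 10% and 1% figures are the assumptions made above, not data:

# If at most 1% of presented players are option-4 types, but 10% of the
# population are, Omega must be screening them out.
p_O = 0.10          # base rate of option-4 players (assumed above)
p_O_given_P = 0.01  # upper bound on P(O|P), derived above from 99% accuracy

# By Bayes, P(P|O) / P(P) = P(O|P) / P(O): how often an option-4 player gets
# offered the boxes, relative to the average person.
relative_offer_rate = p_O_given_P / p_O
print(relative_offer_rate)  # about 0.1: such players are offered boxes at most
                            # one tenth as often as the average person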
It may be the least convenient possible world. More specifically, it is the minor inconvenience of being careful to specify the problem correctly so as not to be distracted. Nshepperd gives some of the reasoning typically used in such cases.
What happens when you try to pick the opposite of what you predict Omega does is something like what happens when you try to beat Deep Fritz 14 at chess while outrunning a sports car. You just fail. Your brain is a few pounds of fat approximately optimised for out-competing other primates for mating opportunities. Omega is a superintelligence. The assumption that Omega is smarter than the player isn’t an unreasonable one and is fundamental to the problem. Defying it is a particularly futile attempt to fight the hypothetical by basically ignoring it.
Generalising your proposed class to executing maximally inconvenient behaviours in response to, for example, the transparent Newcomb’s problem is where it actually gets (tangentially) interesting. In that case you can be inconvenient without out-predicting the superintelligence, and so the transparent Newcomb’s problem requires more care with the if clause.
In the first scenario, I doubt you would be able to predict Omega with sufficient accuracy to be able to do what you’re suggesting. Transparent boxes, though, are interesting. The problem is, the original Newcomb’s Problem had a single situation with two possible choices involved; transparent Newcomb, however, involves two situations:
Transparent Box B contains $1000000.
Transparent Box B contains nothing.
It’s unclear from this what Omega is even trying to predict; is he predicting your response to the first situation? The second one? Both? Is he following the rule “If the player two-boxes in either situation, fill Box B with nothing”? Or the rule “If the player one-boxes in either situation, fill Box B with $1000000”? The problem isn’t well-specified; you’ll have to give a better description of the situation before a response can be given.
That falls under 1) there are some strategies I am incapable of executing.
The transparent scenario is just a restatement of the opaque scenario with transparent boxes instead of “I predict what Omega does”. If you think the transparent scenario involves two situations, then the opaque scenario involves two situations as well. (1=opaque box B contains $1000000, and I predict that Omega put in $1000000 and 2=opaque box B contains nothing, and I predict that Omega puts in nothing.) If you object that we have no reason to think both of those opaque situations are possible, I can make a similar objection to the transparent situations.
Yes, it does, for the meaning of “decide” that I use.
LOL. It really isn’t hard. Just think about it, then accept Jesus as your personal saviour… X-)
Or think about it, then take two boxes.
Either way, you decide how much money you get, and the contents of the boxes are your fault.
What I’ve done for Newcomb problems is that I’ve precommitted to one boxing, but then I’ve paid a friend to follow me at all times. Just before I chose the boxes, he is to perform complicated neurosurgery to turn me into a two boxer. That way I maximize my gain.
That’s clever, but of course it won’t work. Omega can predict the outcome of neurosurgery.
Better wipe my memory of getting my friend to follow me then.
Also, I have built a second Omega, and given it to others. They are instructed to two box if 2 Omega predicts 1 Omega thinks they’ll one box, and visa versa.
.. and that costs less than $1000?
If I am predestined, nope, not my fault. In fact, in the full determinism case I’m not sure there’s “me” at all.
But anyway, how about that—you introduce me to Omega first, and I’ll think about his two boxes afterwards...
So the next time you cross the street, are you going to look both ways or not? You can’t calculate the physical consequences of every particle interaction taking place in your brain, so taking the route the universe takes, i.e. just letting everything play out at the lowest level, is not an option for you and your limited processing power. And yet, for some reason, I suspect you’ll probably answer that you will look both ways, despite being unable to actually predict your brain-state at the time of crossing the street. So if you can’t actually predict your decisions perfectly as dictated by physics… how do you know that you’ll actually look both ways next time you cross the street?
The answer is simple: you don’t know for certain. But you know that, all things being equal, you prefer not getting hit by a car to getting hit by a car. And looking both ways helps to lower the probability of getting hit by a car. Therefore, given knowledge of your preferences and your decision algorithm, you will choose to look both ways.
Note that nowhere in the above explanation was determinism violated! Every step of the physics plays out as it should… and yet we observe that your choice still exists here! Determinism explains free will, not explains it away; just because everything is determined doesn’t mean your choice doesn’t exist! You still have to choose; if I ask you if you were forced to reply to my comment earlier by the Absolute Power of Determinism, or if you chose to write that comment of your own accord, I suspect you’ll answer the latter.
Likewise, Omega may have predicted your decision, but that decision still falls to you to make. Just because Omega predicted what you would do doesn’t mean you can get away with not choosing, or choosing sub-optimally. If I said, “I predict that tomorrow Lumifer will jump off a cliff,” would you do it? Of course not. Conversely, if I said, “I predict that tomorrow Lumifer will not jump off a cliff,” would you do it? Still of course not. Your choice exists regardless of whether there’s some agent out there predicting what you do.
Well, actually, it depends. Descending from flights of imagination down to earth, I sometimes look and sometimes don’t. How do I know there isn’t a car coming? In some cases hearing is enough. It depends.
You are mistaken. If my actions are predetermined, I chose nothing. You may prefer to use the word “choice” within determinism, I prefer not to.
Yes, it does mean that. And, I’m afraid, just you asserting something—even with force—doesn’t make it automatically true.
Of course, but that’s not what we are talking about. We are talking about whether choice exists at all.
Okay, it seems like we’re just arguing definitions now. Taboo “choice” and any synonyms. Now that we have done that, I’m going to specify what I mean when I use the word “choice”: the deterministic output of your decision algorithm over your preferences given a certain situation. If there is something in this definition that you feel does not capture the essence of “choice” as it relates to Newcomb’s Problem, please point out exactly where you think this occurs, as well as why it is relevant in the context of Newcomb’s Problem. In the meantime, I’m going to proceed with this definition.
So, in the above quote of mine, replacing “choice” with my definition gives you:
We see that the above quote is trivially true, and I assert that “the deterministic output of your decision algorithm over your preferences given a certain situation” is what matters in Newcomb’s Problem. If you have any disagreements, again, I would ask that you outline exactly what those disagreements are, as opposed to providing qualitative objections that sound pithy but don’t really move the discussion forward. Thank you in advance for your time and understanding.
Sure, you can define the word “choice” that way. The problem is, I don’t have that. I do not have a decision algorithm over my preferences that produces some deterministic output given a certain situation. Such a thing does not exist.
You may define some agent for whom your definition of “choice” would be valid. But that’s not me, and not any human I’m familiar with.
What is your basis for arguing that it does not exist?
What makes humans so special as to exempted from this?
Keep in mind that my goal here is not to perpetuate disagreement or to scold you for being stupid; it’s to resolve whatever differences in reasoning are causing our disagreement. Thus far, your comments have been annoyingly evasive and don’t really help me understand your position better, which has caused me to update toward you not actually having a coherent position on this. Presumably, you think you do have a coherent position, in which case I’d be much gratified if you’d just lay out everything that leads up to your position in one fell swoop rather than forcing myself and others to ask questions repeatedly in hope of clarification. Thank you.
I think it became clear that this debate is pointless the moment proving determinism became a prerequisite for getting anywhere.
I did try a different approach, but that was mostly dodged. I suspect Lumifer wants determinism to be a prerequisite; the freedom to do that slippery debate dance of theirs is so much greater then.
Either way, yeah. I’d let this die.
Introspection.
What’s your basis for arguing that it does exist?
Tsk, tsk. Such naked privileging of an assertion.
Well, the differences are pretty clear. In simple terms, I think humans have free will and you think they don’t. It’s quite an old debate, at least a couple of millennia old and maybe more.
I am not quite sure why you have difficulty accepting that some people think free will exists. It’s not that unusual a position to hold.
No offense, but this is a textbook example of an answer that sounds pithy but tells me, in a word, nothing. What exactly am I supposed to get out of this? How am I supposed to argue against this? This is a one-word answer that acts as a blackbox, preventing anyone from actually getting anything worthwhile out of it—just like “emergence”. I have asked you several times now to lay out exactly what your disagreement is. Unless you and I have wildly varying definitions of the word “exactly”, you have repeatedly failed to do so. You have displayed no desire to actually elucidate your position to the point where it would actually be arguable. I would characterize your replies to my requests so far as a near-perfect example of logical rudeness. My probability estimate of you actually wanting to go somewhere with this conversation is getting lower and lower...
This is a thinly veiled expression of contempt that again asserts nothing. The flippancy this sort of remark exhibits suggests to me that you are more interested in winning than in truth-seeking. If you think I am characterizing your attitude uncharitably, please feel free to correct me on this point.
Taboo “free will” and try to rephrase your argument without ever using that phrase or any synonymous terms/phrases. (An exception would be if you were trying to refer directly to the phrase, in which case you would put it in quotation marks, e.g. “free will”.) Now then, what were you saying?
You are supposed to get out of this that you’re asking me to prove a negative and I don’t see a way to do this other than say “I’ve looked and found nothing” (aka introspection). How do you expect me to prove that I do NOT have a deterministic algorithm running my mind?
You are not supposed to argue against this. You are supposed to say “Aha, so this a point where we disagree and there doesn’t appear to be a way to prove it one way or another”.
From my point of view you repeatedly refused to understand what I’ve been saying. You spent all your time telling me, but not listening.
Oh, it does. It asserts that you are treating determinism as a natural and default answer and the burden is upon me to prove it wrong. I disagree.
Why? This is the core of my position. If you think I’m confused by words, tell me how I am confused. Is the problem that you don’t understand me? I doubt this.
Are you talking about libertarian free will? The uncaused causer? I would have hoped that LWers wouldn’t believe such absurd things. Perhaps this isn’t the right place for you if you still reject reductionism.
There is such a thing as naturalistic libertarianism.
LOL. Do elaborate, it’s going to be funny :-)
If you’re just going to provide every banal objection that you can without evidence or explanation in order to block discussion from moving forward, you might as well just stop posting.
It’s common to believe that we have the power to “change” the future but not the past. Popular conceptions of time travel such as Back To The Future show future events wavering in and out of existence as people deliberate about important decisions, to the extent of having a polaroid from the future literally change before our eyes.
All of this, of course, is nonsense in deterministic physics. If any part of the universe is “already” determined, it all is (and by the way, quantum “uncertainty” doesn’t change this picture in any interesting way). So there is not much difference between controlling the past and controlling the future, except that we don’t normally get an opportunity to control the past, due to the usual causal structure of the universe.
In other words, the boxes are “already set up and unchangeable” even if you decide before being scanned by Omega. But you still get to decide whether they are unchangeable in a favourable or unfavourable way.
That’s the free-will debate. Does the “solution” to one-box depend on rejection of free will?
Do you believe that objects in the future waver in and out of existence as you deliberate?
(On the free will debate: The common conception of free will is confused. But that doesn’t mean our will isn’t free, or imply fatalism.)
I am aware of the LW (well, EY’s, I guess) position on free will. But here we are discussing the Newcomb’s Problem. We can leave free will to another time. Still, what about my question?
Well, if that’s true—that is, if whether you are the sort of person who one-boxes or two-boxes in Newcomblike problems is a fixed property of Lumifer that you can’t influence in any way—then you’re right that there’s no point to thinking about which choice is best with various different predictors. After all, you can’t make a choice about it, so what difference does it make which choice would be better if you could?
Similarly, in most cases, given a choice between accelerating to the ground at 1G and doing so at .01 G once I’ve fallen off a cliff, I would do better to choose the latter… but once I fall off the cliff, I don’t actually have a choice, so that doesn’t matter at all.
Many people who consider it useful to think about Newcomblike problems, by contrast, believe that there is something they can do about it… that they do indeed make a choice to be a “two-boxer” or a “one-boxer.”
It’s not a fixed property, it’s undetermined. Go ask a random person whether he one-boxes or two-boxes :-)
Correction accepted. Consider me to have repeated the comment with the word “fixed” removed, if you wish. Or not, if you prefer.
I don’t anticipate meeting Omega and his two boxes. Therefore I don’t find pre-committing to a particular decision in this situation useful.
I’m not sure I understand.
Earlier, you seemed to be saying that you’re incapable of making such a choice. Now, you seem to be saying that you don’t find it useful to do so, which seems to suggest… though not assert… that you can.
So, just to clarify: on your view, are you capable of precommitting to one-box or two-box? And if so, what do you mean when you say that you can’t make a choice to be a “one-boxer”—how is that different from precommitting to one-box?
I, personally, have heard of Newcomb’s Problem, so one can argue that I am capable of pre-committing. However, only a tiny minority of the world’s population has heard of that problem and, as far as I know, the default formulation of Newcomb’s Problem assumes that the subject had no advance warning. Therefore, in the general case there is no pre-commitment and the choice does not exist.
So, I asked:
You have answered that “one can argue that” you are capable of it.
Which, well, OK, that’s probably true.
One could also argue that you aren’t, I imagine.
So… on your view, are you capable of precommitting?
Because earlier you seemed to be saying that you weren’t able to.
I think you’re now saying that you can (but that other people can’t).
But it’s very hard to tell.
I can’t tell whether you’re just being slippery as a rhetorical strategy, or whether I’ve actually misunderstood you.
That aside: it’s not actually clear to me that precommitting to oneboxing is necessary. The predictor doesn’t require me to precommit to oneboxing, merely to have some set of properties that results in me oneboxing. Precommitment is a simple example of such a property, but hardly the only possible one.
I can precommit, but I don’t want to. Other people (in the general case) cannot precommit because they have no idea about Newcomb’s Problem.
Sure, but that has nothing to do with my choices.
See, that’s where I disagree. If you choose to one-box, even if that choice is made on a whim right before you’re required to select a box/boxes, Omega can predict that choice with accuracy. This isn’t backward causation; it’s simply what happens when you have a very good predictor. The problem with causal decision theory is that it neglects these sorts of acausal logical connections, instead electing to only keep track of causal connections. If Omega can predict you with high-enough accuracy, he can predict choices that you would make given certain information. If you take a random passerby and present them with a formulation of Newcomb’s Problem, Omega can analyze that passerby’s disposition and predict in advance how that disposition will affect his/her reaction to that particular formulation of Newcomb’s Problem, including whether he/she will two-box or one-box. Conscious precommitment is not required; the only requirement is that you make a choice. If you or any other person chooses to one-box, regardless of whether they’ve previously heard of Newcomb’s Problem or made a precommitment, Omega will predict that decision with whatever accuracy we specify. Then the only questions are “How high of an accuracy do we need?”, followed by “Can humans reach this desired level of accuracy?” And while I’m hesitant to provide an absolute threshold for the first question, I do not hesitate at all to answer the second question with, “Yes, absolutely.” Thus we see that Newcomb-like situations can and do pop up in real life, with merely human predictors.
If there are any particulars you disagree with in the above explanation, please let me know.
Sure, I agree, Omega can do that.
However when I get to move, when I have the opportunity to make a choice, Omega is already done with his prediction. Regardless of what his prediction was, the optimal choice for me after Stage 1 is to two-box.
My choice cannot change what’s in the boxes—only Omega can determine what’s in the boxes and I have no choice with respect to his prediction.
Well, if you reason that way, you will end up two-boxing. And, of course, Omega will know that you will end up two-boxing. Therefore, he will put nothing in Box B. If, on the other hand, you had chosen to one-box instead, Omega would have known that, too. And he would have put $1000000 in Box B. If you say, “Oh, the contents of the boxes are already fixed, so I’m gonna two-box!”, there is not going to be anything in Box B. It doesn’t matter what reasoning you use to justify two-boxing, or how elaborate your argument is; if you end up two-boxing, you are going to get $1000 with probability (Omega’s-predictive-power)%. Sure, you can say, “The boxes are already filled,” but guess what? If you do that, you’re not going to get any money. (Well, I mean, you’ll get $1000, but you could have gotten $1000000.) Remember, the goal of a rationalist is to win. If you want to win, you will one-box. Period.
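For concreteness, a minimal sketch of that arithmetic, assuming the 99% accuracy figure quoted elsewhere in the thread and the standard dollar amounts:

# Expected winnings of each fixed disposition, if Omega predicts it correctly
# with probability p. (p = 0.99 and the payoff amounts are assumptions here.)
p = 0.99

ev_one_box = p * 1_000_000 + (1 - p) * 0                 # box B is full only if you were predicted to one-box
ev_two_box = p * 1_000 + (1 - p) * (1_000_000 + 1_000)   # box B is full only if Omega errs

print(ev_one_box)  # about 990000
print(ev_two_box)  # about 11000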
Notice the tense you are using: “had chosen”. When did that choice happen? (for a standard participant)
You chose to two-box in this hypothetical Newcomb’s Problem when you said earlier in this thread that you would two-box. Fortunately, since this is a hypothetical, you don’t actually gain or lose any utility from answering as you did, but had this been a real-life Newcomb-like situation, you would have. If (I’m actually tempted to say “when”, but that discussion can be held another time) you ever encounter a real-life Newcomb-like situation, I strongly recommend you one-box (or whatever the equivalent of one-boxing is in that situation).
I don’t believe real-life Newcomb situations exist or will exist in my future.
I also think that the local usage of “Newcomb-like” is misleading in that it is used to refer to situations which don’t have much to do with the classic Newcomb’s Problem.
Your recommendation was considered and rejected :-)
It is my understanding that Newcomb-like situations arise whenever you deal with agents who possess predictive capabilities greater than chance. It appears, however, that you do not agree with this statement. If it’s not too inconvenient, could you explain why?
Can you define what is a “Newcomb-like” situation and how can I distinguish such from a non-Newcomb-like one?
You have elsewhere agreed that you (though not everyone) have the ability to make choices that affect Omega’s prediction (including, but not limited to, the choice of whether or not to precommit to one-boxing).
That seems incompatible with your claim that all of your relevant choices are made after Omega’s prediction.
Have you changed your mind? Have I misunderstood you? Are you making inconsistent claims in different branches of this conversation? Do you not see an inconsistency? Other?
Here when I say “I” I mean “a standard participant in the classic Newcomb’s Problem”. A standard participant has no advance warning.
Ah. OK. And just to be clear: you believe that advance warning is necessary in order to decide whether to one-box or two-box… it simply isn’t possible, in the absence of advance warning, to make that choice; rather, in the absence of advance warning humans deterministically two-box. Have I understood that correctly?
Correct.
Nope. I think two-boxing is the right thing to do, but humans are not deterministic; they can (and do) do all kinds of stuff. If you run an empirical test I think it’s very likely that some people will two-box and some people will one-box.
Gotcha: they don’t have a choice in which they do, on your account, but they might do one or the other. Correction accepted.
Incidentally, for the folks downvoting Lumifer here, I’m curious as to your reasons. I’ve found many of their earlier comments annoyingly evasive, but now they’re actually answering questions clearly. I disagree with those answers, but that’s another question altogether.
There are a lot of behaviorists here. If someone doesn’t see the light, apply electric prods until she does X-)
It would greatly surprise me if anyone here believed that downvoting you will influence your behavior in any positive way.
You think it’s just mood affiliation, on a rationalist forum? INCONCEIVABLE! :-D
I’m curious: do you actually believe I think that, or are you saying it for some other reason?
Either way: why?
A significant part of the time I operate in the ha-ha only serious mode :-)
The grandparent post is a reference to a quote from Princess Bride.
Yes, you do, and I understand the advantages of that mode in terms of being able to say stuff without being held accountable for it.
I find it annoying.
That said, you are of course under no obligation to answer any of my questions.
In which way am I not accountable? I am here, answering questions, not deleting my posts.
Sure, I often prefer to point to something rather than plop down a full specification. I am also rather fond of irony and sarcasm. But that’s not exactly the same thing as avoiding accountability, is it?
If you want highly specific answers, ask highly specific questions. If you feel there is ambiguity in the subject, resolve it in the question.
OK. Thanks for clarifying your position.
If you said earlier in this thread that you would two-box, you are a two-boxer. If you said earlier in this thread that you would one-box, you are a one-boxer. If Omega correctly predicts your status as a one-boxer/two-boxer, he will fill Box B with the appropriate amount. Assuming that Omega is a good predictor, his prediction is contingent on your disposition as a one-boxer or a two-boxer. This means you can influence Omega’s prediction (and thus the contents of the boxes) simply by choosing to be a one-boxer. If Omega is a good-enough predictor, he will even be able to predict future changes in your state of mind. Therefore, the decision to one-box can and will affect Omega’s prediction, even if said decision is made AFTER Omega’s prediction.
This is the essence of being a reflectively consistent agent, as opposed to a reflectively inconsistent agent. For an example of an agent that is reflectively inconsistent, see causal decision theory. Let me know if you still have any qualms with this explanation.
Oh, I can’t change my mind? I do that on a regular basis, you know...
This implies that I am aware that I’ll face Newcomb’s Problem.
Let’s do Newcomb’s Problem with a random passer-by picked from the street—he has no idea what’s going to happen to him and has never heard of Omega or Newcomb’s Problem before. Omega has to make a prediction and fill the boxes before that passer-by gets any hint that something is going to happen.
So, Step 1 happens, the boxes are set up, and the whole game is explained to our passer-by. What should he do? He never chose to be a one-boxer or a two-boxer because he had no idea such things existed. He can only make a choice now, and the boxes are done and immutable. Why should he one-box?
It seems unlikely to me that you would change your mind about being a one-boxer/two-boxer over the course of a single thread. Nevertheless, if you did so, I apologize for making presuppositions.
As I wrote in my earlier comment:
If our hypothetical passerby chooses to one-box, then to Omega, he is a one-boxer. If he chooses to two-box, then to Omega, he is a two-boxer. There’s no “not choosing”, because if you make a choice about what to do, you are choosing.
The only problem is that you have causality going back in time. At the time of Omega’s decision the passer-by’s state with respect to one- or two-boxing is null, undetermined, does not exist. Omega can scan his brain or whatever and make his prediction, but the passer-by is not bound by that prediction and has not (yet) made any decisions.
The first chance our passer-by gets to make a decision is after the boxes are fixed. His decision (as opposed to his personality, preferences, goals, etc.) cannot affect Omega’s prediction because causality can’t go backwards in time. So at this point, after step 2, the only time he can make a decision, he should two-box.
As far as I’m aware, what you’re saying is basically the same thing as what causal decision theory says. I hate to pass the buck, but So8res has written a very good post on this already; anything I could say here has already been said by him, and better. If you’ve read it already, then I apologize; if not, I’d say give it a skim and see what you think of it.
So8res’ post points out that
It seems I’m in good company :-)
Unless you like money and can multiply, in which case you one box and end up (almost but not quite certainly) richer.
Wat iz zat “multiply” thang u tok abut?
Think of the situation in the last round of an iterated Prisoner’s Dilemma with known bounds. Because of the variety of agents you might be dealing with, the payoffs there aren’t strictly Newcomblike, but they’re closely related; there’s a large class of opposing strategies (assuming reasonably bright agents with some level of insight into your behavior, e.g. if you are a software agent and your opponent has access to your source code) which will cooperate if they model you as likely to cooperate (but, perhaps, don’t model you as a CooperateBot) and defect otherwise. If you know you’re dealing with an agent like that, then defection can be thought of as analogous to two-boxing in Newcomb.
You may note several posts ago that I noticed the word ‘philosophy’ was not useful and tried to substitute it with other, less loaded, terms in order to more effectively communicate my meaning. This is a specific useful technique with multiple subcomponents (noticing that it’s necessary, deciding how to separate the concepts, deciding how to communicate the separation), that I’ve gotten better at because of time spent here.
Yes, comparative perspectives is much more about claims and much less about holism than any individual perspective- but for a person, the point of comparing perspectives is to choose one whereas for a professional arguer the point of comparing perspectives is to be able to argue more winningly, and so the approaches and paths they take will look rather different.
Professionals are quite capable of passionately backing a particular view. If amateurs are uninterested in arguing—your claim, not mine—that means they are uninterested in truth seeking. People who adopt beliefs they can’t defend are adopting beliefs as clothing.
1 and 2 seem to mostly be objections to the presentation of the material as opposed to the content. Most of these criticisms are ones I agree with, but given the context (the Sequences being “bad amateur philosophy”), they seem largely tangential to the overall point. There are plenty of horrible math books out there; would you use that fact to claim that math itself is flawed?
As for 3 and 4, I note that the link you provided is not an objection per se, but more of an expression of surprise: “What, doesn’t everyone know this?” Note also that this comment actually has a reply attached to it, which rather undermines your point that “people on LW don’t respond to criticisms”. I’m sure you have other examples of objections being ignored, but in my opinion, this one probably wasn’t the best example to use if you were trying to make a point.
Not in the sense that I don’t like the font. Lack of justification or point are serious issues.
EDIT: I have already said that this isn’t about what is right or wrong.
I can find out what math is from good books. If the Sequences are putting forward original ideas, I have nowhere else to go. Of course, in many cases I can’t tell whether they are, and the author can’t tell me whether his philosophy is new, because he doesn’t know the old philosophy.
The dichotomy between the Austere and the Empathic meta ethicist may well be false. I’d like to see more support for it, and specifically for the implicit claim that a question cannot be coherent unless we fully understand all its terms. Answering that claim may involve asking whether we can refer to something with a term when we do not fully understand what we are referring to (although the answer to that is surely “yes!”).
I think that some basic conceptual analysis can be important for clarifying discussion, given that many of these words are used and will continue to be used. For example, it is useful to know that “justified true belief” is a useful first approximation of what is meant by knowledge, but that the situation is actually slightly more complicated than that.
On the other hand, I don’t expect that this will work for all concepts. Some concepts are extremely slippery and will lack enough of a shared meaning that we can provide a single definition. In these cases, we can simply point out the key features that these cases tend to have in common.
Some thoughts on this and related LW discussions. They come a bit late—apols to you and commentators if they’ve already been addressed or made in the commentary:
1) Definitions (this is a biggie).
There is a fair bit of confusion on LW, it seems to me, about just what definitions are and what their relevance is to philosophical and other discussion. Here’s my understanding—please say if you think I’ve gone wrong.
If in the course of philosophical debate I explicitly define a familiar term, my aim in doing so is to remove the term from debate—I fix the value of a variable to restrict the problem. It’d be good to find a real example here, but I’m not convinced defining terms happens very often in philosophical or other debate. By way of a contrived example, one might want to consider, in evaluating some theory, the moral implications of actions made under duress (a gun held to the head) but not physically initiated by an external agent (a jostle to the arm). One might say, “Define ‘coerced action’ to mean any action not physically initiated but made under duress” (or more precise words to that effect). This done, it wouldn’t make sense simply to object that my conclusion regarding coerced actions doesn’t apply to someone physically pushed from behind—I have stipulated for the sake of argument that I’m not talking about such cases. (In this post, you distinguish stipulation and definition—do you have in mind a distinction I’m glossing over?)
Contrast this with the usual case for conceptual analyses, where it’s assumed there’s a shared concept (‘good’, ‘right’, ‘possible’, ‘knows’, etc.), and what is produced is a set of necessary and sufficient conditions meant to capture the concept. Such an analysis is not a definition. Regarding such analyses, typically one can point to a particular thing and say, e.g., “Our shared concept includes this specimen, it lacks a necessary condition, therefore your analysis is mistaken”—or, maybe, “Intuitively, this specimen falls under our concept, it lacks...”. Such a response works only if there is broad agreement that the specimen falls under the concept. Usually this works out to be the case.
I haven’t read the Jackson book, so please do correct me if you think I’ve misunderstood, but I take it something like this is his point in the paragraphs you quote. Tom and Jack can define ‘right action’ to mean whatever they want it to. In so doing, however, we cease to have any reason to think they mean by the term what we intuitively do. Rather, Jackson is observing, what Tom and Jack should be doing is saying that rightness is that thing (whatever exactly it is) which our folk concepts roughly converge on, and taking up the task of refining our understanding from there—no defining involved.
You say,
Well, not quite. The point I take it is rather that there simply are ‘folk’ platitudes which pick-out the meanings of moral terms—this is the starting point. ‘Killing people for fun is wrong’, ‘Helping elderly ladies across the street is right’ etc, etc. These are the data (moral intuitions, as usually understood). If this isn’t the case, there isn’t even a subject to discuss. Either way, it has nothing to do with definitions.
Confusion about definitions is evident in the quote from the post you link to. To re-quote:
Possibly the problem is that ‘sound’ has two meanings, and the disputants each are failing to see that the other means something different. Definitions are not relevant here, meanings are. (Gratuitous digression: what is “an auditory experience in a brain”? If this means something entirely characterizable in terms of neural events, end of story, then plausibly one of the disputants would say this does not capture what he means by ‘sound’ - what he means is subjective and ineffable, something neural events aren’t. He might go on to wonder whether that subjective, ineffable thing, given that it is apparently created by the supposedly mind-independent event of the falling of a tree, has any existence apart from his self (not to be confused with his brain!). I’m not defending this view, just saying that what’s offered is not a response but rather a simple begging of the question against it. End of digression.)
2) In your opening section you produce an example meant to show conceptual analysis is silly. It looks to me more like a silly attempt at an example of conceptual analysis. If you really want to make your case, why not take a real example of a philosophical argument, preferably one widely held in high regard at least by philosophers? There are lots of ’em around.
3) In your section The trouble with conceptual analysis, you finally explain,
As explained above, philosophical discussion is not about “which definition to use”; it’s about (roughly, and among other things) clarifying our concepts. The task is difficult but worthwhile because the concepts in question are important but subtle.
If you don’t have the patience to do philosophy, or you don’t think it’s of any value, by all means do something else: argue about facts and anticipations, whatever precisely that may involve. Just don’t think that in doing this latter thing you’ll address the question philosophy is interested in, or that you’ve said anything at all so far to show philosophy isn’t worth doing. In this connection, one of the real benefits of doing philosophy is that it encourages precision and attention to detail in thinking. You say Eliezer Yudkowsky “...advises against reading mainstream philosophy because he thinks it will ‘teach very bad habits of thought that will lead people to be unable to do real work.’” The original quote continues, “...assume naturalism! Move on! NEXT!” Unfortunately Eliezer has a bad habit of making unclear and undefended or question-begging assertions, and this is one of them. What are the bad habits, and how does philosophy encourage them? And what precisely is meant by ‘naturalism’? To make the latter assertion and simultaneously to eschew the responsibility of articulating what this commits you to is to presume you can both have your cake and eat it too. This may work in blog posts; it wouldn’t pass in serious discussion.
(Unlike some on this blog, I have not slavishly pored through Eliezer’s every post. If there is somewhere a serious discussion of the meaning of ‘naturalism’ which shows how the usual problems with normative concepts like ‘rational’ can successfully be navigated, I will withdraw this remark).
What happened to philosophers like Hume, who tried to avoid “mere disputes of words”? Seriously, as much as many 20th-century philosophers liked Hume, especially the first book of the Treatise (e.g., the positivists), why didn’t they pick up on that?
(I seem to remember some flippant remark making fun of philosophers for these disputes in the Treatise but google finds me nothing)
Getting hung up on the meanings of words is an attractor. Even if your community starts out consciously trying to avoid it, it’s very easy to get sucked back in. Here is a likely sequence of steps.
All this talk about words is silly! We care about actually implementing our will in the real world!
Of course, we want to implement our will precisely. We need to know how things are precisely and how we want them to be precisely, so that we can figure out what we should do precisely.
So, we want to formulate all this precise knowledge and to perform precise actions. But we’re a community, so we’re going to have to communicate all this knowledge and these plans among ourselves. Thus, we’re going to need a correspondingly precise language to convey all these precise things to one another.
Okay, so let’s get started on that precise language. Take the word A. What, precisely, does it mean? Well, what precisely are the states of affairs such that the word A applies? Wait, what precisely is a “state of affairs”? . . .
And down the rabbit-hole you go.
I would like to see some enlargement on the concept of definition. It is usually treated as a simple concept: A means B or C or D, which one depending on Z. But when we try to pin down C, for instance, we find that it has a lot of baggage—emotional, framing, stylistic, etc. So do B and D. And in no case is the baggage of any of them the same as the baggage of A. None of these - defining terms, tabooing words, or coining new words - really works all that well in the real world, although they of course help. Do you see a way around this fuzziness?
Another ‘morally good’ definition for your list is ‘that which will not make the doer feel guilty or shameful in future’. It is no better than the others but quite different.
I don’t like this one. It implies that successful suicide is always morally good.
I don’t think you’re arguing against conceptual analysis, instead you want to treat a particular conceptual analysis (reductive physicalism) as gospel. What is the claim that there are two definitions of sound that we can confuse, the acoustic vibrations in the air and the auditory experience in a brain, if it’s not a reductive conceptual analysis of the concept of sound?
Like I said at the beginning:
The definition of “right action” is the kind of action you should do.
You don’t need to know what “should” means, you just need to do what you should do and not do what you shouldn’t do.
One should be able to cash out arguments about the “definition” of “right” as arguments about the actual nature of shouldness.
Defining ‘right’ in terms of ‘should’ gets us nowhere; it just punts to another symbol. Thus, I don’t yet know what you’re trying to say in this comment. Could you taboo ‘should’ for me?
Only through the use of koans. Consider the dialog in:
http://en.wikipedia.org/wiki/What_the_Tortoise_Said_to_Achilles
Could you explain what “If A, then B” means, tabooing “if/then”, “therefore”, etc.?
Here is another way:
If a rational agent becomes aware that the statement “I should do X” is true, then it will either proceed to do X or proceed to realize that it cannot do X (at least for now).
ETA: Here is a simple Python function (I think I coded it correctly):
def square(x):
    y = x * x
    return y
“return” is not just another symbol. It is not a gensym. It is functional. The act of returning and producing an output is completely separate from and non-reducible-to everything else that a subroutine can do.
Rational agents use “should” the same way this subroutine uses “return”. It controls their output.
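To make that analogy concrete, here is a toy sketch of my own (the names expected_utility, should_do, and decide are invented for illustration, not anything proposed in the thread): the agent’s “should” judgment settles what the decision procedure returns, just as “return” settles what the square() function above outputs.

def expected_utility(action):
    # Stand-in valuation; in a real agent this would be learned or specified.
    return {"help": 1.0, "ignore": 0.0, "harm": -1.0}.get(action, 0.0)

def should_do(action):
    return expected_utility(action) > 0

def decide(actions):
    for action in actions:
        if should_do(action):
            # The "should" verdict controls the output here, just as
            # "return" controls the output of the square() example above.
            return action
    return None

print(decide(["ignore", "help", "harm"]))  # prints "help"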
But better understanding of what “should” means helps, although it’s true that you should do what you should even if you have no idea what “should” means.
How do I go about interpreting that statement if I have no idea what “should” means?
Use your shouldness-detector, even if it has no user-serviceable parts within. Shouldness-detector is that white sparkly sphere over there.
I think it means something analogous to “you can staple even if you have no idea what ‘kramdrukker’ means”. (I don’t speak Afrikaans, but that’s what a translator program just said is “stapler” in Afrikaans.)
~~~~~~~
I think “should” is a special case in which a “can” sentence gets infected by the sentence’s object (because the object is itself a “should”) and becomes a “should” sentence.
“You can hammer the nail.” But should I? It’s unclear. “You can eat the fish.” But should I? It’s unclear. “You can do what you should do.” But should I? Yes—I definitely should, just because I can. So, “You can do what you should do” is equivalent to “You should do what you should do”.
In other words, I interpret Vladimir’s statement as an instance of what we can generally say about “can” statements, of which “should” happens to be a special case: the infection from “should” to “can” makes it more natural in English not to write “can” at all.
This allows us to go from uncontroversial “can” statements to “should” statements, all without learning Afrikaans!
This feels like novel reasoning on my part (i.e., the whole “can” being infected bit) as to how Vladimir’s statement is true, and I’d appreciate comments, or a similarly reasoned source I might be partially remembering and repeating.
If these are equivalent, then the truth of the second statement should entail the truth of the first. But “You should do what you should do” is ostensibly a tautology, while “You can do what you should do” is not, and could be false.
One out you might want to take is to declare “S should X” only meaningful when ability and circumstance allow S to do X; when “S can X.” But then you just have two clear tautologies, and declaring them equivalent is not suggestive of much at all.
Decisive points.
As you have shown them to not be equivalent, I would have done better to say:
But if the latter statement is truly a tautology, that obviously doesn’t help. If I then add your second edit, that by “should” I mean “provided one is able to”, I am at least less wrong...but can my argument avoid being wrong only by being vacuous?
I think so.
If you don’t know what “should” means, how do you decide what to do?
This is another instance in which you can’t argue morality into a rock.
If knowing what “should” means helped something, then knowledge of a definition could lead to real actionable information. This seems, on the face of it, absurd.
I think either:
“XYZ things are things that maximize utility”
or:
“XYZ things are things that you should do”
can count as a definition of XYZ, but not both, just as:
“ABC things are red things”
or:
“ABC things are round things”
can count as a definition of ABC things, but not both. (Since if you knew both, then you would learn that red things are round and round things are red.)
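Spelled out formally (my own rendering of the point, with $X$, $U$, and $S$ as hypothetical predicates for ‘is XYZ’, ‘maximizes utility’, and ‘should be done’), treating both statements as definitions lets you derive something substantive:

\[
\forall a\,\bigl(X(a)\leftrightarrow U(a)\bigr),\quad \forall a\,\bigl(X(a)\leftrightarrow S(a)\bigr)\ \vdash\ \forall a\,\bigl(U(a)\leftrightarrow S(a)\bigr)
\]

which is just the red-and-round situation again: accepting both as definitions smuggles in the claim that what maximizes utility is exactly what you should do.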
I was under the impression that the example of an unobserved tree falling in the woods is taken as a naturalized version of Schrodinger’s Cat experiment. So the question of whether it makes a sound is not necessarily about the definition of a sound.
Nope.
The Wikipedia article you linked has a See Also: Schrodinger’s Cat link.
You’re missing a possible path forward here. Perhaps we aren’t the ones that need to do it. If we can implement empathy, we can get the Friendly AI to do it.
Downvoter here. Is there a custom of always explaining downvotes? Should there be one?
I downvoted because it was a post about AI (yawn), and in particular a stupid one. But looking at it again I see that it may not be as stupid as I thought; downvote revoked.
No and no. However, it’s usually good when downvoted commenters learn why they got downvoted.
The most interesting comments are left by downvoters.
“Downvoters leave the most interesting comments”, my original formulation, is false in one of its natural interpretations.
Upvoted ;-)
Oftentimes the reason for a downvote may be nonobvious (for example, if there are multiple potential points of contention in a single comment). If you wish to indicate disapproval of one thing in particular, or draw the commenter’s attention to a particular error you expect they will desire to correct, or something along those lines, it can be a good idea to explain your reason for dissent.
One unique thing I haven’t heard others appreciate about the strictly dumb comment system of voting in one of two directions is that it leaves the voted upon with a certain valuable thought just within reach.
That thought is: “there are many reasons people downvote, each has his or her own criteria at different times. Some for substantive disagreement, others for tone, some because they felt their time wasted in reading it, others because they thought others would waste their time reading it, some for failing to meet the usual standard of the author, some for being inferior to a nearby but lesser ranked comment, etc.”
People have a hard enough time understanding that as it is. Introduce sophistication into the voting system, and far fewer will take it to heart, as it will be much less obvious.
Intriguing. Starting from that thought it can be frustrating not to know which of those things is the case (and thus: what, if any, corrective action might be in order). I hadn’t really thought about how alternate voting systems might obscure the thought itself. I’d think that votes + optional explanations would highlight the fact that there could be any number of explanations for a downvote…
Do we have any good anecdotes on this?
No! I don’t have enough time to write comments for all the times I downvote. And I’d rather not read pages and pages of “downvoted because something you said in a different thread offended me” every week or two.
Just click and go. If you wish to also verbalize disapproval, then by all means put words to the specific nature of your contempt, ire, or disinterest.
I’m somewhat upset and disappointed that adults would do this. It seems like a very kindergartener thing. Would you go around upvoting all of a user’s comments because you liked one? I wouldn’t, and I have a tendency to upvote more than I downvote. Why downvote a perfectly good, reasonable comment just because another comment by the same user wasn’t as appealing to you?
I don’t think that wedrifid was saying that he does this. (I’m not sure that you were reading him that way.) I think that he just expects that, if explaining downvotes were the norm, then he would read a comment every week or so saying, “downvoted because something you said in a different thread offended me”.
I didn’t interpret the comment as meaning that wedrifid would downvote on this policy, or that he advocated it. It’s probably true that there are people who do. That just makes me sad.
Yes, although not so much ‘a comment every week or so’ as ‘a page or two every week or so’.
I do very much hope LWers can occasionally disagree with an idea, and downvote it, without feeling contempt or ire. If not, we need to have a higher proportion of social skill and emotional intelligence posts.
It’s a good thing I included even mere disinterest in the list of options. You could add ‘disagreement’ too—although some people object to downvoting just because you disagree.
It seems to me that framing the question of a (possible) social custom in terms of whether there should be a rule that covers all situations is a debate tactic designed to undermine support for customs similar to, but less sweeping than, the all-encompassing one used in the framing.
The answer to whether there should be a custom that always applies is pretty much always going to be no, which doesn’t tell us about similar customs (like one of usually or often explaining downvotes) even though it seems like it does.
There is a custom of often explaining downvotes, and there should be one of doing so more frequently.
Most of the time when I vote something down, I would not try calling the person out if the same comment were made in an ordinary conversation. Explaining a downvote feels like calling someone out, and if I explained my downvotes a lot, I’d feel like I was being aggressive. Now, it’s possible that unexplained downvotes feel equally aggressive. But really, all a downvote should mean is that someone did the site a disservice equal in size to the positive contribution represented by a single upvote.
I mostly find unexplained downvotes aggressive because they are frustrating: I made some kind of mistake, but no one wants to explain it to me so that I can do better next time.
It’s not that often that mistakes are unambiguous and uncontroversial once pointed out. A lot of the time, the question isn’t “do I want to point out his mistake so he can do better next time”, but “do I want to commit to having a probably fruitless debate about this”.
Do you think that every time a mistake would, in fact, be unambiguous and uncontroversial, it should be pointed out?
If so, do you think more downvotes should be explained?
From my experience it seems like the first quote implies the second.
I think this site is already extremely good at calling out unambiguous and uncontroversial mistakes.
I don’t understand this interpretation of down/upvotes. Is it normative? Intentionally objective rather than subjective? Is this advice to downvoters or the downvoted? Could you please clarify?
To me they feel more aggressive, since they imply that the person doesn’t have enough status to deserve an explanation from the downvoter.
An equivalent behavior in real-life interaction would be saying something like “you fail”, followed by rudely ignoring the person when they attempted to follow up.
Not sure the status implication is accurate. When I vote down someone high-status, I don’t feel any particular compulsion to explain myself. If anything, it makes me anticipate that I’m unlikely to change anyone’s mind.
I think a much closer analogy than saying “you fail” is frowning.
Would you prefer that I posted a lot of comments starting with “I voted this down because”, or that I didn’t vote on comments I think detract from the site?
I prefer not having downvotes explained. It is irritating when the justification is a bad one and on average results in me having less respect for the downvoter.
I reject your normative assertion but respect your personal preference to have downvotes explained to you. I will honour your preference and explain downvotes of your comments while at the same time countering the (alleged) norm of often explaining downvotes.
In this instance I downvoted the parent from 1 to 0. This is my universal policy whenever someone projects a ‘should’ (of the normative kind) onto others that I don’t agree with strongly. I would prefer that kind of thing to happen less frequently.
About what fraction of downvotes have bad justifications? Is this a serious problem (measured on the level of importance of the karma system)? Is there anything that can be done about it?
I was certainly not aware of this problem.
My assertion of a norm was based on the idea that downvotes on LessWrong are often, though not usually, explained, and that deviating from this fraction would bring, on average, less respect from the community, thus constituting a norm. I think the definitions of “often” and “norm” are general enough to make this statement true.
I don’t know how much of a problem it is, but there’s definitely something that can be done about it: instead of a “dumb” karma count, use some variant of Pagerank on the vote graph.
In other words, every person is a node, every upvote that each person gets from another user is a directed edge (also signed to incorporate downvotes), every person starts with a base amount of karma, and then you iteratively update the user karma by weighting each inbound vote by the karma of the voter.
When I say “variant of Pagerank”, I mean that you’d probably also have to fudge some things in there as well for practical reasons, like weighting votes by time to keep up with an evolving community, adding a bias so that a few top people don’t completely control the karma graph, tuning the base karma that people receive based on length of membership and/or number of posts, weighting submissions separately from comments, avoiding “black hat SEO” tricks, etc. You know, all those nasty things that make Google a lot more than “just” Pagerank at web scale...
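A minimal sketch of the iteration just described, under my own simplifying assumptions (a fixed damping factor, negative voter karma clamped to zero, each voter’s weight split across their votes as in PageRank); the names karma_rank and votes are illustrative, not any actual LW implementation:

def karma_rank(users, votes, damping=0.85, iterations=50):
    # votes: list of (voter, target, sign) triples, sign = +1 (up) or -1 (down).
    out_degree = {u: 0 for u in users}
    for voter, _, _ in votes:
        out_degree[voter] += 1
    karma = {u: 1.0 for u in users}  # everyone starts with base karma
    for _ in range(iterations):
        new_karma = {u: 1.0 - damping for u in users}
        for voter, target, sign in votes:
            # Weight each inbound vote by the voter's current karma,
            # split across all the votes that voter has cast.
            weight = max(karma[voter], 0.0) / out_degree[voter]
            new_karma[target] += damping * sign * weight
        karma = new_karma
    return karma

users = ["alice", "bob", "carol"]
votes = [("alice", "bob", +1), ("bob", "carol", +1), ("carol", "bob", -1)]
print(karma_rank(users, votes))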
IMO doing something like this would improve most high traffic comment systems and online communities substantially (Hacker News could desperately use something like that to slow its slide into Reddit territory, for instance), though it would severely de-democratize them; somehow I doubt people around here would have much of a problem with that, though. The real barrier is that it would be a major pain in the ass to actually implement, and would take several iterations to really get right. It also might be difficult to retrofit an existing voting system with anything like that because sometimes they don’t store the actual votes, but just keep a tally, so it would take a while to see if it actually helped at all (you couldn’t backtest on the existing database to tune the parameters properly).
I think they do store the votes because otherwise you’d be able to upvote something twice.
However my understanding is that changing lesswrong, even something as basic as what posts are displayed on the front page, is difficult, and so it makes sense why they haven’t implemented this.
It’s just karma. Not a big deal.
I was responding to “and there should be one of doing so more frequently”. If you declare that the community should adopt a behaviour and I don’t share your preference about the behaviour in question then I will downvote the assertion. Because I obviously prefer that people don’t tell others to do things that I don’t want others to be doing. In fact there is a fairly high bar on what ‘should be a norm’ claims I don’t downvote. All else being equal I prefer people don’t assert norms.
How can you possibly create an AI that reasons morally the way you want it to unless you can describe how that moral reasoning works?
People want stuff. I suspect there is no simple description of what people want. The AI can infer what people want from their behavior (using the aforementioned automated empathy), take the average, and that’s the AI’s utility function.
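A minimal sketch of the aggregation step being proposed, assuming the per-person inference (the “automated empathy”) is already done and hands us one utility function per person; all names here are my own illustrative choices, not a real FAI design:

def aggregate_utility(person_utilities, outcome):
    # person_utilities: inferred utility functions, one per person, each
    # mapping an outcome to that person's (estimated) utility for it.
    return sum(u(outcome) for u in person_utilities) / len(person_utilities)

# Toy usage with two hypothetical people:
alice = lambda outcome: 1.0 if outcome == "parks" else 0.0
bob = lambda outcome: 0.5
print(aggregate_utility([alice, bob], "parks"))  # -> 0.75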
If there is no simple description of what people want, a bunch of people debating the structure of this non-simple thing on a web site isn’t going to give clarity on the issue.
ETA:
Hoping to change people’s feelings as part of an FAI implementation is steering toward failure. You’ll have to make the FAI based on the assumption that the vast majority of people won’t be persuaded by anything you say, unless you’ve had a lot more success persuading people than I have.
Downvoted for unnecessary status manoeuvring against the rest of LessWrong. Why should the location of discussion affect its value? Especially since the issue isn’t even one where people need to be motivated to act, but simply one that requires clear-headed thought.
Because the anonymity of the internet causes discussions to derail into aggressive posturing, since many social restraints are absent. Also because much communication is nonverbal. Also because the internet presents a low barrier to entry into the conversation.
Mostly, a communication has value separate from where it is posted (although the message is not independent of the messenger; e.g., since the advent of the internet, scholarly articles often influence their field while being read by relevant peers in the editing stages, and then go unread in their final form). But all else equal, knowing where a conversation is taking place helps one guess at its value. So you are mostly right.
Recently, I heard a novel anti-singularity argument: that “...we have never witnessed a greater intelligence, therefore we have no evidence that one’s existence is possible.” Not that intelligence isn’t very useful (a common but weak argument), but that one can’t extrapolate beyond the smartest human ever and believe it likely that even a slightly greater level of intelligence is possible. Talk about low barriers to entry into the conversation! This community is fortunately good at policing itself.
Now if only I could find an example of unnecessary status manoeuvring ;-).
I didn’t read this post as having direct implications for FAI convincing people of things. I think that for posts in which the FAI connection is tenuous, LW is best served by discussing rationality without it, so as to appeal to a wider audience.
I’m still intrigued by how the original post might be relevant for FAI in a way that I’m not seeing. Is there anything beyond, “here is how to shape the actions of an inquirer, P.S. an FAI could do it better than you can”? Because that postscript could go lots of places, and so pointing out it would fit here doesn’t tell me much.
I didn’t quite understand what you said you were seeing, but I’ll try to describe the relevance.
The normal case is people talk about moral philosophy with a fairly relaxed emotional tone, from the point of view “it would be nice if people did such-and-such, they usually don’t, nobody’s listening to us, and therefore this conversation doesn’t matter much”. If you’re thinking of making an FAI, the emotional tone is different because the point of view is “we’re going to implement this, and we have to get it right because if it’s wrong the AI will go nuts and we’re all going to DIE!!!” But then you try to sound nice and calm anyway because accurately reflecting the underlying emotions doesn’t help, not to mention being low-status.
I think most talk about morality on this website is from the more tense point of view above. Otherwise, I wouldn’t bother with it, and I think many of the other people here wouldn’t either. A minority might think it’s an armchair philosophy sort of thing.
The problem with these discussions is that you have to know the design of the FAI is correct, so that design has to be as simple as possible. If we come up with some detailed understanding of human morality and program it into the FAI, that’s no good—we’ll never know it’s right. So IMO you need to delegate the work of forming a model of what people want to the FAI and focus on how to get the FAI to correctly build that model, which is simpler.
However, if lukeprog has some simple insight, it might be useful in this context. I’m expectantly waiting for his next post on this issue.
The part that got my attention was: “You’ll have to make the FAI based on the assumption that the vast majority of people won’t be persuaded by anything you say.”
Some people will be persuaded, and some won’t be, and the AI has to be able to tell them apart reliably regardless, so I don’t see assumptions about majorities coming into play; instead, they seem like an unnecessary complication once you grant the AI a certain amount of insight into individuals, which is assumed as the basis for the AI being relevant.
I.e., if it (we) has (have) to make assumptions for lack of understanding about individuals, the game is up anyway. So we still approach the issue from the standpoint of individuals (such as us) influencing other individuals, because an FAI doesn’t need separate group parameters, and because it doesn’t, it isn’t an obviously relevantly different scenario than anything else we can do and it can theoretically do better.
The physicists have a clear definition of what sound is. So why can’t we just say Barry is confused?
You don’t get to call people confused just because they use a different definition than the one you prefer. You may say that they speak a different language than you do, but they’re not confused in regards to their own minds, or as to how their words map onto the territory.
Downvoted for a very basic map-territory confusion.
I’m okay with being wrong. It’s why I ask the question.
I endorse that first bit.
I endorse the first part.
There is nothing to “endorse”. The same English word can mean two different things. Both are valid things to talk about, depending on context.
If I were to say, “Evolution is the idea that men are descended from chimpanzees,” would you let me have my definition or would you say I was confused?
edit: No, not confused, but wrong.
If you want to say that “Evolution is the idea that men are descended from chimpanzees” is a definition, it is simply wrong, except within a Creationist circle, where such straw men may be used. We are then in “you can not arbitrarily define words” land. If I am not mistaken, the appropriate Sequence post is linked in the post.
Being confused about something and being wrong about something are two different things. Saying that a falling tree does not generate vibrations in the air is wrong; discussing whether it makes a sound without recognizing that you want to talk about vibrations is confused.
Have you read A Human’s Guide to Words? You seem to be confused about how words work.
I haven’t read the entire sequence but have studied some of the entries. I’ve had this question—is it right to call it a confusion?—ever since I read Taboo Your Words, but didn’t ask about it until now.
Neither; I would say that you were either horribly mistaken or deliberately misrepresenting (lying about) what other people meant when they talked about evolution. It would become a lie for certain the second time you said it.
Wow. I had to go the dictionary because I thought I might be using confuse incorrectly. I mean definition 3 of the New Oxford American.
confuse: identify wrongly, mistake : a lot of people confuse a stroke with a heart attack | purchasers might confuse the two products.
You didn’t use the verb “confuse with”, you used the word “confused” as an adjective, which has a slightly different meaning. Why didn’t you go look “confused” up? I’m increasing the probability estimate you’re being deliberately disingenuous here.
But even if you were just mistaken about typical usage, not intentionally disingenuous, it would have been better still if you tried to understand the meaning I’m trying to communicate to you instead of debating the definitions.
I’m not getting that vibe at all.
Is the problem which part of speech is being used, or is it whether or not the verb is being used reflexively?
“I fed my kitten.” This sentence is ambiguous. “I fed my kitten tuna.” “I fed my kitten to a mountain lion.”
One can feed a kitten (the reflexive sense: feed some item to that kitten), or one can feed the kitten to an animal.
The adjective is derived from the non-reflexive verb in this case, but cannot both the verb and the adjective hold both meanings, depending on whether or not context makes them reflexive?
Other languages routinely mark the difference between reflexive and non-reflexive verbs.
I’m going to grant that my use of confused was mistaken and just rephrase: Physicists have a clear theory of sound. So why can’t we just say Barry is wrong?
He’d be wrong if he was talking about what physicists talk about when they refer to sound. He’d not be wrong if he was talking about what lots of other people talk about when they refer to “sound”.
“Sound” is a word that in our language circumscribes two different categories of phenomena—the acoustic vibration (that doesn’t require a listener), and the qualia of the sense of hearing (that does require a listener). In the circumstances of the English language the two meanings use the same word. That doesn’t necessitate for one meaning to be valid and the other meaning to be invalid. They’re both valid, they’re just different.
If I say “you have the right to bear arms” I mean a different thing by the word ‘arms’ than if I say “human arms are longer than monkey arms”, but that doesn’t make one meaning of the word ‘arms’ wrong and the other right.
The analogy I’ve always appreciated was that my map has one pixel for both my apartment and my neighbor’s. So why do they get mad when I go through the window and shower there? It’s mine too, just look at the map, sheesh!
Where do definitions come from?
Usage. Dave interprets a sign from Jenny as referring to something, then he tries using the same sign to refer to the same thing, and if that usage of the sign is easily understood it tends to spread like that. The dictionary definition just records the common usages that have developed in the population.
For instance, how does the alien know what Takahiro means when he extends his index finger toward earth in this Japanese commercial? The alien just assumes it means he can find more chocolate bars on planet earth. If the alien gets to earth and finds more chocolate, he(?) is probably going to decide that his interpretation of the sign is at least somewhat reliable, and update for future interactions with humans.
I’d agree that’s generally how it works. I apologize, I probably should have said something like “Where do you think definitions come from?”, I was trying to figure out CharlesR’s thought process re: physicalists, above.
My problem with Barry was that he wanted to include the words ‘perception of’ in his definition of sound but had different rules when talking about light.
That was yesterday. I’ve updated. I’ll write more when I’ve had a chance to clarify my thoughts.
Okay, thanks!
How could you endorse the first part without endorsing the second part? Doesn’t the first part already include the second part?
After all, it says “within the range of hearing and of a level sufficiently strong to be heard”. What could that mean if not “sufficient to generate the sensation stimulated in organs of hearing by such vibrations”?
This is the part I endorse.
“Sound is a mechanical wave that is an oscillation of pressure transmitted through a solid, liquid, or gas.”
It does not require the presence of a listener. Nor need it be in a certain range of frequencies. (That would just be a sound you cannot hear.)
What I am saying is, when Barry replies as he does, why don’t we just say, “You are confused about what is and is not sound. Go ask the physicists, ‘What is sound?’ and then we can continue this conversation, or if you don’t want to bother, you can take my word for it.”
When physicists have a consensus view of a phenomenon, we shouldn’t argue over definitions. We should use their definitions, provisionally, of course.
No one thinks it makes sense to argue over what is or is not an atom. I don’t see why ‘sound’ should be in a different category.
I would need more detail to evaluate the modified scenario. As it stands, what I wrote seems trivially to survive the new challenge.